\documentclass{article} \usepackage{fullpage} \usepackage{parskip} \usepackage{titlesec} \usepackage{xcolor} \usepackage[colorlinks = true, linkcolor = blue, urlcolor = blue, citecolor = blue, anchorcolor = blue]{hyperref} \usepackage[natbibapa]{apacite} \usepackage{eso-pic} \AddToShipoutPictureBG{\AtPageLowerLeft{\includegraphics[scale=0.7]{powered-by-Authorea-watermark.png}}} \renewenvironment{abstract} {{\bfseries\noindent{\abstractname}\par\nobreak}\footnotesize} {\bigskip} \titlespacing{\section}{0pt}{*3}{*1} \titlespacing{\subsection}{0pt}{*2}{*0.5} \titlespacing{\subsubsection}{0pt}{*1.5}{0pt} \usepackage{authblk} \usepackage{graphicx} \usepackage[space]{grffile} \usepackage{latexsym} \usepackage{textcomp} \usepackage{longtable} \usepackage{tabulary} \usepackage{booktabs,array,multirow} \usepackage{amsfonts,amsmath,amssymb} \providecommand\citet{\cite} \providecommand\citep{\cite} \providecommand\citealt{\cite} % You can conditionalize code for latexml or normal latex using this. \newif\iflatexml\latexmlfalse \providecommand{\tightlist}{\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}% \AtBeginDocument{\DeclareGraphicsExtensions{.pdf,.PDF,.eps,.EPS,.png,.PNG,.tif,.TIF,.jpg,.JPG,.jpeg,.JPEG}} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \begin{document} \title{The ISME Journal Template\\} \author{Linus Pauling} \date{} \begingroup \let\center\flushleft \let\endcenter\endflushleft \maketitle \endgroup \subsection*{Preparation of Articles}\label{auto-label-subsection-804801} Please note that original articles must contain the following components.\\ Please see below for further details.\\ \begin{itemize} \tightlist \item Cover letter\\ \item Title page (excluding acknowledgements)\\ \item Abstract\\ \item Introduction\\ \item Materials (or Subjects) and Methods\\ \item Results\\ \item Discussion\\ \item Acknowledgements\\ \item Conflict of Interest\\ \item References\\ \item Figure legends\\ \item Tables\\ \item Figures\\ \end{itemize} Reports of clinical trials must adhere to the registration and reporting requirements listed in the Editorial Policies.\\ \textbf{Cover Letter:}\\ The uploaded covering letter must state the material is original research, has not been previously published and has not been submitted for publication elsewhere while under consideration. If the manuscript has been previously considered for publication in another journal, please include the previous reviewer comments, to help expedite the decision by the Editorial team. Please include a Conflict of Interest statement.\\ \textbf{Title Page:}\\ The title page should bear the title of the paper, the full names of all the authors and their affiliations, together with the name, full postal address, telephone and fax numbers and e-mail address of the author to whom correspondence and offprint requests are to be sent (this information is also asked for on the electronic submission form). The title page must also contain a Conflict of Interest statement (see Editorial Policy section).\\ \begin{itemize} \tightlist \item The title should be brief, informative, of 150 characters or less and should not make a statement or conclusion.\\ \item The running title should consist of no more than 50 letters and spaces. 
It should be as brief as possible, convey the essential message of the paper and contain no abbreviations.\\ \item Authors should disclose the sources of any support for the work, received in the form of grants and/or equipment and drugs.\\ \item If authors regard it as essential to indicate that two or more co-authors are equal in status, they may be identified by an asterisk symbol with the caption `These authors contributed equally to this work' immediately under the address list.\\ \end{itemize} \textbf{Subject Categories:}\\ The Subject Categories are used to structure the current and archived online content of The ISME Journal, and to help readers interested in particular areas of microbial ecology find relevant information more easily. Subject Categories are also indicated in the table of contents and on the title page of the published article. Authors should suggest an appropriate Subject Category for the submitted manuscript. One category may be selected from the following list:\\ \begin{itemize} \tightlist \item Microbial population and community ecology\\ \item Microbe-microbe and microbe-host interactions\\ \item Evolutionary genetics\\ \item Integrated genomics and post-genomics approaches in microbial ecology\\ \item Microbial engineering\\ \item Geomicrobiology and microbial contributions to geochemical cycles\\ \item Microbial ecology and functional diversity of natural habitats\\ \item Microbial ecosystem impacts\\ \end{itemize} \textbf{Abstract:}\\ Original Articles must be prepared with an unstructured abstract designed to summarise the essential features of the paper in a logical and concise sequence.\\ \textbf{Materials/Subjects and Methods:}\\ This section should contain sufficient detail so that all experimental procedures can be reproduced, and should include references. Methods that have been published in detail elsewhere, however, should not be described again in detail. Authors should provide the name of the manufacturer and its location for any specifically named medical equipment and instruments, and all drugs should be identified by their pharmaceutical names, and by their trade name if relevant.\\ \textbf{Results and Discussion:}\\ The Results section should briefly present the experimental data in text, tables or figures. Tables and figures should not be described extensively in the text. The discussion should focus on the interpretation and the significance of the findings with concise objective comments that describe their relation to other work in the area. It should not repeat information in the results. The final paragraph should highlight the main conclusion(s), and provide some indication of the direction future research should take.\\ \textbf{Acknowledgements:}\\ These should be brief, and should include sources of support including sponsorship (e.g. university, charity, commercial organisation) and sources of material (e.g. novel drugs) not available commercially.\\ \textbf{Conflict of Interest:}\\ Authors must declare whether or not there are any competing financial interests in relation to the work described. This information must be included at this stage and will be published as part of the paper. Conflict of interest should be noted in the cover letter and also on the title page. Please see the Conflict of Interest documentation in the Editorial Policy section for detailed information.\\ \textbf{References:}\\ Only papers directly related to the article should be cited. Exhaustive lists should be avoided. 
References should follow the Harvard format. In the text of the manuscript, a reference should be cited by author and year of publication, e.g. (Bailey \& Kowalchuk, 2006) and (Heidelberg et al, 1994), and listed at the end of the paper in alphabetical order of first author. References should be listed and journal titles abbreviated according to the style used by Index Medicus; examples are given below.\\ All authors should be listed for papers with up to six authors; for papers with more than six authors, the first six only should be listed, followed by et al. Abbreviations for titles of medical periodicals should conform to those used in the latest edition of Index Medicus. The first and last page numbers for each reference should be provided. Abstracts and letters must be identified as such. Papers in press may be included in the list of references.\\ Personal communications must be allocated a number and included in the list of references in the usual way or simply referred to in the text; the authors may choose which method to use. In either case authors must obtain permission from the individual concerned to quote his/her unpublished work.\\ Examples:\\ Journal article: Cho JC, Kim MW, Lee DH, Kim SJ. (1997). Response of bacterial communities to changes in composition of extracellular organic carbon from phytoplankton in Daechung reservoir (Korea). Arch Hydrobiol 138:559--576.\\ Journal article, e-pub ahead of print: Eng-Kiat L, Bowles DJ. A class of plant glycosyltransferases involved in cellular homeostasis. EMBO J 2004; e-pub ahead of print 8 July 2004, doi: 10.1038/sj.emboj.7600295.\\ Journal article, in press: Lim E-K, Ashford DA, Hou B, Jackson RG, Bowles DJ. (2004). Arabidopsis glycosyltransferases as biocatalysts in fermentation for regioselective synthesis of diverse quercetin glucosides. Biotech Bioeng (in press).\\ Complete book: Sambrook J, Fritsch E, Maniatis T. (1989). Molecular Cloning: a Laboratory Manual. Cold Spring Harbor Press: New York.\\ Chapter in book: Zinder, SH. (1998). Methanogens. In: Burlage, RS (ed). Techniques in Microbial Ecology. Oxford University Press: Oxford, pp 113--136.\\ \textbf{Tables:}\\ Tables should only be used to present essential data; they should not duplicate what is written in the text. It is imperative that any tables used are editable, ideally presented in Excel. Each must be uploaded as a separate workbook with a title or caption and be clearly labelled sequentially. Please make sure each table is cited within the text and in the correct order, e.g. (Table 3). Please save the files with extensions .xls / .xlsx / .ods / or .doc or .docx. Please ensure that you provide a `flat' file, with single values in each cell with no macros or links to other workbooks or worksheets and no calculations or functions.\\ \textbf{Figures:}\\ Figures and images should be labelled sequentially and cited in the text. Figures should not be embedded within the text but rather uploaded as separate files. Detailed guidelines for submitting artwork can be found by downloading our\\ Artwork Guidelines. The use of three-dimensional histograms is strongly discouraged when the addition of the third dimension gives no extra information.\\ \textbf{Artwork Guidelines:}\\ Detailed guidelines for submitting artwork can be found by downloading the guidelines PDF. Using the guidelines, please submit production quality artwork with your initial online submission. 
If you have followed the guidelines, we will not require the artwork to be resubmitted following the peer-review process if your paper is accepted for publication.\\ \textbf{Colour on the web:}\\ Authors who wish their articles to have FREE colour figures on the web (only available in the HTML (full text) version of manuscripts) must supply separate files in the following format. These files should be submitted as supplementary information and authors are asked to mention they would like colour figures on the web in their submission letter.\\ \textbf{Reuse of Display Items:}\\ See the Editorial Policy section for information on using previously published tables or figures.\\ \textbf{Standard abbreviations:}\\ Because the majority of readers will have experience in microbial ecology, the journal will accept papers which use certain standard abbreviations, without definition in the summary or in the text. Non-standard abbreviations should be defined in full at their first usage in the Summary and again at the first usage in the text, in the conventional manner. If a term is used 1--4 times in the text, it should be defined in full throughout the text and not abbreviated.\\ \textbf{Supplementary Information:}\\ Supplementary information (SI) is peer-reviewed material directly relevant to the conclusion of an article that cannot be included in the printed version owing to space or format constraints. The article must be complete and self-explanatory without the SI, which is posted on the journal's website and linked to the article. SI may consist of data files, graphics, movies or extensive tables. Please see our Artwork Guidelines for information on accepted file types. Authors should submit supplementary information files in the FINAL format as they are not edited, typeset or changed, and will appear online exactly as submitted. When submitting SI, authors are required to:\\ \begin{itemize} \tightlist \item Include a text summary (no more than 50 words) to describe the contents of each file.\\ \item Identify the types of files (file formats) submitted.\\ \item Include the text ``Supplementary information is available at (journal name)'s website'' at the end of the article and before the references.\\ \end{itemize} \section*{Results}\label{results} This section is only included in papers that rely on primary research; it catalogues the results of the experiment. The results should be presented in a clear and unbiased way. Most results sections will contain \href{http://authorea.com}{links}~as well as citations~\hyperref[csl:1]{[1]}~and equations such as~\(e^{i\pi}+1=0\).\\ \section*{Conclusion}\label{auto-label-section-853974} The conclusion should reinforce the major claims or interpretation in a way that is not mere summary. The writer should try to indicate the significance of the major claim/interpretation beyond the scope of the paper but within the parameters of the field. The writer might also present complications the study illustrates or suggest further research the study indicates is necessary. \selectlanguage{english} \FloatBarrier \section*{References}\sloppy \phantomsection \label{csl:1}1. Einstein A. {Näherungsweise Integration der Feldgleichungen der Gravitation}. In: \textit{Albert Einstein: Akademie-Vorträge}. 1916. Wiley-Blackwell, pp 99–108. \end{document}
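As an illustration of the author--year reference style described above, the sketch below shows one way the journal's own Cho et al. (1997) example could be typed in a plain LaTeX document (the template above maps \citep to \cite). The key \texttt{cho1997} and the use of \texttt{thebibliography} are illustrative assumptions only, and the exact rendering depends on the citation package actually loaded (the template loads \texttt{apacite}).

% Illustrative sketch, not part of the official template:
In-text citation: \citep{cho1997}.

\begin{thebibliography}{9}
\bibitem[Cho et al., 1997]{cho1997}
Cho JC, Kim MW, Lee DH, Kim SJ. (1997). Response of bacterial communities to
changes in composition of extracellular organic carbon from phytoplankton in
Daechung reservoir (Korea). \textit{Arch Hydrobiol} 138: 559--576.
\end{thebibliography}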
{ "alphanum_fraction": 0.7892285298, "avg_line_length": 37.2357723577, "ext": "tex", "hexsha": "1b84deee775911f5623de18bb367664e2d22db49", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "58c4e1db3c32ec467a6d62de33cc0673ab7a1e94", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ricardoi/templates_latex", "max_forks_repo_path": "ISME_template/The ISME Journal Template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58c4e1db3c32ec467a6d62de33cc0673ab7a1e94", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ricardoi/templates_latex", "max_issues_repo_path": "ISME_template/The ISME Journal Template.tex", "max_line_length": 177, "max_stars_count": null, "max_stars_repo_head_hexsha": "58c4e1db3c32ec467a6d62de33cc0673ab7a1e94", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ricardoi/templates_latex", "max_stars_repo_path": "ISME_template/The ISME Journal Template.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3288, "size": 13740 }
\chapter{Useful numbers} The table below gives some useful physical values for parameters often used in modelling. \begin{center} \begin{longtable}{lll} \hline Definition & Symbol & Value\\ \hline \endfirsthead % \multicolumn{2}{c}{{\tablename} -- Continued} \\[0.5ex] \hline Definition & Symbol & Value and units\\ \hline \endhead %This is the footer for all pages except the last page of the table... \\[0.5ex] \multicolumn{2}{l}{{Continued on Next Page\ldots}} \\ \endfoot %This is the footer for the last page of the table... \hline \endlastfoot % Radius of Earth (at equator) & $R_E^{eq}$ & $\m[6.3781\times 10^6]$\\ Radius of Earth (at pole) & $R_E^{p}$ & $\m[6.3568\times 10^6]$\\ Radius of Earth (average value) & $R_E^{av}$ & $\m[6.371\times 10^6]$\\ Mass of Earth & $M_E$ & $\kg[5.9742\times 10^{24}]$\\ Mass of Moon & $M_M$ & $\kg[7.36\times 10^{22}]$\\ Mass of Sun & $M_S$ & $\kg[1.98892\times 10^{30}]$\\ Earth's rotation rate (based on sidereal day) & $\Omega$ & $\rads[7.2921\times 10^{-5}]$\\ \end{longtable} \end{center}
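The table above relies on unit macros (\m, \kg, \rads) whose definitions live elsewhere in the manual and are not shown in this excerpt. A minimal stand-in, assuming each macro takes the numeric value as an optional argument and appends the SI unit, could look like the sketch below; the manual's real definitions may differ.

% Minimal stand-in unit macros (assumption: the manual defines these elsewhere).
\providecommand{\m}[1][]{\ensuremath{#1\ \mathrm{m}}}               % metres
\providecommand{\kg}[1][]{\ensuremath{#1\ \mathrm{kg}}}             % kilograms
\providecommand{\rads}[1][]{\ensuremath{#1\ \mathrm{rad\,s^{-1}}}}  % radians per second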
{ "alphanum_fraction": 0.548566879, "avg_line_length": 36.9411764706, "ext": "tex", "hexsha": "2e776d8f37cfaeaef2144a8cf0676bbb79e74ede", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-10-28T17:16:31.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-21T22:50:19.000Z", "max_forks_repo_head_hexsha": "cfcba990d52ccf535171cf54c0a91b184db6f276", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "msc-acse/acse-9-independent-research-project-Wade003", "max_forks_repo_path": "software/multifluids_icferst/manual/useful_numbers.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cfcba990d52ccf535171cf54c0a91b184db6f276", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "msc-acse/acse-9-independent-research-project-Wade003", "max_issues_repo_path": "software/multifluids_icferst/manual/useful_numbers.tex", "max_line_length": 98, "max_stars_count": 2, "max_stars_repo_head_hexsha": "cfcba990d52ccf535171cf54c0a91b184db6f276", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "msc-acse/acse-9-independent-research-project-Wade003", "max_stars_repo_path": "software/multifluids_icferst/manual/useful_numbers.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-11T03:08:38.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-11T02:39:46.000Z", "num_tokens": 417, "size": 1256 }
\chapter{Integrating CCPP with a host model} \label{chap_hostmodel} \setlength{\parskip}{12pt} %\label{section: addhostmodel} This chapter describes the process of connecting a host model with the pool of CCPP physics schemes through the CCPP framework. This work can be split into several distinct steps outlined in the following sections. \section{Checking variable requirements on host model side} The first step consists of making sure that the necessary variables for running the CCPP physics schemes are provided by the host model. A list of all variables required for the current pool of physics can be found in \execout{ccpp-framework/doc/DevelopersGuide/CCPP\_VARIABLES\_XYZ.pdf} (\execout{XYZ}: SCM, FV3). In case a required variable is not provided by the host model, there are several options: \begin{itemize} \item If a particular variable is only required by schemes in the pool that will not get used, these schemes can be commented out in the ccpp prebuild config (see Sect.~\ref{sec_addscheme}). \item If a variable can be calculated from existing variables in the model, an interstitial scheme (usually called \execsub{scheme\_name\_pre}) can be created that calculates the missing variable. However, the memory for this variable must be allocated on the host model side (i.\,e. the variable must be defined but not initialized in the host model). Another interstitial scheme (usually called \execsub{scheme\_name\_post}) might be required to update variables used by the host model with the results from the new scheme. At present, adding interstitial schemes should be done in cooperation with the GMTB Help Desk (\url{[email protected]}). \item In some cases, the declaration and calculation of the missing variable can be placed entirely inside the host model. Please consult with the GMTB Help Desk. \end{itemize} At present, only two types of variable definitions are supported by the CCPP framework: \begin{itemize} \item Standard Fortran variables (\execout{character}, \execout{integer}, \execout{logical}, \execout{real}) defined in a module or in the main program. For \execout{character} variables, a fixed length is required. All others can have a \execout{kind} attribute of a kind type defined by the host model. \item Derived data types defined in a module or the main program. \end{itemize} With the CCPP, it is possible to refer to components of derived types or to slices of arrays in the metadata table (see Listing~\ref{lst_metadata_table_hostmodel} in the following section for an example). \section{Adding metadata variable tables for the host model} In order to establish the link between host model variables and physics scheme variables, the host model must provide metadata tables similar to those presented in Sect.~\ref{sec_writescheme}. The host model can have multiple metadata tables or just one, but for each variable required by the pool of CCPP physics schemes, one and only one entry must exist on the host model side. The connection between a variable in the host model and in the physics scheme is made through its \execout{standard\_name}. The following requirements must be met when defining variables in the host model metadata tables: \begin{itemize} \item The \execout{standard\_name} must match that of the target variable in the physics scheme. \item The type, kind, shape and size of the variable (as defined in the host model Fortran code) must match that of the target variable. 
\item The attributes \execout{units}, \execout{rank}, \execout{type} and \execout{kind} in the host model metadata table must match those in the physics scheme table. \item The attributes \execout{optional} and \execout{intent} must be set to \execout{F} and \execout{none}, respectively. \item The \execout{local\_name} of the variable must be set to the name the host model cap (see Sect.~\ref{sec_hostmodel_cap}) uses to refer to the variable. \item The name of the metadata table must match the name of the module or program in which the variable is defined, or the name of the derived data type if the variable is a component of this type. \item For metadata tables describing module variables, the table must be placed inside the module. \item For metadata tables describing components of derived data types, the table must be placed immediately before the type definition. \end{itemize} Listing~\ref{lst_metadata_table_hostmodel} provides examples for host model metadata tables. \begin{sidewaysfigure} \begin{lstlisting}[language=Fortran, %basicstyle=\scriptsize\fontfamily{qcr}\fontshape{n}\fontseries{l}\selectfont basicstyle=\scriptsize\ttfamily, label=lst_metadata_table_hostmodel, caption=Example metadata table for a host model] module example_vardefs implicit none !> \section arg_table_example_vardefs !! | local_name | standard_name | long_name | units | rank | type | kind | intent | optional | !! |------------|---------------|-----------|-------|------|-----------|--------|--------|----------| !! | ex_int | example_int | ex. int | none | 0 | integer | | none | F | !! | ex_real1 | example_real1 | ex. real | m | 2 | real | kind=8 | none | F | !! | errmsg | error_message | err. msg. | none | 0 | character | len=64 | none | F | !! | errflg | error_flag | err. flg. | flag | 0 | logical | | none | F | !! integer, parameter :: r15 = selected_real_kind(15) integer :: ex_int real(kind=8), dimension(:,:) :: ex_real1 character(len=64) :: errmsg logical :: errflg ! Derived data types !> \section arg_table_example_ddt !! | local_name | standard_name | long_name | units | rank | type | kind | intent | optional | !! |------------|---------------|-----------|-------|------|-----------|--------|--------|----------| !! | ext%l | example_flag | ex. flag | flag | 0 | logical | | none | F | !! | ext%r | example_real3 | ex. real | kg | 2 | real | r15 | none | F | !! | ext%r(:,1) | example_slice | ex. slice | kg | 1 | real | r15 | none | F | !! type example_ddt logical :: l real, dimension(:,:) :: r end type example_ddt type(example_ddt) :: ext end module example_vardefs \end{lstlisting} \end{sidewaysfigure} \section{Writing a host model cap for the CCPP} \label{sec_hostmodel_cap} The purpose of the host model cap is to abstract away the communication between the host model and the CCPP physics schemes. While CCPP calls can be placed directly inside the host model code, it is recommended to separate the cap in its own module for clarity and simplicity. The host model cap is responsible for: \begin{description} \item[\textbf{Allocating memory for variables needed by physics.}] This is only required if the variables are not allocated by the host model, for example for interstitial variables used exclusively for communication between the physics schemes. \item[\textbf{Allocating the \execout{cdata} structure.}] The \execout{cdata} structure handles the data exchange between the host model and the physics schemes and must be defined in the host model cap or another suitable location in the host model. 
The \execout{cdata} variable must be persistent in memory. Note that \execout{cdata} is not restricted to being a scalar but can be a multi-dimensional array, depending on the needs of the host model. For example, a model that uses a 1-dimensional array of blocks for better cache-reuse may require \execout{cdata} to be a 1-dimensional array of the same size. Another example of a multi-dimensional array of \execout{cdata} is in the GMTB SCM, which uses a 1-dimensional \execout{cdata} array for $N$ independent columns. \item[\textbf{Calling the suite initialization subroutine.}] The suite initialization subroutine takes two arguments: the name of the runtime suite definition file (of type \execout{character}) and the name of the \execout{cdata} variable that must be allocated at this point. \item[\textbf{Populating the \execout{cdata} structure.}] Each variable required by the physics schemes must be added to the \execout{cdata} structure on the host model side. This is an automated task and is accomplished by inserting a preprocessor directive \begin{lstlisting}[language=Fortran] #include ccpp_modules.inc \end{lstlisting} at the top of the cap (before \execout{implicit none}) to load the required modules (e.\,g. module \execout{example\_vardefs} in listing~\ref{lst_metadata_table_hostmodel}), and a second preprocessor directive \begin{lstlisting}[language=Fortran] #include ccpp_fields.inc \end{lstlisting} after the \execout{cdata} variable and the variables required by the physics schemes are allocated. \emph{Note.} The current implementations of CCPP in SCM and FV3 require a few manual additions of variables to the \execout{cdata} structure to complete the CCPP suite initialization step. These are special cases that will be addressed in the future. \item[\textbf{Providing interfaces to call CCPP for the host model.}] The cap must provide functions or subroutines that can be called at the appropriate places in the host model (dycore) time integration loop and that internally call \execout{ccpp\_run} and handle any errors returned. \end{description} Listing~\ref{lst_host_cap_template} contains a simple template of a host model cap for CCPP, which can also be found in \execout{ccpp-framework/doc/DevelopersGuide/host\_cap\_template.F90}. \begin{figure} \lstinputlisting[language=Fortran, %basicstyle=\scriptsize\fontfamily{qcr}\fontshape{n}\fontseries{l}\selectfont basicstyle=\scriptsize\ttfamily, label=lst_host_cap_template, caption=Fortran template for a CCPP host model cap]{./host_cap_template.F90} \end{figure} \section{Configuring and running the CCPP prebuild script} \label{sec_ccpp_prebuild_config} The CCPP prebuild script \execout{ccpp-framework/scripts/ccpp\_prebuild.py} is the central piece of code that connects the host model with the CCPP physics schemes (see Figure~\ref{fig_ccpp_design_with_ccpp_prebuild}). This script must be run before compiling the CCPP physics library, the CCPP framework and the host model cap. The CCPP prebuild script automates several tasks based on the information collected from the metadata tables on the host model side and from the individual physics schemes: \begin{itemize} \tightlist \item Compiles a list of variables required to run all schemes in the CCPP physics pool. \item Compiles a list of variables provided by the host model. 
\item Matches these variables by their \execout{standard\_name}, checks for missing variables and mismatches of their attributes (e.\,g., units, rank, type, kind) and processes information on optional variables (see also Sect.~\ref{sec_writescheme}). \item Creates Fortran code (\execout{ccpp\_modules.inc}, \execout{ccpp\_fields.inc}) that stores pointers to the host model variables in the \execout{cdata} structure. \item Auto-generates the caps for the physics schemes. \item Populates makefiles with schemes and caps. \end{itemize} \begin{figure}[h] \centerline{\includegraphics[width=0.95\textwidth]{./images/ccpp_design_with_ccpp_prebuild.pdf}} \caption{Role and position of the CCPP prebuild script and the \execout{cdata} structure in the software architecture of an atmospheric modeling system.}\label{fig_ccpp_design_with_ccpp_prebuild} \end{figure} In order to connect CCPP with a host model \execsub{XYZ}, a Python-based configuration file for this model must be created in the directory \execout{ccpp-framework/scripts}, for example by copying an existing configuration file in this directory: \begin{lstlisting}[language=bash] cp ccpp_prebuild_config_FV3.py ccpp_prebuild_config_XYZ.py \end{lstlisting} and adding \execout{XYZ} to the \execout{HOST\_MODELS} list in the section \execout{User definitions} in \execout{ccpp\_prebuild.py}. The configuration in \execout{ccpp\_prebuild\_config\_XYZ.py} depends largely on (a) the directory structure of the host model itself, (b) where the \execout{ccpp-framework} and the \execout{ccpp-physics} directories are located relative to the directory structure of the host model, and (c) from which directory the \execout{ccpp\_prebuild.py} script is executed before/during the build process (this is referred to as \execout{basedir} in \execout{ccpp\_prebuild\_config\_XYZ.py}). Here, it is assumed that both \execout{ccpp-framework} and \execout{ccpp-physics} are located in the top-level directory of the host model, and that \execout{ccpp\_prebuild.py} is executed from the same top-level directory (recommended setup). The following variables need to be configured in \execout{ccpp\_prebuild\_config\_XYZ.py}, here shown for the example of SCM: \begin{lstlisting}[language=python] # Add all files with metadata tables on the host model side, # relative to basedir = top-level directory of host model VARIABLE_DEFINITION_FILES = [ 'scm/src/gmtb_scm_type_defs.f90', 'scm/src/gmtb_scm_physical_constants.f90' ] # Add all physics scheme files relative to basedir SCHEME_FILES = [ 'ccpp-physics/GFS_layer/GFS_initialize_scm.F90', 'ccpp-physics/physics/GFS_DCNV_generic.f90', ... 'ccpp-physics/physics/sfc_sice.f', ] # Auto-generated makefile snippet that contains all schemes SCHEMES_MAKEFILE = 'ccpp-physics/CCPP_SCHEMES.mk' # CCPP host cap in which to insert the ccpp_field_add statements; # determines the directory to place ccpp_{modules,fields}.inc TARGET_FILES = [ 'scm/src/gmtb_scm.f90', ] # Auto-generated makefile snippet that contains all caps CAPS_MAKEFILE = 'ccpp-physics/CCPP_CAPS.mk' # Directory where to put all auto-generated physics caps CAPS_DIR = 'ccpp-physics/physics' # Optional arguments - only required for schemes that use # optional arguments. ccpp_prebuild.py will throw an exception # if it encounters a scheme subroutine with optional arguments # if no entry is made here. Possible values are: 'all', 'none', # or a list of standard_names: [ 'var1', 'var3' ]. 
OPTIONAL_ARGUMENTS = { #'subroutine_name_1' : 'all', #'subroutine_name_2' : 'none', #'subroutine_name_3' : [ 'var1', 'var2'], } # HTML document containing the model-defined CCPP variables HTML_VARTABLE_FILE = 'ccpp-physics/CCPP_VARIABLES.html' # LaTeX document containing the provided vs requested CCPP variables LATEX_VARTABLE_FILE = 'ccpp-framework/doc/DevelopersGuide/CCPP_VARIABLES.tex' ########################################### # Template code to generate include files # ########################################### # Name of the CCPP data structure in the host model cap; # in the case of SCM, this is a vector with loop index i CCPP_DATA_STRUCTURE = 'cdata(i)' # Modules to load for auto-generated ccpp_field_add code # in the host model cap (e.g. error handling) MODULE_USE_TEMPLATE_HOST_CAP = \ ''' use ccpp_errors, only: ccpp_error ''' # Modules to load for auto-generated ccpp_field_get code # in the physics scheme cap (e.g. derived data types) MODULE_USE_TEMPLATE_SCHEME_CAP = \ ''' use machine, only: kind_phys use GFS_typedefs, only: GFS_statein_type, ... ''' \end{lstlisting} Once the configuration in \execout{ccpp\_prebuild\_config\_XYZ.py} is complete, run \begin{lstlisting}[language=bash] ./ccpp-framework/scripts/ccpp_prebuild.py --model=XYZ [--debug] \end{lstlisting} from the top-level directory. Without the debugging flag, the output should look similar to \begin{lstlisting}[language=bash,basicstyle=\scriptsize\ttfamily] INFO: Logging level set to INFO INFO: Parsing metadata tables for variables provided by host model ... INFO: Parsed variable definition tables in module gmtb_scm_type_defs INFO: Parsed variable definition tables in module gmtb_scm_physical_constants INFO: Metadata table for model SCM written to ccpp-physics/CCPP_VARIABLES.html INFO: Parsing metadata tables in physics scheme files ... INFO: Parsed tables in scheme GFS_initialize_scm INFO: Parsed tables in scheme GFS_DCNV_generic_pre ... INFO: Parsed tables in scheme sfc_sice INFO: Checking optional arguments in physics schemes ... INFO: Metadata table for model SCM written to ccpp-framework/doc/DevelopersGuide/CCPP_VARIABLES.tex INFO: Comparing metadata for requested and provided variables ... INFO: Generating module use statements ... INFO: Generated module use statements for 3 module(s) INFO: Generating ccpp_field_add statements ... INFO: Generated ccpp_field_add statements for 394 variable(s) INFO: Generating include files for host model cap scm/src/gmtb_scm.f90 ... INFO: Generated module-use include file scm/src/ccpp_modules.inc INFO: Generated fields-add include file scm/src/ccpp_fields.inc INFO: Generating schemes makefile snippet ... INFO: Added 38 schemes to makefile ccpp-physics/CCPP_SCHEMES.mk INFO: Generating caps makefile snippet ... INFO: Added 66 auto-generated caps to makefile ccpp-physics/CCPP_CAPS.mk INFO: CCPP prebuild step completed successfully. \end{lstlisting} \section{Building the CCPP physics library and software framework} \label{sec_ccpp_build} \subsection{Preface -- word of caution} As of now, the CCPP physics library and software framework are built as part of the host model (SCM, FV3GFS). The SCM uses a cmake build system for both the CCPP physics library and the CCPP software framework, while FV3GFS employs a traditional make build system for the CCPP physics library and a cmake build system for the CCPP software framework. 
Accordingly, \execout{CMakeLists.txt} files in the \execout{ccpp-physics} directory tree refer to an SCM build, while \execout{makefile} files refer to an FV3GFS build. Work is underway to provide a universal build system based on cmake that can be used with all host models. It should be noted that the current build systems do not make full use of the makefile snippets auto-generated by \execout{ccpp\_prebuild.py} (cf. previous section). The SCM uses hardcoded lists of physics schemes and auto-generated physics scheme caps, while FV3GFS makes use of the auto-generated list of physics scheme caps but uses a hardcoded list of physics scheme files. This is partly because the script \execout{ccpp\_prebuild.py} currently produces only traditional \execout{makefile} snippets (e.\,g. \execout{CCPP\_SCHEMES.mk} and \execout{CCPP\_CAPS.mk}). Work is underway to create include files suitable for cmake for both schemes and caps, and to integrate these into the build system. \subsection{Build steps}\label{sec_ccpp_build_steps} The instructions laid out below to build the CCPP physics library and CCPP software framework independently of the host model make use of the cmake build system, which is also used with the GMTB single column model SCM. Several steps are required in the following order: \begin{description} \item[\textbf{Recommended directory structure.}] As mentioned in Section~\ref{sec_ccpp_prebuild_config}, we recommend placing the two directories (repositories) \execout{ccpp-framework} and \execout{ccpp-physics} in the top-level directory of the host model, and adapting the CCPP prebuild config such that it can be run from the top-level directory. \item[\textbf{Set environment variables.}] In general, the CCPP requires the \execout{CC} and \execout{FC} variables to point to the correct compilers. If threading (OpenMP) will be used inside the CCPP physics or the host model calling the CCPP physics (see below), OpenMP-capable compilers must be used here. The setup scripts for SCM in \execout{scm/etc} provide useful examples for the correct environment settings (note that setting \execout{NETCDF} is not required for CCPP, but may be required for the host model). \item[\textbf{Configure and run \exec{ccpp\_prebuild.py}.}] This step is described in detail in Sect.~\ref{sec_ccpp_prebuild_config}. \item[\textbf{Build CCPP framework.}] The following steps outline a suggested way to build the CCPP framework: \begin{lstlisting}[language=bash] cd ccpp-framework mkdir build && cd build cmake -DCMAKE_INSTALL_PREFIX=$PWD .. # add -DOPENMP=1 before .. for OpenMP build # add -DCMAKE_BUILD_TYPE=Debug before .. for debug build make install # add VERBOSE=1 after install for verbose output \end{lstlisting} \item[\textbf{Update environment variables.}] The previous install step creates directories \execout{include} and \execout{lib} inside the build directory. These directories and the newly built library \execout{libccpp.so} need to be added to the environment variables \execout{FFLAGS} and \execout{LDFLAGS}, respectively (example for bash, assuming the current directory is still the above build directory): \begin{lstlisting}[language=bash] export FFLAGS="-I$PWD/include -I$PWD/src $FFLAGS" export LDFLAGS="-L$PWD/lib -lccpp" \end{lstlisting} \item[\textbf{Build CCPP physics library.}] Starting from the build directory \execout{ccpp-framework/build}: \begin{lstlisting}[language=bash] cd ../.. # back to top-level directory cd ccpp-physics mkdir build && cd build cmake .. # add -DOPENMP=1 before .. 
for OpenMP build make # add VERBOSE=1 after make for verbose output \end{lstlisting} \end{description} \subsection{Optional: Integration with host model build system} Following the steps outlined in Section~\ref{sec_ccpp_build_steps}, the include files and the library \execout{libccpp.so} that the host model needs to be compiled/linked against to call the CCPP physics through the CCPP framework are located in \execout{ccpp-framework/build/include} and \execout{ccpp-framework/build/lib}. Note that there is no need to link the host model to the CCPP physics library in \execout{ccpp-physics/build}, as long as it is in the search path of the dynamic loader of the OS (for example by adding the directory \execout{ccpp-physics/build} to the \execout{LD\_LIBRARY\_PATH} environment variable). This is because the CCPP physics library is loaded dynamically by the CCPP framework using the library name specified in the runtime suite definition file (see the GMTB Single Column Model Technical Guide v1.0, Chapter 6.1.3, \url{https://dtcenter.org/gmtb/users/ccpp/docs/}, for further information). Thus, setting the environment variables \execout{FFLAGS} and \execout{LDFLAGS} as in Sect.~\ref{sec_ccpp_build_steps} should be sufficient to compile the host model with its newly created host model cap (Sect.~\ref{sec_hostmodel_cap}) and connect to the CCPP library and framework. For a complete integration of the CCPP infrastructure and physics library build systems in the host model build system, users are referred to the existing implementations in the GMTB SCM.
{ "alphanum_fraction": 0.7606811417, "avg_line_length": 80.1543859649, "ext": "tex", "hexsha": "b0ca0b496e5bb9438cba6cb378b680b044451091", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "66f1a069b6b15748e08adbe940b8ceb9b39619ab", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "gold2718/ccpp-framework", "max_forks_repo_path": "doc/DevelopersGuide/chap_hostmodel.tex", "max_issues_count": 39, "max_issues_repo_head_hexsha": "66f1a069b6b15748e08adbe940b8ceb9b39619ab", "max_issues_repo_issues_event_max_datetime": "2021-09-03T16:57:43.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-25T21:50:33.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "gold2718/ccpp-framework", "max_issues_repo_path": "doc/DevelopersGuide/chap_hostmodel.tex", "max_line_length": 926, "max_stars_count": null, "max_stars_repo_head_hexsha": "66f1a069b6b15748e08adbe940b8ceb9b39619ab", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "gold2718/ccpp-framework", "max_stars_repo_path": "doc/DevelopersGuide/chap_hostmodel.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5560, "size": 22844 }
% LaTeX Article Template - customizing page format % % LaTeX document uses 10-point fonts by default. To use % 11-point or 12-point fonts, use \documentclass[11pt]{article} % or \documentclass[12pt]{article}. \documentclass[10pt,fleqn]{report} % Set left margin - The default is 1 inch, so the following % command sets a 1.25-inch left margin. \setlength{\oddsidemargin}{-0.0in} % Set width of the text - What is left will be the right margin. % In this case, right margin is 8.5in - 1.25in - 6in = 1.25in. \setlength{\textwidth}{6.5in} % Set top margin - The default is 1 inch, so the following % command sets a 0.75-inch top margin. \setlength{\topmargin}{-0.5in} % Set height of the text - What is left will be the bottom margin. % In this case, bottom margin is 11in - 0.75in - 9.5in = 0.75in \setlength{\textheight}{9.0in} \setlength{\parskip}{10pt} \setlength{\parindent}{0pt} \begin{document} \input{symbols} \pagestyle{myheadings} \markboth{Draft of \today}{Draft of \today} \title{Functions of Triangular Meshes} \author{ \sc John Alan McDonald } \date{\today} \maketitle \input{abstract} \tableofcontents %\listoffigures \section{Introduction} \pagestyle{myheadings} \markboth{Draft of \today}{Draft of \today} \input{introduction} \section{Notation and general results} \pagestyle{myheadings} \markboth{Draft of \today}{Draft of \today} \input{general} \section{Data Fitting} \pagestyle{myheadings} \markboth{Draft of \today}{Draft of \today} \input{data-fitting} \section{Registration} \pagestyle{myheadings} \markboth{Draft of \today}{Draft of \today} \input{registration} \section{Averaging} \pagestyle{myheadings} \markboth{Draft of \today}{Draft of \today} \input{averaging} \newpage {\small \bibliographystyle{plain} \bibliography{mesh} } \end{document}
{ "alphanum_fraction": 0.7387437465, "avg_line_length": 21.1647058824, "ext": "tex", "hexsha": "15345dc97f9ff9f53fe9b35af7e874db4f638693", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "palisades-lakes/les-elemens", "max_forks_repo_path": "doc/old/fotm/qpaper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "palisades-lakes/les-elemens", "max_issues_repo_path": "doc/old/fotm/qpaper.tex", "max_line_length": 66, "max_stars_count": null, "max_stars_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "palisades-lakes/les-elemens", "max_stars_repo_path": "doc/old/fotm/qpaper.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 568, "size": 1799 }
\documentclass[letterpaper, twoside, 12pt]{book} \usepackage{packet} \begin{document} \setcounter{chapter}{1} \chapter{Part 2.1: Sections 13.3-13.4} \setcounter{chapter}{13} \setcounter{section}{2} \section{Arc Length and Curvature} \begin{problem} Let $\vect{r}(t)=\<6t, t^3, 3t^2\>$. Use the lengths of the line segments connecting $\vect{r}(0)$, $\vect{r}(1)$, $\vect{r}(2)$, and $\vect{r}(3)$ to approximate the length of the curve from $t=0$ to $t=3$. \end{problem} \begin{solution} \end{solution} \begin{definition} Let $\harpvec{r}(t) = \<f(t),g(t),h(t)\>$ be a vector function. Then the \textbf{arclength} or \textbf{length} of the curve given by $\harpvec{r}(t)$ from $t=a$ to $t=b$ is \[ L = \int_a^b \left| \lim_{\Delta{t}\to0} \frac{\vect{r}(t+\Delta{t})-\vect{r}(t)}{\Delta{t}} \right| \dvar{t} = \int_a^b |\vect{r}'(t)| \dvar{t} \] \end{definition} \begin{problem} Find the length of the curve given by $\vect{r}(t)=\<6t, t^3, 3t^2\>$ from $t=0$ to $t=3$. (Hint: $9t^4+36t^2+36$ is a perfect square polynomial.) \end{problem} \begin{solution} \end{solution} \begin{definition} Let $s(t)$ be the \textbf{arclength function/parameter} representing the length of a curve from the point given by $\harpvec{r}(0)$ to the point given by $\harpvec{r}(t)$. (Assume $s(t)<0$ for $t<0$.) \end{definition} \begin{theorem} The arclength function $s(t)$ is given by the definite integral \[ s(t) = \int_0^t |\vect{r}'(\tau)| \dvar{\tau} \] \end{theorem} \begin{theorem} The derivative of the arclength function gives the lengths of the tangent vectors given by the derivative of the position function: \[ \frac{ds}{dt} = \left|\frac{d\vect{r}}{dt}\right| \] \end{theorem} \begin{problem} Compute $s(t)$ for $\vect{r}(t)=\<6t, t^3, 3t^2\>$, and use it to find the arclength parameter corresponding to $t=-2$. \end{problem} \begin{solution} \end{solution} \begin{problem} Find the length of an arc of the circular helix with vector equation $\vect{r}(t) = \<\cos(t),\sin(t),t\>$ from $(1,0,0)$ to $(1,0,2\pi)$. \end{problem} \begin{definition} The \textbf{unit tangent vector} $\vect{T}$ to a curve $\vect{r}$ is the direction of the derivative $\vect{r}'(t)=\frac{d\vect{r}}{dt}$. \end{definition} \begin{theorem} \[ \vect{T} = \frac{d\vect{r}/dt}{|d\vect{r}/dt|} = \frac{d\vect{r}}{ds} \] \end{theorem} \begin{problem} Find the unit tangent vector to the curve given by $\vect{r}(t)=\<3t^2,2t\>$ at the point where $t=-3$. \end{problem} \begin{solution} \end{solution} \begin{definition} The \textbf{curvature} $\kappa$ of a curve $C$ at a given point is the magnitude of the rate of change of $\vect{T}$ with respect to arclength $s$. \end{definition} \begin{theorem} \[ \kappa = \left| \frac{d\vect{T}}{ds} \right| = \left| \frac{1}{ds/dt} \frac{d\vect{T}}{dt} \right| = \frac{1}{|d\vect{r}/dt|} \left| \frac{d\vect{T}}{dt} \right| \] \end{theorem} \begin{theorem} An alternate formula for curvature is given by \[ \kappa = \frac{|\vect{r}'(t)\times\vect{r}''(t)|}{|\vect{r}'(t)|^3} \] \end{theorem} \begin{problem} Prove that the helix given by the vector equation $\vect{r}(t) = \<\cos(t),\sin(t),t\>$ has constant curvature. \end{problem} \begin{solution} \end{solution} \begin{problem} (OPTIONAL) Prove that the alternate formula for curvature is accurate by showing \[ \frac{1}{|d\vect{r}/dt|} \left| \frac{d\vect{T}}{dt} \right| = \frac{|\vect{r}'\times\vect{r}''|}{|\vect{r}'|^3} \] (Some of the solution has been provided.) 
\end{problem} \begin{solution} Begin by observing that $ \vect{r}' = \left|\frac{d\vect{r}}{dt}\right|\vect{T} = \frac{ds}{dt}\vect{T} $, and by the product rule it follows that $ \vect{r}'' = \frac{d^2s}{dt^2}\vect{T} + \frac{ds}{dt}\vect{T}' $. (...) % (Continue this argument by taking the cross-product of % $\vect{r}'$ and $\vect{r}''$, simplifying by using the fact that % $\vect{v}\times\vect{v}=\vect{0}$, then taking its magnitude % and simplifying using the fact that $|\vect{T}|=1$ and % $\vect{T},\vect{T}'$ are perpendicular (why?). % You should end up with % $\left(\frac{ds}{dt}\right)^2\left|\vect{T}'\right|$, which can % be used with $\frac{ds}{dt}=|\vect{r}'|$ to finish the proof.) \end{solution} \begin{definition} The \textbf{unit normal vector} $\vect{N}$ to a curve $\vect{r}$ is the direction of the derivative of the unit tangent vector $\vect{T}'(t)=\frac{d\vect{T}}{dt}$. (By definition, this vector points in the direction in which the curve is turning.) \end{definition} \begin{theorem} \[ \vect{N} = \frac{\vect{T}'}{|\vect{T}'|} \] \end{theorem} \begin{problem} Prove that $\vect{N}$ actually is normal to the curve by using a theorem from a previous section. (Hint: $|\vect{T}|=1$.) \end{problem} \begin{solution} \end{solution} \begin{problem} Plot the curve given by $\vect{r}(t)=\<\cos(2t),\sin(2t)\>$, along with $\vect{T},\vect{N}$ at the point where $t=\frac{\pi}{2}$. \end{problem} \begin{problem} Give formulas for $\vect{T},\vect{N}$ in terms of $t$ for the vector function \[\vect{r}(t) = \< \sqrt{2}\sin t,2\cos t,\sqrt{2}\sin t \>\] \end{problem} \begin{solution} \end{solution} \begin{definition} The \textbf{binormal vector} $\harpvec{B}$ is the direction normal to both $\harpvec{T}$ and $\harpvec{N}$ according to the right-hand rule. \end{definition} \begin{theorem} \[ \vect{B}=\vect{T}\times\vect{N} \] \end{theorem} \begin{problem} Prove that $\vect{T}\times\vect{N}$ is a unit vector. \end{problem} \begin{solution} \end{solution} \begin{problem} Given the following information about $\vect{r}(t)$ at a point, evaluate the binormal vector $\vect{B}$ and curvature $\kappa$ at that same point: \[\frac{d\vect{r}}{dt}=\<-3,0,3\sqrt{3}\>\] \[\frac{d\vect{T}}{dt}=\<-\sqrt{3},0,-1\>\] \[\vect{T}=\<-\frac{1}{2},0,\frac{\sqrt{3}}{2}\>\] \[\vect{N}=\<-\frac{\sqrt{3}}{2},0,-\frac{1}{2}\>\] \end{problem} \begin{solution} \end{solution} \begin{definition} A \textbf{right-handed frame} is a group of three unit vectors which are all normal to one another and satisfy the right-hand rule. \end{definition} \begin{example} $\veci,\vecj,\veck$ and $\vect T,\vect N,\vect B$ are examples of right-handed frames. \end{example} \begin{theorem} Any vector is a linear combination of the vectors in a right-handed frame. \end{theorem} \section{Motion in Space, Velocity, and Acceleration} \begin{definition} The \textbf{velocity} $\vect{v}(t)$ of a particle at time $t$ on a position function $\vect{r}(t)$ is its rate of change with respect to $t$. \end{definition} \begin{definition} The \textbf{speed} $|\vect{v}(t)|$ of a particle at time $t$ on a position function $\vect{r}(t)$ is the magnitude of its velocity. \end{definition} \begin{definition} The \textbf{direction} $\vect{T}(t)$ of a particle at time $t$ on a position function $\vect{r}(t)$ is the direction of its velocity. \end{definition} \begin{definition} The \textbf{acceleration} $\vect{a}(t)$ of a particle at time $t$ on a position function $\vect{r}(t)$ is the rate of change of its velocity with respect to $t$. 
\end{definition} \begin{theorem} \[ \vect{v}(t)=\vect{r}'(t) \] \[ |\vect{v}(t)|=|\vect{r}'(t)|=\frac{ds}{dt} \] \[ \vect{T}(t) = \frac{\vect v}{|\vect v|} \] \[ \vect{a}(t)=\vect{v}'(t)=\vect{r}''(t) \] \end{theorem} \begin{problem} Given a position function $\harpvec{r}(t) = \<t^3,t^2\>$, find its velocity, speed, and acceleration at $t = 1$. \end{problem} \begin{definition} \textbf{Ideal projectile motion} is an approximation of real-world motion assuming constant acceleration due to gravity in the $y$ direction and no acceleration in the $x$ direction: \[ \vect{a}(t) = \<0,-g\> \] \end{definition} \begin{theorem} The velocity and position functions for a particle with initial velocity $\vect{v}_0=\<v_{x,0},v_{y,0}\>$ and beginning at position $P_0=\<x_0,y_0\>$ assuming ideal projectile motion are: \[ \vect{v}(t) = \<v_{x,0},-gt+v_{y,0}\> \] \[ \vect{r}(t) = \left\<v_{x,0}t+x_0,-\frac{1}{2}gt^2+v_{y,0}t+y_0\right\> \] \end{theorem} \begin{problem} Assume ideal projectile motion and $g=10\frac{m}{s^2}$. What is the flight time of a projectile shot from the ground at an angle of $\pi/6$ with initial speed $100\frac{m}{s}$? \end{problem} \begin{solution} \end{solution} \begin{problem} Assume ideal projectile motion and $g=10\frac{m}{s^2}$. What must have been the initial speed of a projectile shot from the ground at an angle of $\pi/3$ if it traveled $60$ meters horizontally after $4$ seconds? \end{problem} \begin{solution} \end{solution} \end{document}
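As a brief worked illustration of the arclength and curvature formulas from Section 13.3 (an illustrative example separate from the packet's numbered problems, using the packet's own notation), consider the plane curve $\vect{r}(t)=\<t,t^2\>$, viewed as $\<t,t^2,0\>$ in space. Then
\[ \vect{r}'(t)=\<1,2t,0\>, \qquad |\vect{r}'(t)|=\sqrt{1+4t^2}, \qquad \vect{r}''(t)=\<0,2,0\>, \]
so the length from $t=0$ to $t=1$ is $\int_0^1 \sqrt{1+4t^2}\,dt$, and the alternate curvature formula gives
\[ \kappa(t)=\frac{|\vect{r}'\times\vect{r}''|}{|\vect{r}'|^3}=\frac{2}{(1+4t^2)^{3/2}}, \]
which equals $2$ at the vertex $t=0$ and decreases as $|t|$ grows, matching the intuition that the parabola straightens out away from its vertex.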
{ "alphanum_fraction": 0.5530043967, "avg_line_length": 27.3663101604, "ext": "tex", "hexsha": "6dc5342e99ac6007a6e8a8922e3937ba4b67dcec", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "StevenClontz/teaching-2015-spring", "max_forks_repo_path": "packet2_1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "StevenClontz/teaching-2015-spring", "max_issues_repo_path": "packet2_1.tex", "max_line_length": 81, "max_stars_count": null, "max_stars_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "StevenClontz/teaching-2015-spring", "max_stars_repo_path": "packet2_1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3223, "size": 10235 }
%% %% This is file `thesis-ex.tex', %% generated with the docstrip utility. %% %% The original source files were: %% %% uiucthesis2014.dtx (with options: `example') %% \def\fileversion{v2.25b} \def\filedate{2014/05/02} %% Package and Class "uiucthesis2014" for use with LaTeX2e. \documentclass[edeposit,fullpage]{uiucthesis2014} \begin{document} \title{Coffee Consumption of Graduate Students \\ Trying to Finish Dissertations} \author{Juan Valdez} \department{Food Science} \schools{B.A., University of Columbia, 1981\\ A.M., University of Illinois at Urbana-Champaign, 1986} \phdthesis \advisor{Java Jack} \degreeyear{1994} \committee{Professor Prof Uno, Chair\\Professor Prof Dos, Director of Research\\Assistant Professor Prof Tres\\Adjunct Professor Prof Quatro} \maketitle \frontmatter %% Create an abstract that can also be used for the ProQuest abstract. %% Note that ProQuest truncates their abstracts at 350 words. \begin{abstract} This is a comprehensive study of caffeine consumption by graduate students at the University of Illinois who are in the very final stages of completing their doctoral degrees. A study group of six hundred doctoral students\ldots. \end{abstract} %% Create a dedication in italics with no heading, centered vertically %% on the page. \begin{dedication} To Father and Mother. \end{dedication} %% Create an Acknowledgements page, many departments require you to %% include funding support in this. \chapter*{Acknowledgments} This project would not have been possible without the support of many people. Many thanks to my adviser, Lawrence T. Strongarm, who read my numerous revisions and helped make some sense of the confusion. Also thanks to my committee members, Reginald Bottoms, Karin Vegas, and Cindy Willy, who offered guidance and support. Thanks to the University of Illinois Graduate College for awarding me a Dissertation Completion Fellowship, providing me with the financial means to complete this project. And finally, thanks to my husband, parents, and numerous friends who endured this long process with me, always offering support and love. %% The thesis format requires the Table of Contents to come %% before any other major sections, all of these sections after %% the Table of Contents must be listed therein (i.e., use \chapter, %% not \chapter*). Common sections to have between the Table of %% Contents and the main text are: %% %% List of Tables %% List of Figures %% List Symbols and/or Abbreviations %% etc. \tableofcontents \listoftables \listoffigures %% Create a List of Abbreviations. The left column %% is 1 inch wide and left-justified \chapter{List of Abbreviations} \begin{symbollist*} \item[CA] Caffeine Addict. \item[CD] Coffee Drinker. \end{symbollist*} %% Create a List of Symbols. The left column %% is 0.7 inch wide and centered \chapter{List of Symbols} \begin{symbollist}[0.7in] \item[$\tau$] Time taken to drink one cup of coffee. \item[$\mu$g] Micrograms (of caffeine, generally). \end{symbollist} \mainmatter \chapter{This world} \section{Of the Nature of Flatland} I call our world Flatland, not because we call it so, but to make its nature clearer to you, my happy readers, who are privileged to live in Space. 
Imagine a vast sheet of paper on which straight Lines, Triangles, Squares, Pentagons, Hexagons, and other figures, instead of remaining fixed in their places, move freely about, on or in the surface, but without the power of rising above or sinking below it, very much like shadows--only hard with luminous edges--and you will then have a pretty correct notion of my country and countrymen. Alas, a few years ago, I should have said "my universe:" but now my mind has been opened to higher views of things. In such a country, you will perceive at once that it is impossible that there should be anything of what you call a "solid" kind; but I dare say you will suppose that we could at least distinguish by sight the Triangles, Squares, and other figures, moving about as I have described them. On the contrary, we could see nothing of the kind, not at least so as to distinguish one figure from another. Nothing was visible, nor could be visible, to us, except Straight Lines; and the necessity of this I will speedily demonstrate. Place a penny on the middle of one of your tables in Space; and leaning over it, look down upon it. It will appear a circle. But now, drawing back to the edge of the table, gradually lower your eye (thus bringing yourself more and more into the condition of the inhabitants of Flatland), and you will find the penny becoming more and more oval to your view, and at last when you have placed your eye exactly on the edge of the table (so that you are, as it were, actually a Flatlander) the penny will then have ceased to appear oval at all, and will have become, so far as you can see, a straight line. The same thing would happen if you were to treat in the same way a Triangle, or a Square, or any other figure cut out from pasteboard. As soon as you look at it with your eye on the edge of the table, you will find that it ceases to appear to you as a figure, and that it becomes in appearance a straight line. Take for example an equilateral Triangle--who represents with us a Tradesman of the respectable class. Figure 1 represents the Tradesman as you would see him while you were bending over him from above; figures 2 and 3 represent the Tradesman, as you would see him if your eye were close to the level, or all but on the level of the table; and if your eye were quite on the level of the table (and that is how we see him in Flatland) you would see nothing but a straight line. When I was in Spaceland I heard that your sailors have very similar experiences while they traverse your seas and discern some distant island or coast lying on the horizon. The far-off land may have bays, forelands, angles in and out to any number and extent; yet at a distance you see none of these (unless indeed your sun shines bright upon them revealing the projections and retirements by means of light and shade), nothing but a grey unbroken line upon the water. Well, that is just what we see when one of our triangular or other acquaintances comes towards us in Flatland. As there is neither sun with us, nor any light of such a kind as to make shadows, we have none of the helps to the sight that you have in Spaceland. If our friend comes closer to us we see his line becomes larger; if he leaves us it becomes smaller; but still he looks like a straight line; be he a Triangle, Square, Pentagon, Hexagon, Circle, what you will--a straight Line he looks and nothing else. 
You may perhaps ask how under these disadvantageous circumstances we are able to distinguish our friends from one another: but the answer to this very natural question will be more fitly and easily given when I come to describe the inhabitants of Flatland. For the present let me defer this subject, and say a word or two about the climate and houses in our country. How does this relate to coffee? We direct the reader to \cite{Trembly98}, \cite{Childish07}, and \cite{Presso10}. \include{1-introduction} \include{2-related} \include{3-model} \include{4-predictions} \chapter{Conclusions} We conclude that graduate students like coffee. \appendix* \include{Appendix} \backmatter \bibliographystyle{apalike} \bibliography{thesisbib} \end{document} \endinput %% %% End of file `thesis-ex.tex'.
{ "alphanum_fraction": 0.7748291114, "avg_line_length": 39.2684210526, "ext": "tex", "hexsha": "0ed4938e91ee50886a527aaea5de0fc4c51da203", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4f99e565df663252493021057ba8f4b419d1fd4e", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jbae11/master_thesis", "max_forks_repo_path": "thesis-ex.tex", "max_issues_count": 16, "max_issues_repo_head_hexsha": "4f99e565df663252493021057ba8f4b419d1fd4e", "max_issues_repo_issues_event_max_datetime": "2018-11-27T01:03:24.000Z", "max_issues_repo_issues_event_min_datetime": "2017-12-12T23:59:04.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jbae11/master_thesis", "max_issues_repo_path": "thesis-ex.tex", "max_line_length": 141, "max_stars_count": null, "max_stars_repo_head_hexsha": "4f99e565df663252493021057ba8f4b419d1fd4e", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jbae11/master_thesis", "max_stars_repo_path": "thesis-ex.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1858, "size": 7461 }
\documentclass[]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{hyperref} \hypersetup{unicode=true, pdftitle={Urban\_mobility}, pdfauthor={Ivana Kocanova}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if 
necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \providecommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{Urban\_mobility} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{Ivana Kocanova} \preauthor{\centering\large\emph} \postauthor{\par} \predate{\centering\large\emph} \postdate{\par} \date{23/08/2019} \begin{document} \maketitle To better understand urban life, this research aims to construct networks utilizing travel-flow data. Using the latest research on complex networks, we plan to uncover the inherent community structure within the city of Leeds. We believe these findings could be beneficial for urban planning, infrastructure maintenance and epidemic outbreak management. \hypertarget{exploring-origin-destionation-flows}{% \subsubsection{Exploring Origin-Destination flows}\label{exploring-origin-destionation-flows}} We begin by loading the origin-destination (OD) flows and linking them with spatial information loaded from shapefiles.
\begin{Shaded} \begin{Highlighting}[] \CommentTok{# Required packages} \KeywordTok{library}\NormalTok{(sf)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Linking to GEOS 3.6.1, GDAL 2.2.3, PROJ 4.9.3 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(stplanr)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Registered S3 method overwritten by 'R.oo': ## method from ## throw.default R.methodsS3 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(leaflet)} \KeywordTok{library}\NormalTok{(tmap)} \KeywordTok{library}\NormalTok{(dplyr)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Attaching package: 'dplyr' \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:stats': ## ## filter, lag \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:base': ## ## intersect, setdiff, setequal, union \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(igraph)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Attaching package: 'igraph' \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:dplyr': ## ## as_data_frame, groups, union \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:stats': ## ## decompose, spectrum \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:base': ## ## union \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(ggraph)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Loading required package: ggplot2 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{tmap_mode}\NormalTok{(}\StringTok{"plot"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## tmap mode set to plotting \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# Load the Origin-Destination (OD) flows} \NormalTok{flows <-}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"data}\CharTok{\textbackslash{}\textbackslash{}}\StringTok{2011_census_flows.csv"}\NormalTok{)} \CommentTok{# Load Ward names lookup dataset} \NormalTok{wards <-} \StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"data//Leeds_MSOA_Ward_LookUp.csv"}\NormalTok{)} \CommentTok{# Read in shapefile containing MSOA boundaries} \NormalTok{boundaries <-} \StringTok{ }\NormalTok{sf}\OperatorTok{::}\KeywordTok{read_sf}\NormalTok{(} \StringTok{"data//boundaries//Middle_Layer_Super_Output_Areas_December_2011_Super_Generalised_Clipped_Boundaries_in_England_and_Wales.shp"} \NormalTok{ )} \CommentTok{# Filter boundaries for Leeds } \NormalTok{leeds_boundaries <-}\StringTok{ }\NormalTok{boundaries }\OperatorTok{%>%} \StringTok{ }\KeywordTok{filter}\NormalTok{(stringr}\OperatorTok{::}\KeywordTok{str_detect}\NormalTok{(msoa11nm, }\StringTok{'Leeds'}\NormalTok{)) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{st_transform}\NormalTok{(boundaries, }\DataTypeTok{crs =} \DecValTok{27700}\NormalTok{)} \CommentTok{# Add ward names } \NormalTok{leeds_OD_matrix <-}\StringTok{ }\NormalTok{flows }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{left_join}\NormalTok{(wards }\OperatorTok{%>%}\StringTok{ }\KeywordTok{select}\NormalTok{(msoa, ward_name), }\DataTypeTok{by =} \KeywordTok{c}\NormalTok{(}\StringTok{"origin"}\NormalTok{ =}\StringTok{ "msoa"}\NormalTok{)) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{rename}\NormalTok{(}\StringTok{"origin_ward"}\NormalTok{ =}\StringTok{ "ward_name"}\NormalTok{) 
}\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{left_join}\NormalTok{(wards }\OperatorTok{%>%}\StringTok{ }\KeywordTok{select}\NormalTok{(msoa, ward_name), }\DataTypeTok{by =} \KeywordTok{c}\NormalTok{(}\StringTok{"destination"}\NormalTok{ =}\StringTok{ "msoa"}\NormalTok{)) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{rename}\NormalTok{(}\StringTok{"destination_ward"}\NormalTok{ =}\StringTok{ "ward_name"}\NormalTok{)} \CommentTok{# Add the geometry information for MSOA's} \NormalTok{leeds_OD_matrix <-}\StringTok{ }\NormalTok{leeds_OD_matrix }\OperatorTok{%>%} \StringTok{ }\KeywordTok{left_join}\NormalTok{(leeds_boundaries }\OperatorTok{%>%}\StringTok{ }\KeywordTok{select}\NormalTok{(msoa11cd, geometry),} \DataTypeTok{by =} \KeywordTok{c}\NormalTok{(}\StringTok{"origin"}\NormalTok{ =}\StringTok{ "msoa11cd"}\NormalTok{)) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{rename}\NormalTok{(}\StringTok{"geometry_origin"}\NormalTok{ =}\StringTok{ "geometry"}\NormalTok{) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{left_join}\NormalTok{(leeds_boundaries }\OperatorTok{%>%}\StringTok{ }\KeywordTok{select}\NormalTok{(msoa11cd, geometry),} \DataTypeTok{by =} \KeywordTok{c}\NormalTok{(}\StringTok{"destination"}\NormalTok{ =}\StringTok{ "msoa11cd"}\NormalTok{)) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{rename}\NormalTok{(}\StringTok{"geometry_destination"}\NormalTok{ =}\StringTok{ "geometry"}\NormalTok{)} \KeywordTok{rm}\NormalTok{(leeds_boundaries)} \end{Highlighting} \end{Shaded} \hypertarget{which-areas-are-losinggaining-population-throughout-the-day}{% \subsubsection{Which areas are losing/gaining population throughout the day}\label{which-areas-are-losinggaining-population-throughout-the-day}} The following section examines which areas of Leeds experience population inflows or outflows. Having such insight can be valuable for infrastructure planners or city councils to effectively distribute resources. 
\begin{Shaded} \begin{Highlighting}[] \CommentTok{# Table of the most frequent origin wards} \NormalTok{o_table <-}\StringTok{ }\NormalTok{leeds_OD_matrix }\OperatorTok{%>%} \StringTok{ }\KeywordTok{group_by}\NormalTok{(origin_ward) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{o_count =} \KeywordTok{n}\NormalTok{()) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{arrange}\NormalTok{(}\KeywordTok{desc}\NormalTok{(o_count))} \CommentTok{# Table of the most frequent destination wards} \NormalTok{d_table <-}\StringTok{ }\NormalTok{leeds_OD_matrix }\OperatorTok{%>%} \StringTok{ }\KeywordTok{group_by}\NormalTok{(destination_ward) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{d_count =} \KeywordTok{n}\NormalTok{()) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{arrange}\NormalTok{(}\KeywordTok{desc}\NormalTok{(d_count))} \CommentTok{# Create a table which deducts the count of origins from destinations } \NormalTok{final_table <-}\StringTok{ }\NormalTok{o_table }\OperatorTok{%>%}\StringTok{ }\KeywordTok{left_join}\NormalTok{(d_table, }\DataTypeTok{by =} \KeywordTok{c}\NormalTok{(}\StringTok{"origin_ward"}\NormalTok{ =}\StringTok{ "destination_ward"}\NormalTok{))} \NormalTok{final_table}\OperatorTok{$}\NormalTok{inflows_count <-}\StringTok{ }\NormalTok{final_table}\OperatorTok{$}\NormalTok{d_count }\OperatorTok{-}\StringTok{ }\NormalTok{final_table}\OperatorTok{$}\NormalTok{o_count } \CommentTok{# Display the table of inflows } \NormalTok{final_table }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{select}\NormalTok{(origin_ward,inflows_count) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{arrange}\NormalTok{(}\KeywordTok{desc}\NormalTok{(inflows_count))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## # A tibble: 33 x 2 ## origin_ward inflows_count ## <fct> <int> ## 1 Burmantofts and Richmond Hill 41 ## 2 City and Hunslet 37 ## 3 Morley South 35 ## 4 Beeston and Holbeck 28 ## 5 Hyde Park and Woodhouse 25 ## 6 Morley North 20 ## 7 Armley 19 ## 8 Calverley and Farsley 16 ## 9 Killingbeck and Seacroft 15 ## 10 Farnley and Wortley 14 ## # ... with 23 more rows \end{verbatim} The table above shows that the Burmantofts and Richmond Hill had 41 more recorded incoming flows than outflows. The areas with the highest outflows were Wetherby, Kippax and Methley, and Ardsley and Robin Hood. 
\begin{Shaded} \begin{Highlighting}[] \CommentTok{# Join inflows dataframe with wards } \NormalTok{wards <-}\StringTok{ }\NormalTok{wards }\OperatorTok{%>%}\StringTok{ }\KeywordTok{left_join}\NormalTok{(final_table, }\DataTypeTok{by =} \KeywordTok{c}\NormalTok{ (}\StringTok{"ward_name"}\NormalTok{ =}\StringTok{ "origin_ward"}\NormalTok{))} \CommentTok{# Add spatial information about the wards} \NormalTok{wards <-}\StringTok{ }\NormalTok{wards }\OperatorTok{%>%}\StringTok{ }\KeywordTok{left_join}\NormalTok{(boundaries }\OperatorTok{%>%}\StringTok{ }\KeywordTok{select}\NormalTok{(msoa11cd, geometry), }\DataTypeTok{by =} \KeywordTok{c}\NormalTok{(}\StringTok{"msoa"}\NormalTok{ =}\StringTok{ "msoa11cd"}\NormalTok{))} \NormalTok{wards <-}\StringTok{ }\NormalTok{wards }\OperatorTok{%>%}\StringTok{ }\KeywordTok{st_sf}\NormalTok{(}\DataTypeTok{sf_column_name =} \StringTok{"geometry"}\NormalTok{)} \CommentTok{# Map of the inflows counts} \KeywordTok{tm_shape}\NormalTok{(wards) }\OperatorTok{+} \StringTok{ }\KeywordTok{tm_fill}\NormalTok{(}\DataTypeTok{col =} \StringTok{"inflows_count"}\NormalTok{, }\DataTypeTok{style =} \StringTok{"jenks"}\NormalTok{, }\DataTypeTok{midpoint =} \OtherTok{NA}\NormalTok{, }\DataTypeTok{alpha =} \FloatTok{0.7}\NormalTok{, }\DataTypeTok{title =} \StringTok{"Inflows count"}\NormalTok{)}\OperatorTok{+} \StringTok{ }\KeywordTok{tm_borders}\NormalTok{() }\OperatorTok{+}\StringTok{ } \StringTok{ }\KeywordTok{tm_basemap}\NormalTok{(}\DataTypeTok{server =} \StringTok{"OpenStreetMap.BlackAndWhite"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{Markdown_analysis_files/figure-latex/unnamed-chunk-3-1.pdf} The code above turns the previously obtained table into a heatmap highlighting how different areas of Leeds are affected by urban mobility. It can be observed that most inflows are concentrated around the centre, while the highest outflows are in Wetherby and the south-eastern parts of Leeds. \hypertarget{estimating-travel-routes-from-od-flows}{% \subsubsection{Estimating travel routes from OD flows}\label{estimating-travel-routes-from-od-flows}} Knowing where a journey began and ended, we can estimate the most likely route taken. For example, a journey from Wetherby to Otley depicted as a straight line can be transformed into a traffic route as seen in the picture below. \begin{figure} \includegraphics[width=1\linewidth]{maps/Example_line2route} \caption{Estimation of the route taken between Wetherby and Otley}\label{fig:pressure} \end{figure} Having calculated the routes with OSRM, the route network can then be visualized to show where the routes overlap each other. A darker colour shows a higher frequency of individual movements and indicates places of possible traffic congestion. \end{document}
{ "alphanum_fraction": 0.7288266366, "avg_line_length": 44.3394736842, "ext": "tex", "hexsha": "1a39a809313c09141d7a175bb7dcdb4cf1f83cc6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "34f3bd20de7a69739b742bd8425533d8fe7dbb63", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "IvanaKocanova/Community_detection_with_Complex_Networks", "max_forks_repo_path": "Markdown_analysis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "34f3bd20de7a69739b742bd8425533d8fe7dbb63", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "IvanaKocanova/Community_detection_with_Complex_Networks", "max_issues_repo_path": "Markdown_analysis.tex", "max_line_length": 342, "max_stars_count": null, "max_stars_repo_head_hexsha": "34f3bd20de7a69739b742bd8425533d8fe7dbb63", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "IvanaKocanova/Community_detection_with_Complex_Networks", "max_stars_repo_path": "Markdown_analysis.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5306, "size": 16849 }
\documentclass[11pt,letterpaper]{article} \usepackage[pdftex]{graphicx} \usepackage{natbib} \usepackage{fullpage} \usepackage{lineno} \usepackage{multirow} \usepackage{wrapfig} \usepackage{amsmath} \usepackage{amssymb} \usepackage{sidecap} \usepackage{hyperref} \begin{document} \setlength{\parindent}{0mm} \setlength{\parskip}{0.4cm} \bibliographystyle{apalike} %\modulolinenumbers[5] %\linenumbers \title{ARCS scenario 1 data inversions} \maketitle \tableofcontents \pagebreak This document describes the mathematical formulation of the ARCS scenario 1 data inversion problem. \section{Summary} Measurements made during ARCS scenario 1 crossings will result in the production of maps of magnetic field fluctuations $\delta \mathbf{B}$ and flows $\mathbf{v}$, as described in the various instrument and data processing sections. These will be converted, as part of standard data processing, into parallel current density and electric field: \begin{equation} \mathbf{J} = \nabla \times \left( \frac{\delta \mathbf{B}}{\mu_0} \right); \qquad \mathbf{E} = -\mathbf{v} \times \mathbf{B} \end{equation} Because this is a swarm of satellites, we will have \emph{datamaps} (2D images) of these parameters; additionally, we note that the measurements may be used directly to produce datamaps of Poynting flux: \begin{equation} \mathbf{S} = \mathbf{E} \times \frac{\delta \mathbf{B}}{\mu_0} \end{equation} In the case of scenario 1 data, the key unknown physical parameters are datamaps of ionospheric Pedersen and Hall conductances; without these we do not have a full picture of electrodynamics and energy flow in the auroral system and cannot, then, fully unlock the scientific potential of the large amount of scenario 1 data. Scenario 1 current density and Poynting flux datamaps are related to each other through electromagnetic conservation laws: current continuity and the Poynting theorem. Under assumptions of steady-state conditions and equipotential field lines (appropriate at scales relevant to our science questions), these can be reduced to: \begin{eqnarray} J_\parallel &=& -\Sigma_P \nabla \cdot \mathbf{E}_\perp - \nabla \Sigma_P \cdot \mathbf{E}_\perp + \nabla \Sigma_H \cdot \left( \mathbf{E}_\perp \times \hat{\mathbf{b}} \right) \\ S_{\parallel} &=& - \Sigma_P E^2 \end{eqnarray} with unknown Pedersen and Hall conductances. These conductances may, in principle, be solved for by simply inverting this system of equations. In practice there are many different methods that could be used to perform this inversion. At first glance the system appears even-determined, since we have two unknown fields (conductances) and two input datamaps (the conserved variables current density and Poynting flux). It is worth noting, however, that the component of the Hall conductance gradient along the electric field direction is explicitly not part of this equation and so is in the null space of the problem. Thus, in addition to the conservation laws, additional prior information is needed. For ARCS data processing this will come in two forms (both of which may not always be necessary, depending on noise conditions): (1) regularization that places constraints on the solution norm or smoothness (i.e. Tikhonov regularization), and/or (2) inclusion of model information that further correlates the Pedersen conductance (which is well constrained by the laws above) with the Hall conductance (which requires further constraints). This could come in the form of model-based inversions (e.g. using GEMINI-GLOW) or simply parameterizations based, e.g.
on the Robinson formulas or updated versions of these formulas. \section{Physical constraints: simplification of general conservation laws} Conservation of charge in an electromagnetic system is described by the current continuity equation: \begin{equation} \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0 \end{equation} In a steady state this reduces to: \begin{equation} \nabla \cdot \mathbf{J} = \frac{\partial J_\parallel}{\partial z} + \nabla_\perp \cdot \mathbf{J}_\perp = 0, \end{equation} where the $z$-direction represents altitude in a locally Cartesian coordinate system. Integrating with respect to altitude: \begin{equation} \int \frac{\partial J_\parallel}{\partial z} dz + \int \nabla_\perp \cdot \mathbf{J}_\perp dz = J_\parallel(\max(z)) - J_\parallel(\min(z)) + \nabla_\perp \cdot \left( \Sigma \cdot \mathbf{E}_\perp \right) = 0 \end{equation} This can be expanded out and solved for the parallel current at the top of the domain, if the bottom current is assumed to be zero: \begin{equation} J_\parallel = - \nabla_\perp \cdot \left( \Sigma \cdot \mathbf{E}_\perp \right) = - \Sigma_P \nabla \cdot \mathbf{E}_\perp - \nabla \Sigma_P \cdot \mathbf{E}_\perp + \nabla \Sigma_H \cdot \left( \mathbf{E}_\perp \times \hat{\mathbf{b}} \right) \end{equation} The current continuity equation to be used in the ARCS analysis is then: \begin{equation} \boxed{ J_\parallel = -\Sigma_P \nabla \cdot \mathbf{E}_\perp - \nabla \Sigma_P \cdot \mathbf{E}_\perp + \nabla \Sigma_H \cdot \left( \mathbf{E}_\perp \times \hat{\mathbf{b}} \right) } \label{eqn:continuity} \end{equation} Note that this equation effectively has two unknown fields $\Sigma_P,\Sigma_H$, but represents only one physical constraint; hence additional information is needed. This is provided by conservation of electromagnetic energy, viz. the Poynting theorem: \begin{equation} \frac{\partial w}{\partial t} + \nabla \cdot \mathbf{S} = - \mathbf{J} \cdot \mathbf{E} \end{equation} Similar to the assumptions made to produce Equation \ref{eqn:continuity}, we neglect time-dependent terms and proceed to integrate the equation along a geomagnetic field line: \begin{equation} S_{\parallel,top} - S_{\parallel,bottom} + \nabla_\perp \cdot \mathbf{\mathcal{S}}_\perp = - \Sigma_P E^2 \end{equation} where $\mathbf{\mathcal{S}}_\perp$ is the column-integrated perpendicular Poynting flux. If we further assume that there is no Poynting flux through the bottom of the ionosphere or the lateral sides of our volume of interest (i.e. net incoming D.C. Poynting flux is dissipated), we obtain a simple relation between parallel Poynting flux and Pedersen conductance. \begin{equation} \boxed{ S_{\parallel} = - \Sigma_P E^2 } \label{eqn:poynting} \end{equation} \section{Estimating conductances} Several different procedures can be developed for converting the maps of electric field and Poynting flux into conductances. Two approaches are discussed here. Equation \ref{eqn:poynting} fully specifies the Pedersen conductance given quantities that are measurable by scenario 1 experiments, so the most obvious path would be to then provide the Pedersen conductance to Equation \ref{eqn:continuity}.
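As a concrete illustration of this first step, the pointwise inversion of Equation \ref{eqn:poynting} for the Pedersen conductance could look as follows (a minimal sketch, assuming the Poynting flux and electric field datamaps are available as NumPy arrays; the function and variable names are illustrative and not part of any ARCS pipeline):
\begin{verbatim}
import numpy as np

def pedersen_from_poynting(S_par, Ex, Ey, eps=1e-12):
    """Invert S_par = -Sigma_P * E^2 pointwise on the datamaps."""
    E2 = Ex**2 + Ey**2
    return -S_par / np.maximum(E2, eps)  # guard against near-zero electric fields
\end{verbatim}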
Superficially, the equation allows solution for the gradient of the Hall conductance, and in principle one would need to compute a line integral of this quantity to solve for the Hall conductance: \begin{equation} \Sigma_H(\mathbf{r}_2)-\Sigma_H(\mathbf{r}_1) = \int_{\mathbf{r}_1}^{\mathbf{r}_2} \nabla \Sigma_H \cdot d \mathbf{r} \end{equation} Moreover, one would also need the value of the Hall conductance at some reference point $\mathbf{r}_1$ to complete the solution for Hall conductance. While it may be possible to choose a point with low density and assume zero Hall conductance at that reference point, there is a more serious issue with this approach and with the set of physical constraints being used, more generally. Equation \ref{eqn:continuity} only provides constraints on the derivative of the Hall conductance \emph{in the direction of the $\mathbf{E} \times \mathbf{B}$ drift}. Thus, there is information about the Hall conductance (namely the variation in the direction of the electric field) that is completely unconstrained by current continuity. As a result, the Hall conductance lies partly in the null space of the problem defined by Equations \ref{eqn:continuity} and \ref{eqn:poynting} and some additional assumptions/information/regularization will be required to solve the inverse problem. Another approach to the inverse problem would be to view the conservation laws as constraints to be combined together with other prior information in the form of, e.g., smoothness constraints. Here we rewrite the physical constraints in a matrix form to facilitate application of results from linear inverse theory. Field quantities can be ``flattened'' into vectors using column major ordering and then operators can be represented through matrix operations. The latter step can be understood as a decomposition of the derivative operations into finite difference matrices: \begin{equation} \underline{j} = - \underline{\underline{I}} ~ \underline{p} \left( \nabla \cdot \mathbf{E}_\perp \right) - \underline{\underline{L}}_x \underline{p} E_x - \underline{\underline{L}}_y \underline{p} E_y + \underline{\underline{L}}_{E \times B} \underline{h} E_\perp \end{equation} \begin{equation} \underline{s} = - E^2 \underline{\underline{I}} ~ \underline{p} \end{equation} Concatenating the unknown conductances into a single vector, we get: \begin{equation} \underline{x} \equiv \left[ \begin{array}{c} \underline{p} \\ \underline{h} \end{array} \right] \end{equation} The left-hand sides of the conservation laws (i.e. the measurements) are similarly stacked: \begin{equation} \underline{b} \equiv \left[ \begin{array}{c} \underline{j} \\ \underline{s} \end{array} \right] \end{equation} Finally, the right-hand side operations may be expressed in block matrix form: \begin{equation} \underline{\underline{A}} \equiv \left[ \begin{array}{cc} -\underline{\underline{I}} \left( \nabla \cdot \mathbf{E}_\perp \right) - \underline{\underline{L}}_x E_x - \underline{\underline{L}}_y E_y & ~ \underline{\underline{L}}_{E \times B} E_\perp \\ -E^2 \underline{\underline{I}} & \underline{\underline{0}} \end{array} \right] \end{equation} Yielding our full set of constraints as: \begin{equation} \underline{\underline{A}} ~ \underline{x} = \underline{b} \end{equation} As discussed previously this system will not be full rank, but serves as a starting point for a suitable generalized inverse for this problem.
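To make the matrix formulation concrete, the following sketch assembles the flattened system with simple centered finite differences and computes a minimum-norm least-squares solution (one choice of generalized inverse). It is illustrative only: a row-major flattening is used instead of column-major for brevity, the $\underline{\underline{L}}_{E \times B}$ block assumes $\hat{\mathbf{b}}$ points along $+z$, and dense matrices are used where a real implementation would use sparse operators.
\begin{verbatim}
import numpy as np

def d1(n, h=1.0):
    """Centered first-difference matrix, one-sided at the boundaries."""
    D = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h)
    D[0, :2] = np.array([-1.0, 1.0]) / h
    D[-1, -2:] = np.array([-1.0, 1.0]) / h
    return D

def assemble_system(Ex, Ey, J_par, S_par, hx=1.0, hy=1.0):
    """Build A x = b for x = [p; h] from N x M datamaps."""
    N, M = Ex.shape
    Dx = np.kron(d1(N, hx), np.eye(M))   # d/dx on row-major flattened maps
    Dy = np.kron(np.eye(N), d1(M, hy))   # d/dy on row-major flattened maps
    ex, ey = Ex.ravel(), Ey.ravel()
    div_E = Dx @ ex + Dy @ ey
    # Continuity row: j = -p div(E) - (grad p).E_perp + (grad h).(E_perp x b_hat)
    A_pp = -(np.diag(div_E) + np.diag(ex) @ Dx + np.diag(ey) @ Dy)
    A_ph = np.diag(ey) @ Dx - np.diag(ex) @ Dy
    # Poynting row: s = -E^2 p
    A_sp = -np.diag(ex**2 + ey**2)
    A = np.block([[A_pp, A_ph], [A_sp, np.zeros((N * M, N * M))]])
    b = np.concatenate([J_par.ravel(), S_par.ravel()])
    return A, b

# Minimum-norm least-squares solution of the rank-deficient system:
# A, b = assemble_system(Ex, Ey, J_par, S_par)
# x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
# p_hat, h_hat = np.split(x_hat, 2)
\end{verbatim}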
As a final note, the full system has size $2 \cdot N \cdot M \times 2 \cdot N \cdot M$, where $N,M$ are the $x,y$ sizes of the datamaps provided by instrument teams. \section{Maximum likelihood estimators} The maximum likelihood estimator, assuming Gaussian-distributed noise, is (note we drop the underline notation here for brevity): \begin{equation} \hat{x}_{ML} = \left( A^T A \right)^{-1} A^T b \end{equation} The matrix to be inverted here is singular for reasons noted previously; we adopt a Tikhonov regularization scheme to mitigate this: \begin{equation} \hat{x} = \left( A^T A + \lambda I \right)^{-1} A^T b \end{equation} where $\lambda$ is a regularization parameter. This approach penalizes the norm of the solution and coerces the estimate toward small norms. One could also enforce any other conditions that can be expressed as a linear operation, yielding: \begin{equation} \hat{x} = \left( A^T A + \lambda \Gamma^T \Gamma \right)^{-1} A^T b \end{equation} where $\Gamma$ is an operator describing smoothness (e.g. the Laplacian) or variation (the gradient). We find that the Laplacian works well to keep the reconstructions as smooth as possible. We can add in an offset term, i.e. solve a problem of the form: \begin{equation} \hat{x} = \arg\min_x \left\{ || Ax -b ||^2 + \lambda || \Gamma x||^2 + \mu || x - x_0 ||^2 \right\} \end{equation} This solves the least squares problem subject to constraints on smoothness and on proximity to an expected value $x_0$ for the solution, with $\mu$ weighting the offset term. The solution is then given by: \begin{equation} \hat{x} = \left( A^T A + \lambda \Gamma^T \Gamma + \mu I \right)^{-1} \left( A^T b + \mu x_0\right) \end{equation} In the case of the problem of estimating the ionospheric conductances, the Hall conductance could be constrained not to vary too far from the Pedersen conductance. Lastly, it may be advantageous to recast the current continuity in terms of the ratio of Hall to Pedersen conductance. This retains the linearity of the problem only if the Pedersen conductance is known \emph{a priori}. \begin{equation} \nabla \left( \frac{\Sigma_H}{\Sigma_P} \right) \cdot \mathbf{E} \times \hat{\mathbf{b}} + \left( \frac{\Sigma_H}{\Sigma_P} \right) \frac{\nabla \Sigma_P}{\Sigma_P} \cdot \mathbf{E} \times \hat{\mathbf{b}} = \frac{J_\parallel}{\Sigma_P} + \nabla_\perp \cdot \mathbf{E}_\perp + \frac{\nabla \Sigma_P}{\Sigma_P} \cdot \mathbf{E}_\perp \end{equation} This problem can be expressed in matrix form, similar to the approaches described above, and also solved via regularized inverses. Doing so in a linear fashion does require one to first solve for the Pedersen conductance using the Poynting theorem. Such a formulation has the benefit that one can regularize deviations from a set conductance ratio. If a ratio of 1 is used, this is equivalent to assuming an average energy of 2.5 keV for the precipitating particles. \section{Error Covariance} \section{Connections to precipitating electrons} Conductances are ultimately driven by electron precipitation, here encapsulated in terms of total energy flux $Q$ and average energy $E_{av}$; these precipitation parameters are what is ultimately needed to drive the GEMINI simulations. One of the simplest parameterizations of conductance is given by the Robinson formulas: \begin{equation} \Sigma_P = \frac{40 E_{av}}{16+E_{av}^2} \sqrt{Q} \end{equation} \begin{equation} \frac{\Sigma_H}{\Sigma_P} = 0.45 E_{av}^{0.85} \end{equation} Using these to constrain conductances creates a physical correlation between them that does not exist when using just the constraints from the conservation laws. \end{document}
{ "alphanum_fraction": 0.7625888841, "avg_line_length": 74.4972972973, "ext": "tex", "hexsha": "ea81f136ff4457d9ae2ab8503f74f5eb6a49e92e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c9cdb2a5f183551dd8317ae7ba361ffe5fb1d909", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "mattzett/arcs_scen1", "max_forks_repo_path": "docs/inverse_formulation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c9cdb2a5f183551dd8317ae7ba361ffe5fb1d909", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "mattzett/arcs_scen1", "max_issues_repo_path": "docs/inverse_formulation.tex", "max_line_length": 1315, "max_stars_count": null, "max_stars_repo_head_hexsha": "c9cdb2a5f183551dd8317ae7ba361ffe5fb1d909", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "mattzett/arcs_scen1", "max_stars_repo_path": "docs/inverse_formulation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3825, "size": 13782 }
\section*{Appendix 1 -- Manual bibliography} \addcontentsline{toc}{section}{Appendix 1 -- Manual bibliography} Appendices are purely optional. All appendices must be referred to in the body text. You can append to your thesis, for example, lengthy mathematical derivations, an important algorithm in a programming language, input and output listings, an extract of a standard relating to your thesis, a user manual, empirical knowledge produced while preparing the thesis, the results of a survey, lists, pictures, drawings, maps, complex charts (conceptual schema, circuit diagrams, structure charts), etc. List of references \begin{enumerate} \item Articleauthor,~A. (year) Article title in regular font. \textit{Journal title in italic.} \textbf{Volume in bold}, pages. \item Bookauthor,~B., Anotherauthor,~A. \& Yetanotherauthor,~Y. (year) \textit{Book title in italic.} Publishing House. \item Bookauthor2,~B. \& Bookauthor,~B. (year) \textit{Another book (in italic).} Another Publishing House. \item Internetauthor,~I. Title in regular font. \textit{Available:} \url{http://...} (\textit{last visited:} date) \item Hogg,~R. \& Klugman,~S. (1984) \textit{Loss distributions.} Wiley, New York. \item Pigeon,~M. \& Denuit,~M. (2010) Composite lognormal-Pareto model with random threshold. \textit{Scandinavian Actuarial Journal}, \textbf{10}, 49--64. \item R Core Team. Documentation for package 'stats'. \textit{Available:} \url{http://stat.ethz.ch/R-manual/R-patched/library/stats/html/00Index.html} (\textit{last visited:} 12.11.2014) \end{enumerate}
{ "alphanum_fraction": 0.7570512821, "avg_line_length": 82.1052631579, "ext": "tex", "hexsha": "9f33e93f25ed311f04030ecd073629245e56176d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "09ea8ee128f7ea9c22949cd9cdf2ae4cf67ede78", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "klumiste/UTMatStat-thesis-template", "max_forks_repo_path": "10_1_appendix1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "09ea8ee128f7ea9c22949cd9cdf2ae4cf67ede78", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "klumiste/UTMatStat-thesis-template", "max_issues_repo_path": "10_1_appendix1.tex", "max_line_length": 409, "max_stars_count": null, "max_stars_repo_head_hexsha": "09ea8ee128f7ea9c22949cd9cdf2ae4cf67ede78", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "klumiste/UTMatStat-thesis-template", "max_stars_repo_path": "10_1_appendix1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 434, "size": 1560 }
\subsection{Introduction} The {\tt{}subnetize} file includes Lua code related to turning an ordinary function into a subnet. Internally, the code segment \begin{verbatim} subnet foo(A, material, l) -- stuff -- end \end{verbatim} is equivalent to \begin{verbatim} function foo(args) local A = args[1] or args.A or (args.material and args.material.A) local material = args[2] or args.material or (args.material and args.material.material) local l = args[3] or args.l or (args.material and args.material.l) end foo = subnetize(foo) \end{verbatim} Thus, almost all the work of making a subnet be a subnet goes into the {\tt{}subnetize} function. This module defines the standard version of {\tt{}subnetize}, which maintains a heirarchical namespace for nodes declared in subnets and maintains the stack of nested local coordinate systems defined in the {\tt{}xformstack} module. \subsection{Implementation} \nwfilename{subnetize.nw}\nwbegincode{1}\sublabel{NWsubC-subD-1}\nwmargintag{{\nwtagstyle{}\subpageref{NWsubC-subD-1}}}\moddef{subnetize.lua~{\nwtagstyle{}\subpageref{NWsubC-subD-1}}}\endmoddef use("xformstack.lua") \LA{}initialize data structures~{\nwtagstyle{}\subpageref{NWsubC-iniQ-1}}\RA{} \LA{}functions~{\nwtagstyle{}\subpageref{NWsubC-fun9-1}}\RA{} \nwnotused{subnetize.lua}\nwendcode{}\nwbegindocs{2}\nwdocspar The {\tt{}namestack} is a stack of prefixes for names declared in the current scope. For the global scope, the stack entry is an empty string (when you say ``foo'', you mean ``foo''). Inside of a subnet instance ``bar,'' though, when you make a node named ``foo'' you actually get ``bar.foo.'' Therefore, the prefix ``bar.'' is stored on the stack while processing the subnet instance. The {\tt{}localnames} is a table of names for local nodes (nodes in the current scope). \nwenddocs{}\nwbegincode{3}\sublabel{NWsubC-iniQ-1}\nwmargintag{{\nwtagstyle{}\subpageref{NWsubC-iniQ-1}}}\moddef{initialize data structures~{\nwtagstyle{}\subpageref{NWsubC-iniQ-1}}}\endmoddef -- Stack of node name prefixes _namestack = \{""; n = 1, nanon = 0\}; -- Table of node names in the current scope _localnames = \{\}; \nwused{\\{NWsubC-subD-1}}\nwendcode{}\nwbegindocs{4}\nwdocspar The {\tt{}subnetize} function transforms a function that creates a substructure into a function that creates a substructure in a nested coordinate frame. \nwenddocs{}\nwbegincode{5}\sublabel{NWsubC-fun9-1}\nwmargintag{{\nwtagstyle{}\subpageref{NWsubC-fun9-1}}}\moddef{functions~{\nwtagstyle{}\subpageref{NWsubC-fun9-1}}}\endmoddef function subnetize(f) return function(p) \LA{}add name of current subnet instance to stack~{\nwtagstyle{}\subpageref{NWsubC-addi-1}}\RA{} local old_localnames = _localnames _localnames = \{\}; \LA{}form $T$ from \code{}ox\edoc{}, \code{}oy\edoc{}, and \code{}oz\edoc{}~{\nwtagstyle{}\subpageref{NWsubC-fore-1}}\RA{} xform_push(T) %f(p) xform_pop() _localnames = old_localnames _namestack.n = _namestack.n - 1 end end \nwalsodefined{\\{NWsubC-fun9-2}}\nwused{\\{NWsubC-subD-1}}\nwendcode{}\nwbegindocs{6}\nwdocspar When we add a new subnet instance, we store its extended name on the stack. If it has no name, it is assigned a name beginning with the tag ``anon.'' \nwenddocs{}\nwbegincode{7}\sublabel{NWsubC-addi-1}\nwmargintag{{\nwtagstyle{}\subpageref{NWsubC-addi-1}}}\moddef{add name of current subnet instance to stack~{\nwtagstyle{}\subpageref{NWsubC-addi-1}}}\endmoddef local name = p.name if not name then name = "anon" .. 
_namestack.nanon _namestack.nanon = _namestack.nanon + 1 end _namestack[_namestack.n + 1] = _namestack[_namestack.n] .. name .. "." _namestack.n = _namestack.n + 1 if p.name then p.name = _namestack[_namestack.n-1] .. name end \nwused{\\{NWsubC-fun9-1}}\nwendcode{}\nwbegindocs{8}\nwdocspar When setting up coordinate transformations, we rotate about the $y$ axis, then $z$, then $x$. I am unsure why that convention was chosen, but it is the same convention used in SUGAR 2.0. \nwenddocs{}\nwbegincode{9}\sublabel{NWsubC-fore-1}\nwmargintag{{\nwtagstyle{}\subpageref{NWsubC-fore-1}}}\moddef{form $T$ from \code{}ox\edoc{}, \code{}oy\edoc{}, and \code{}oz\edoc{}~{\nwtagstyle{}\subpageref{NWsubC-fore-1}}}\endmoddef local T = xform_identity() if p.oy then T = xform_compose(T, xform_oy(p.oy)) p.oy = nil end if p.oz then T = xform_compose(T, xform_oz(p.oz)) p.oz = nil end if p.ox then T = xform_compose(T, xform_ox(p.ox)) p.ox = nil end \nwused{\\{NWsubC-fun9-1}}\nwendcode{}\nwbegindocs{10}\nwdocspar \nwenddocs{}\nwbegincode{11}\sublabel{NWsubC-fun9-2}\nwmargintag{{\nwtagstyle{}\subpageref{NWsubC-fun9-2}}}\moddef{functions~{\nwtagstyle{}\subpageref{NWsubC-fun9-1}}}\plusendmoddef -- Create a new node. If something with the same name -- already exists in the current scope, use that. -- function node(args) if type(args) == "string" then args = \{name = args\} end local name = args.name if not name then name = "anon" .. _namestack.nanon _namestack.nanon = _namestack.nanon + 1 end if not _localnames[name] then args.name = _namestack[_namestack.n] .. name _localnames[name] = %node(args) end return _localnames[name] end \nwendcode{} \nwixlogsorted{c}{{add name of current subnet instance to stack}{NWsubC-addi-1}{\nwixu{NWsubC-fun9-1}\nwixd{NWsubC-addi-1}}}% \nwixlogsorted{c}{{form $T$ from \code{}ox\edoc{}, \code{}oy\edoc{}, and \code{}oz\edoc{}}{NWsubC-fore-1}{\nwixu{NWsubC-fun9-1}\nwixd{NWsubC-fore-1}}}% \nwixlogsorted{c}{{functions}{NWsubC-fun9-1}{\nwixu{NWsubC-subD-1}\nwixd{NWsubC-fun9-1}\nwixd{NWsubC-fun9-2}}}% \nwixlogsorted{c}{{initialize data structures}{NWsubC-iniQ-1}{\nwixu{NWsubC-subD-1}\nwixd{NWsubC-iniQ-1}}}% \nwixlogsorted{c}{{subnetize.lua}{NWsubC-subD-1}{\nwixd{NWsubC-subD-1}}}% \nwbegindocs{12}\nwdocspar \nwenddocs{}
{ "alphanum_fraction": 0.7141168524, "avg_line_length": 37.2452830189, "ext": "tex", "hexsha": "528a6665e437815890cdf25177755fba34e62c9c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "davidgarmire/sugar", "max_forks_repo_path": "sugar30/src/tex/subnetize.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "davidgarmire/sugar", "max_issues_repo_path": "sugar30/src/tex/subnetize.tex", "max_line_length": 237, "max_stars_count": null, "max_stars_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "davidgarmire/sugar", "max_stars_repo_path": "sugar30/src/tex/subnetize.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2036, "size": 5922 }
\chapter{Recovering from errors} \section{Recovering from errors} \label{sec:recovering-from-errors} \sysname{} offers extensive support for recovering from many syntax errors, continuing to read from the input stream and return a result that somewhat resembles what would have been returned in case the syntax had been valid. To this end, a restart named \texttt{eclector.reader:recover} is established when recoverable errors are signaled. Like the standard \commonlisp{} restart \texttt{cl:continue}, this restart can be invoked by a function of the same name: \Defun {recover} {\optional condition} This function recovers from an error by invoking the most recently established applicable restart named \texttt{eclector.reader:recover}. If no such restart is currently established, it returns \texttt{nil}. If \textit{condition} is non-\texttt{nil}, only restarts that are either explicitly associated with \textit{condition}, or not associated with any condition are considered. When a \texttt{read} call during which error recovery has been performed returns, \sysname{} tries to return an object that is similar in terms of type, numeric value, sequence length, etc. to what would have been returned in case the input had been well-formed. For example, recovering after encountering the invalid digit in \texttt{\#b11311} returns either the number \texttt{\#b11011} or the number \texttt{\#b11111}. \section{Recoverable errors} \label{sec:recoverable-errors} A syntax error and a corresponding recovery strategy are characterized by the type of the signaled condition and the report of the established \texttt{eclector.reader:recover} restart respectively. Attempting to list and describe all examples of both would provide little insight. Instead, this section describes different classes of errors and corresponding recovery strategies in broad terms: \newcommand{\RecoverExample}[2]{\texttt{#1} $\rightarrow$ \texttt{#2}} \begin{itemize} \item Replace a missing numeric macro parameter or ignore an invalid numeric macro parameter. Examples: \RecoverExample{\#=1}{1}, \RecoverExample{\#5P"."}{\#P"."} \item Add a missing closing delimiter. Examples: \RecoverExample{"foo}{"foo"}, \RecoverExample{(1 2}{(1 2)}, \RecoverExample{\#(1 2}{\#(1 2)}, \RecoverExample{\#C(1 2}{\#C(1 2)} \item Replace an invalid digit or an invalid number with a valid one. This includes digits which are invalid for a given base but also things like $0$ denominator. Examples: \RecoverExample{\#12rc}{1}, \RecoverExample{1/0}{1}, \RecoverExample{\#C(1 :foo)}{\#C(1 1)} \item Replace an invalid character with a valid one. Example: \RecoverExample{\#\textbackslash{}foo}{\#\textbackslash{}?} \item Invalid constructs can sometimes be ignored. Examples: \RecoverExample{(,1)}{(1)}, \RecoverExample{\#S(foo :bar 1 2 3)}{\#S(foo :bar 1)} \item Excess parts can often be ignored. Examples: \RecoverExample{\#C(1 2 3)}{\#C(1 2)}, \RecoverExample{\#2(1 2 3)}{\#2(1 2)} \item Replace an entire construct by some fallback value. Example: \RecoverExample{\#S(5)}{nil}, \RecoverExample{(\#1=)}{(nil)} \end{itemize} \section{Potential problems} \label{sec:potential-problems} Note that attempting to recover from syntax errors may lead to apparent success in the sense that the \texttt{read} call returns an object, but this object may not be what the caller wanted. 
For example, recovering from the missing closing \texttt{"} in the following example \begin{Verbatim}[frame=single] (defun foo (x y) "My documentation string (+ x y)) \end{Verbatim} results in \verb!(DEFUN FOO (X Y) "My documentation string<newline> (+ x y))")!, not \verb!(DEFUN FOO (X Y) "My documentation string" (+ x y))!.
{ "alphanum_fraction": 0.7499330656, "avg_line_length": 41.043956044, "ext": "tex", "hexsha": "f6847b439a627b1733c01584254706aba804c6d2", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-06-09T22:09:54.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-09T22:09:54.000Z", "max_forks_repo_head_hexsha": "fa652c5d9750c4cbdc43082a3e07243bd2e265e4", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "clasp-developers/Eclector", "max_forks_repo_path": "documentation/chap-recovering-from-errors.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fa652c5d9750c4cbdc43082a3e07243bd2e265e4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "clasp-developers/Eclector", "max_issues_repo_path": "documentation/chap-recovering-from-errors.tex", "max_line_length": 81, "max_stars_count": 1, "max_stars_repo_head_hexsha": "fa652c5d9750c4cbdc43082a3e07243bd2e265e4", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "clasp-developers/Eclector", "max_stars_repo_path": "documentation/chap-recovering-from-errors.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-03T04:16:00.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-03T04:16:00.000Z", "num_tokens": 1022, "size": 3735 }
\documentclass[11pt]{article} \newcommand{\blind}{1} % DON'T change margins - should be 1 inch all around. \addtolength{\oddsidemargin}{-.5in}% \addtolength{\evensidemargin}{-1in}% \addtolength{\textwidth}{1in}% \addtolength{\textheight}{1.7in}% \addtolength{\topmargin}{-1in}% \usepackage{amsmath, amsthm, amssymb, bbm, bm} \usepackage[ruled]{algorithm2e} \newtheorem{lemma}{Lemma} \newtheorem{theorem}{Theorem} \newtheorem{corollary}{Corollary} \newtheorem{proposition}{Proposition} \theoremstyle{definition} \newtheorem{example}{Example} \newtheorem{assumption}{Assumption} %\usepackage[margin=1in]{geometry} \usepackage{graphicx} \usepackage{subcaption} \graphicspath{{../figures/}} %\setcitestyle{authoryear} \usepackage[]{natbib} \bibliographystyle{aer} \usepackage{setspace} \usepackage[utf8]{inputenc} % allow utf-8 input \usepackage[T1]{fontenc} % use 8-bit T1 fonts \usepackage{hyperref} \hypersetup{colorlinks,citecolor=blue,urlcolor=blue,linkcolor=blue}% hyperlinks \usepackage{url} % simple URL typesetting \usepackage{booktabs} % professional-quality tables \usepackage{amsfonts} % blackboard math symbols \usepackage{nicefrac} % compact symbols for 1/2, etc. \usepackage{microtype} % microtypography \begin{document} %\bibliographystyle{natbib} \def\spacingset#1{\renewcommand{\baselinestretch}% {#1}\small\normalsize} \spacingset{1} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \if1\blind { \title{\bf Learning to Personalize Treatments When Agents Are Strategic} \author{Evan Munro \thanks{ I thank Mohammad Akbarpour, Susan Athey, Anirudha Balasubramanian, Martino Banchio, Alex Frankel, Guido Imbens, Stefan Wager, Bob Wilson, and Kuang Xu for helpful comments and discussions. Replication code and data for the analysis in the paper is available at \url{http://github.com/evanmunro/personalized-policy}.}\hspace{.2cm}\\ Graduate School of Business, Stanford University\\ } \maketitle } \fi \if0\blind { \bigskip \bigskip \bigskip \begin{center} {\LARGE\bf Optimal Policy Targeting with Strategic Agents} \end{center} \medskip } \fi \bigskip \begin{abstract} There is increasing interest in allocating treatments based on observed individual data: examples include targeted marketing, individualized credit offers, and heterogenous pricing. Treatment targeting introduces incentives for individuals to modify their behavior to obtain a better treatment. We show standard CATE-based cutoff rules are sub-optimal when observed covariates are endogenous to the treatment allocation rule. We propose a dynamic experiment that randomizes how treatments are allocated, and converges to the optimal treatment allocation function without parametric assumptions on individual strategic behavior. We prove that the experiment has regret that decays at a linear rate. \end{abstract} \noindent% {\it Keywords:} Design of Experiments, Robustness, Treatment Rules \vfill \newpage \spacingset{1.4} %A key implication for experiment design is that random variation in how treatment assignment depends on observed characteristics is required. \section{Introduction} \label{sec:intro} There is a variety of evidence that treating individuals heterogeneously can improve outcomes compared to a uniform policy. \cite{rossi1996value} estimate a demand model to show that targeting consumers with different coupons depending on their purchase history can improve revenue compared to allocating the same coupon to everyone. 
The increasing collection of individual-level data has increased the feasibility of personalizing treatments in a wide variety of settings. Other examples include allocating credit based on conventional and unconventional data like phone usage \citep{bjorkegren2019behavior}, targeting marketing promotions based on browser activity, and blocking profiles from social media platforms based on posting activity. In many of these settings, personalization of treatments introduces incentives for individuals to change their observed behavior, in order to improve their chances of receiving a better treatment. In the coupon setting, a profit-maximizing seller would like to allocate a coupon only to reluctant buyers, that is, customers who would buy the product only if they receive a coupon. A heterogeneous treatment rule designed to target reluctant buyers, however, may incentivize always-buyers, those who would buy the product with or without the coupon, to change their behavior to mimic reluctant buyers and save on their purchase. As a result, the distribution of observed behavior is dependent on how treatments are allocated. In this paper, we study how to optimally allocate treatments conditional on observed covariates, when agents respond strategically to the treatment rule. We formalize this problem using a potential outcomes model of treatment allocation as a Stackelberg game. The planner announces a treatment rule that a sample of $n$ agents best-respond to. The first result of the paper is that the planner's equilibrium strategy is in general distinct from the cutoff rules that have been shown to be optimal in the literature on allocating treatments without strategic behavior \citep{manski2004statistical, kitagawa2018should, hirano2009asymptotics, kallus2020minimax}. In the absence of strategic behavior, data from a traditional randomized experiment can be used to estimate the conditional average treatment effect for each individual in a sample. Then, the Conditional Empirical Success Rule of \citet{manski2004statistical} assigns treatment with probability one to individuals with a positive Conditional Average Treatment Effect (CATE); this rule is shown to be optimal in \citet{hirano2009asymptotics} for maximizing average outcomes. We show that when there is strategic behavior, the distribution of CATEs becomes endogenous to the treatment rule. So, we define a new version of the CATE that is also conditional on how treatments are allocated, which we call the Strategic Conditional Average Treatment Effect (SCATE). Even under this new definition, a cutoff rule is not optimal. Instead, the optimal rule can allocate the treatment to those with a positive SCATE induced by the rule with probability less than one. Those with negative SCATEs induced by the rule can receive the treatment with probability greater than zero. Adding some randomization to the treatment rule reduces strategic incentives and can result in conditional distributions of treatment effects that lead to more effective targeting than any cutoff rule. Depending on how individuals are strategic, the parameters of the optimal rule can vary greatly. A traditional A/B test that randomizes treatment does not provide sufficient information to identify the optimal rule. The other contribution of the paper is to design a randomized experiment that allows the planner to learn the optimal treatment rule over time without any parametric assumptions on agent strategic behavior.
In order to estimate the optimal rule, we assume that the treatment rule, the potential outcomes function, and the covariate reporting rule for each individual are differentiable and that the objective function is convex. Under these assumptions, a variety of stochastic optimization methods can be used to find the optimal targeting rule. We choose an experiment design based on zero-th order optimization, as in \citet{wager2019experimenting}. At each step, the gradient of the objective function is estimated by using small perturbations to randomize how the treatment depends on observed characteristics, and observing how the objective for each individual is affected by the perturbations. Over time, the planner uses these gradient estimates to take steps towards the optimal parametric targeting rule. In Theorem \ref{thm:regret}, we prove that the regret of the proposed experiment decays at a linear rate. We show that in a simulation of a simple structural model of demand, the iterative experimental approach converges to the optimal coupon allocation policy for a profit-maximizing seller. \paragraph{Related Work} This paper introduces a Stackelberg model that generates potential outcomes in a non-parametric causal setting. A Stackelberg model for strategic behavior in the prediction setting was introduced by \citet{hardt2016strategic} and sparked a recent literature in computer science \citep{dong2018strategic, perdomo2020performative} and in economics. In theoretical analyses, \citet{frankel2019improving} and \citet{ball2020scoring} show that the optimal linear prediction rule underweights manipulable characteristics. In methodological work, \citet{bjorkegren2020manipulation} propose using a randomized experiment that varies the coefficients of a prediction function to estimate a parametric structural model of manipulation. In contrast, we make the agent behavior non-parametric, and use random variation in the parameters of the targeting rule to optimize a more general objective directly. The motivation for this paper is that in many settings, the planner's goal is not necessarily to minimize prediction error, but is instead to allocate an intervention to optimize some outcome, like profit or platform engagement. Predictions are one kind of intervention \citep{miller2020strategic}, but this is the first paper that considers how strategic behavior affects more general decision rules that allocate a binary treatment in a causal setting. In this setting, approximation algorithms that are popular in the prediction literature, such as repeated risk minimization \citep{brown2020performative}, are not feasible, and new analyses are required. A contribution of this paper is to bridge the gap between the literature on strategic classification and the literature on policy learning \citep{manski2004statistical, kitagawa2018should, hirano2009asymptotics, kallus2020minimax, athey2020policy, viviano2019policy}, which has previously assumed that the distribution of observed data is fixed. There is a growing literature that shows that a traditional A/B test is not sufficient to estimate causal quantities of interest under spillover or equilibrium effects, and designs new forms of experiments for these more complex settings \citep{ vazquez2017identification, viviano2020experimental, munro2021treatment}. This paper shows that under strategic effects, new experiment designs are also needed to target policy effectively.
%The dynamic experiment design in this paper is related to the approach in \citet{dong2018strategic}, which uses a different form of gradient-free optimization \citep{spall2005introduction, duchi2015optimal} to estimate the optimal prediction rule in a bandit setting with a single agent arriving at each step. The experiment design is closest to \citet{wager2019experimenting} who use local randomization and gradient steps to find the optimal uniform price in a market with equilibrium effects. % This example shows how the method in this paper applies to a variety of more general optimal targeting problems beyond the strategic classification setting. % Empirical risk minimization approaches, which are standard in industry, assume that observed characteristics are exogeneous, and fail to model that the distribution of observed data will shift in response to changes in the treatment allocation function. %New methodology is required for these problems since standard statistical approaches used to estimate treatment allocation rules fall short. \citet{frankel2019improving} and \citet{ball2020scoring} prove that empirical risk minimization approaches estimate a sub-optimal prediction rule when agents are strategic. We show this result also applies to the treatment allocation problem when agents are strategic. For the proof of this result, we rely on non-parametric monotonicity assumptions on individual strategic behavior rather than parametric assumptions used in the proofs of the existing results. %This paper formalizes a type of Lucas Critique \citep{lucas1976econometric} for causal inference. An online seller may be interested in exploiting correlation between browsing history and willingness to pay for a consumer good. However, switching from a uniform pricing policy to one that varies pricing for different individuals based on their browsing history introduces incentives for individuals to mimic the browsing history of an individual with low willingness to pay, and receive a lower price. As a result, a policy that switches from uniform to heterogeneous pricing may not raise as much revenue as expected, unless the impact of strategic responses on price discrimination is taken into account. \section{Models of Treatment Allocation} \label{sec:model} \subsection{Treatment Allocation with Exogenous Covariates} Each of the $i = 1, \ldots, n$ individuals has exogenous characteristics and potential outcomes jointly drawn from some unknown distribution: $X_i, \{Y_i(1), Y_i(0) \} \sim \mathcal G$. $X_i \in \mathcal X$ is discrete, with $| \mathcal X| = d$. The treatment allocation proceeds as follows: \begin{enumerate} \item The planner specifies $\delta(x) = Pr(W_i = 1 | X_i = x)$ for each $x \in \mathcal X$. \item A binary treatment $W_i$ is sampled from $\mbox{Bernoulli}(\delta_i)$, where $\delta_i = \delta(X_i)$. \item The observed outcome is $Y_i = Y_i(W_i)$. \end{enumerate} Since $X_i$ is discrete, we can represent the function $\delta: \mathcal X \rightarrow [0, 1]$ as the $d$-length vector $\bm \delta$. The planner would like to choose $\bm \delta \in [0, 1]^d$ to maximize expected outcomes, $\mathbb E[Y_i(W_i)]$. Let $\tau(x) = \mathbb E[Y_i(1) - Y_i(0) | X_i = x]$ be the CATE, the average treatment effect among individuals who have covariate value $x$. Rewriting the objective shows that $\delta(x)$ enters it linearly, and as a result we can use the CATE to construct a simple optimal treatment rule that takes a cutoff form.
Using Bayes' rule, \begin{equation} \label{eqn:exog} \mathbb E[Y_i(W_i)] = \sum \limits_{x \in \mathcal X} f(x) \big( \delta(x) \mathbb E[Y_i(1) | X_i = x] + ( 1- \delta(x)) \mathbb E[Y_i(0) | X_i = x ] \big), \end{equation} where $f(x) = Pr(X_i = x)$. With exogenous covariates, $\delta(x)$ enters the objective linearly. As a result, the optimal rule takes a simple cutoff form described in Proposition \ref{prop:cesgood}. \begin{proposition} \label{prop:cesgood} Assume that $f(x) >0$ for all $x \in \mathcal X$. The policy that maximizes expected outcomes is defined by $\bm \delta^0(x) = \mathbbm{1} ( \tau(x) > 0)$ for $x \in \mathcal{X}$. \end{proposition} In order to estimate this rule based on a finite sample of data, we require only an estimate of $\tau(x)$, which can be constructed using data from a Bernoulli randomized experiment, \[ \hat \tau(x) = \frac{\sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 1) Y_i }{ \sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 1) } - \frac{\sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 0) Y_i }{ \sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 0) }. \] $\hat {\bm \delta}^0(x) = \mathbbm {1} ( \hat \tau(x) >0)$ for $x \in \mathcal {X}$ is the Conditional Empirical Success Rule of \citet{manski2004statistical}. In the next section, we will show that, under strategic behavior, $\delta(x)$ no longer enters the planner's objective linearly, leading to a more complex structure for the optimal rule. \subsection{Treatment Allocation with Strategic Agents} We are still in a setting with $i \in \{ 1, \ldots, n \}$ individuals and $X_i \in \mathcal X$ discrete, with $| \mathcal X| = d$. The covariate for individual $i$, $X_i$, is no longer exogenous to the treatment allocation rule, however. We have $X_i = X_i(\bm \delta)$, where the function $X_i: [0, 1]^d \rightarrow \mathcal X$ determines how agents are strategic in response to different treatment rules. Both the potential covariate function and potential outcomes are jointly drawn from some unknown distribution $X_i(\cdot), \{Y_i(1), Y_i(0) \} \sim \mathcal G$. The treatment allocation procedure can now be described as a Stackelberg game: \begin{enumerate} \item The planner specifies $\delta(x) = Pr(W_i = 1 | X_i = x)$ for each $x \in \mathcal X$. \item For $ i \in [n]$, agent $i$ reports covariates $X_i(\delta) \in \mathcal X$. In many settings, we can interpret the potential covariates as the result of utility maximization of randomly drawn $U_i(\cdot)$: \[ X_i(\delta) = \arg \max_{x} \delta(x) U_i(x, 1) + (1 - \delta(x)) U_i(x, 0) \] \item For $i \in [n]$, $W_i$ is sampled from $\mbox{Bernoulli}(\delta(X_i))$. \item The outcome $Y_i = Y_i(W_i)$ is observed. \end{enumerate} In this more complex environment, the planner would still like to maximize expected outcomes, $\Pi(\bm \delta) = \mathbb E[Y_i(W_i)]$, so that the optimal rule is defined as \[ \bm \delta^* = \arg \max_{ \bm \delta \in [0, 1]^d} \mathbb E[Y_i(W_i)]. \] With strategic behavior, the CATE is endogenous to the treatment rule. The correlation between the observed data $X_i$ and the individual treatment effect $Y_i(1) - Y_i(0)$ changes depending on the agent's strategy for reporting $X_i(\bm \delta)$. Thus, we need to define a separate CATE for each possible treatment rule: \[ \tau(x, \bm \delta) = \mathbb E[Y_i(1) - Y_i(0)| X_i(\bm \delta ) = x ]. \] A natural extension of the cutoff rule from Proposition \ref{prop:cesgood} to the strategic agents setting is one that meets the condition \begin{equation} \label{eq:cutoff} \delta^c(x) = \mathbbm{1} ( \tau(x, \bm \delta^c ) >0 ).
\end{equation} This rule allocates treatments only to individuals who have a positive CATE, where the CATE is defined based on the distribution of $X_i$ induced by $\bm \delta^c$. Depending on the type of strategic behavior, we will show that the optimal rule sometimes does, and sometimes does not, have this form. With strategic behavior, the dependence of the objective on $\bm \delta$ is much more complex than in the exogenous setting of Equation \ref{eqn:exog}. Let $f(x, \bm \delta) = Pr(X_i(\delta) = x)$. We can use Bayes' rule to expand $\Pi(\bm \delta)$ as \[ \Pi(\bm \delta) = \sum \limits_{x \in \mathcal X} f(x, \bm \delta) \big ( \delta(x) \mu(1, x, \bm \delta) + ( 1- \delta(x)) \mu(0, x, \bm \delta) \big ), \] where $\mu(w, x, \bm \delta) = \mathbb E[Y_i(w) | X_i(\delta) = x]$. The treatment rule now enters into the objective in a non-linear way, and it is no longer as straightforward to derive the form of the optimal rule. We begin by providing, in Theorem \ref{thm:first}, conditions that the optimal rule must satisfy. \begin{theorem} \label{thm:first} Let $\bar{\mu}(x, \bm \delta) = \delta(x) \mu(1, x, \bm \delta) + ( 1- \delta(x)) \mu(0, x, \bm \delta)$. The optimal treatment allocation rule $\bm \delta^*$ meets the following conditions: There exist $d$-length vectors $\bm \lambda^1 \geq 0$ and $\bm \lambda^0 \geq 0$ such that for each $x \in \mathcal X$, \begin{equation} \label{eqn:strat} \begin{split} & f(x, \bm \delta^*) [ \tau(x, \bm \delta^*) ] \\ & + \sum \limits_{z \in \mathcal X} \left [ \frac{\partial f(z, \bm \delta^*)}{\partial \delta^*(x)} \bar{\mu}(z, \bm \delta^*) + f(z, \bm \delta^*) \left (\delta^*(z) \frac{\partial \mu(1, z, \bm \delta^*) }{\partial \delta^*(x)} +( 1- \delta^*(z)) \frac{\partial \mu(0, z, \bm \delta^*) }{\partial \delta^*(x)} \right) \right] \\& -\lambda^1_x + \lambda^0_x =0 \end{split} \end{equation} $( \delta^*(x) - 1) \lambda^1_x = 0,$ and $\delta^*(x) \lambda^0_x = 0$, and $0 \leq \bm \delta^* \leq 1$. If $\Pi(\bm \delta)$ is concave, then any treatment rule meeting these conditions is $\bm \delta^*$, the global maximizer of $\Pi(\bm \delta)$. \end{theorem} From Equation \ref{eqn:strat} we can define the strategic effect at a given value of $\bm \delta \in [0, 1]^d$ as \[ s(x, \bm \delta) = \sum \limits_{z \in \mathcal X} \left [ \frac{\partial f(z, \bm \delta)}{\partial \delta(x)} \bar{\mu}(z, \bm \delta) + f(z, \bm \delta) \left (\delta(z) \frac{\partial \mu(1, z, \bm \delta) }{\partial \delta(x)} +( 1- \delta(z)) \frac{\partial \mu(0, z, \bm \delta) }{\partial \delta(x)} \right) \right]. \] If there is no strategic behavior, then $s(x, \bm \delta) = 0$ for all $\bm \delta \in [0, 1]^d$ and we are back in the setting of Proposition \ref{prop:cesgood}. \begin{corollary} \label{corr:tilde} Assume that a cutoff rule of the form \[ \delta^c(x) = \mathbbm{1} (\tau(x, \bm \delta^c) >0) \] exists. If there is any $\tilde x \in \mathcal X$ such that $\mbox{sgn}( s(\tilde x, \bm \delta^c)) \neq \mbox{sgn} ( \tau(\tilde x, \bm \delta^c))$ and $|s (\tilde x, \bm \delta^c)| > | f(\tilde x, \bm \delta^c) \tau(\tilde x, \bm \delta^c) |$, then $\bm \delta^* \neq \bm \delta^c$ and the optimal rule does not have a cutoff form. If no such $\tilde x$ exists, and $\Pi(\bm \delta)$ is concave, then the cutoff rule is optimal even in the presence of strategic behavior.
\end{corollary} \begin{proof} \end{proof} From the optimality conditions, we can no longer guarantee that a cutoff rule is optimal if the strategic effect $s(x, \bm \delta^c)$ is large enough and of opposite sign to the CATE $\tau(x, \bm \delta^c)$. Under the conditions identified in Corollary \ref{corr:tilde}, the optimal rule is an interior solution where, for certain values of $x \in \mathcal X$, we induce some randomization, with $0 < \delta^*(x) < 1$. When $X_i$ can take many possible values, the form of $s(x, \bm \delta^c)$ is complex and it is not immediately clear what kind of strategic behavior results in a rule with some randomization rather than a cutoff rule. In the binary case, where $X_i \in \{L, H\}$, the form of $s(x, \bm \delta)$ simplifies, and we can provide intuitive conditions under which $s(x, \bm \delta)$ is guaranteed to have the same sign as $\tau(x, \bm \delta)$, so that the cutoff rule is optimal, and conditions under which it has the opposite sign, so that a cutoff rule may not be optimal. \begin{corollary} In the binary setting, with $X_i \in \{L, H\}$, assume that there exists a cutoff rule $\bm \delta^c$ with $\delta^c(L)=0$ and $\tau(L, \bm \delta^c) <0$, and $\delta^c(H) = 1$ and $\tau(H, \bm \delta^c) >0$. We define the following conditions that can determine whether strategic behavior induced by treating individuals heterogeneously is benign or harmful to the planner's objective. \begin{enumerate} \item Response of conditional outcomes to increasing $\delta^c(H)$, increasing discrimination: \begin{enumerate} \item $\frac{\partial \mu(1, H, \bm \delta^c) }{ \partial \delta^c(H)} < 0$ and $\frac{\partial \mu(0, L, \bm \delta^c) }{ \partial \delta^c(H)} < 0$ \item $\frac{\partial \mu(1, H, \bm \delta^c) }{ \partial \delta^c(H)} > 0$ and $\frac{\partial \mu(0, L, \bm \delta^c) }{ \partial \delta^c(H)} > 0$ \end{enumerate} \item Response of conditional outcomes to increasing $\delta^c(L)$, decreasing discrimination: \begin{enumerate} \item $\frac{\partial \mu(1, H, \bm \delta^c) }{ \partial \delta^c(L)} > 0$ and $\frac{\partial \mu(0, L, \bm \delta^c) }{ \partial \delta^c(L)} > 0$ \item $\frac{\partial \mu(1, H, \bm \delta^c) }{ \partial \delta^c(L)} < 0$ and $\frac{\partial \mu(0, L, \bm \delta^c) }{ \partial \delta^c(L)} < 0$ \end{enumerate} \item Response of marginal distribution of covariates: \[ \mbox{sgn}\left ( \frac{\partial f(H, \bm \delta^c) }{\partial \delta^c(H) } \right ) \neq \mbox {sgn} \left ( \mu(1, x, \bm \delta^c) - \mu(0, x, \bm \delta^c) \right ) \] \end{enumerate} If 1a and 3 hold, then $s(H, \bm \delta^c) < 0 $. If 2a and 3 hold, then $s(L, \bm \delta^c) <0$. As a result, $\bm \delta^c$ may not be optimal. If 1b and 2b hold, 3 does not hold, and $\Pi(\bm \delta)$ is concave, then $\bm \delta^c$ is the optimal rule $\bm \delta^*$. \end{corollary} \begin{proof} \end{proof} This corollary allows us to evaluate what kind of strategic behavior can result in a treatment rule that is distinct from the cutoff rules that are always optimal in the exogenous setting. If, in response to increasing discrimination on the basis of $X_i$, a group of individuals with an increasingly positive CATE reports $H$, then strategic behavior is beneficial, and when Condition 3 does not hold, a cutoff rule is guaranteed to be optimal. If, in response to increasing discrimination, a group of individuals with an increasingly negative CATE reports $H$ rather than $L$, then Conditions 1a and 2a will hold.
Under this kind of strategic behavior, a sufficient condition under which Condition 3 also holds is that individuals with a positive Individual Treatment Effect (ITE), despite benefiting from the treatment, still have worse expected outcomes when treated than those with a negative ITE have when untreated: $\mathbb E[Y_i(1) | Y_i(1) > Y_i(0) ] < \mathbb E[Y_i(0) | Y_i(1) < Y_i(0)]$. We can next show that, under a simple structural model of coupon allocation and product demand with strategic behavior, these conditions are met, and as a result the optimal rule does not take the form of a cutoff rule. \begin{example} {\textbf {Coupon Policy Model} } \label{ex:coupon} The planner is a profit-maximizing online store that would like to target a discount coupon to some of the customers who have added a product to their cart. Customers have an unobserved type $\theta_i \sim \mbox{Bernoulli}(0.5)$. Customers with $\theta_i =0$ will buy the product with or without the coupon. Customers with $\theta_i = 1$ complete the purchase only if they receive a coupon, and then with 75\% probability. The profit-maximizing coupon policy in this setting is to send a coupon only to customers with $\theta_i = 1$. However, rather than observing $\theta_i$ directly, the store instead observes $X_i \in \mathcal X$, which is behavior that may be correlated with their unobserved type. In our model, customers have a preferred behavior $Z_i$, which they report when treatments are assigned uniformly and there is no incentive to engage in strategic behavior. $Z_i$ is tightly correlated with $\theta_i$. However, when coupon allocation is heterogeneous, the distribution of $X_i$ can shift from that of $Z_i$, as those with $\theta_i = 0$ attempt to mimic those with $\theta_i = 1$ in order to receive a coupon. This strategic behavior is influenced by an individual-specific cost of behavior change $C_i \sim \mbox{Uniform}(0, 10)$. The customers value the coupon at \$5. The resulting agent utility function is \[ U_i(x) = 5\cdot \delta(x; \beta) - C_i (x - Z_i)^2. \] The reporting function is \[ X_i(\beta) = \arg \max_{x} U_i(x). \] The store's profit per product is \$10 for a purchase without the coupon and \$5 with it. The resulting potential outcomes are defined as the potential profit for each individual as a function of whether or not they receive a coupon: \[ Y_i(W_i) = 5 \cdot (0.75 \theta_i + ( 1- \theta_i)) W_i+ 10 \cdot ( 1 - \theta_i) (1 - W_i). \] The optimal policy is the coupon allocation procedure that maximizes profit: \[ \beta^* = \arg \min_{\beta} \mathbb E_{\beta} \Big [ - Y_i \Big ], \] where $\mathbb E_{\beta}$ denotes the expectation over the distribution of reported covariates, treatments, and outcomes induced by the rule with parameter $\beta$. We consider two variations on this model, one with discrete and one with continuous covariates. In the discrete setting, $Z_i = \theta_i$ and $X_i \in \{0, 1\}$. Strategic behavior then depends only on the difference between $\delta(1)$ and $\delta(0)$, and lowering $\delta(1)$ is less costly to the planner than raising $\delta(0)$. So, we can normalize $\delta(0) = 0$, and the parameter the planner optimizes is $\beta = \delta(1)$. With continuous covariates, we have $Z_i \sim \mbox{Normal}(0, 1)$, $\theta_i = \mathbbm{1}(Z_i > 0)$, and $X_i \in \mathbb R$. We restrict the treatment allocation rule to be a logit function, \[ \delta(X_i; \beta) = \frac{1}{1 + e^{-X_i \beta}}. \] The goal is to find the logit coupon allocation rule that optimizes profit when agents are strategic.
\end{example} In the prediction setting, the treatment $W_i$ is a classification label that does not impact outcomes, so $Y_i(1) = Y_i(0)$. The classification function affects the distribution of covariates reported. The objective is a binary classification loss, such as the cross-entropy loss $r(\hat Y_i, Y_i) = -[Y_i \log (\hat Y_i) + (1 - Y_i) \log(1 - \hat Y_i)]$, where $\hat Y_i = \delta(X_i; \beta)$. In more general settings, such as promotion targeting or credit allocation, the outcome of interest is per-customer profit and the objective is simply to maximize the expected outcome, so $r(\delta(X_i; \beta), Y_i) = - Y_i$. The treatment rule impacts the objective directly through the heterogeneous impacts of treatments on outcomes. It also influences the objective indirectly by affecting the distribution of $X_i$. Certain treatment rules may lead to distributions of $X_i$ that are more or less correlated with the individual treatment effect $\tau_i = Y_i(1) - Y_i(0)$. This affects how easy it is for a planner to distinguish between individuals who have a positive treatment effect and those with a negative treatment effect. The parametric model in Example \ref{ex:coupon} is a special case of the non-parametric model presented above. It is inspired by \citet{rossi1996value}, who show that treating individuals with different coupons can improve on a uniform coupon allocation strategy. We evaluate how strategic behavior can affect the optimal heterogeneous coupon allocation strategy. In the next subsection, we explore how strategic behavior influences the form of the optimal treatment rule in settings like that of Example \ref{ex:coupon}. \subsection{Continuous Covariates} CATE-based cutoff rules have been shown to be optimal under a variety of objectives in the literature on statistical treatment rules and policy learning without strategic behavior. For this section, the objective of the planner is to maximize the expected outcome, and we assume that $X_i \in \mathcal X$ is discrete, so that $\delta(x)$ takes on a finite number of values, and $\bm \beta$ is a vector representing $\delta(x)$ at each possible $x \in \mathcal X$. We use $\beta_x$ and $\delta(x)$ interchangeably. When the treatment rule assigns treatments with a uniform probability, so that $\delta(x) = \pi$ is a constant and does not vary with $x$, there are no incentives to engage in strategic behavior. We can define the conditional average treatment effect that is estimable via a Bernoulli randomized experiment as \[ \tau^0(x) = \mathbb E[Y_i(1) - Y_i(0) | X_i = x, \delta(x) = \pi]. \] In the absence of strategic behavior, the treatment rule that maximizes expected outcomes assigns those with a positive CATE to treatment with probability 1. In practice, $\tau^0(x)$ is not known by the researcher in advance, so the Conditional Empirical Success (CES) rule of \citet{manski2004statistical} uses data from a Bernoulli randomized experiment to estimate CATEs. \begin{proposition} When the distribution of $X_i$ depends on $\bm \beta$, the optimal rule is not necessarily a cutoff rule $\bm \beta^c$, so we can have $J(\bm \beta^c) < J(\bm \beta^*)$, where $J(\bm \beta)$ denotes the planner's expected outcome under the rule $\bm \beta$. Instead, it can involve randomization, such as \[ \delta(x; \beta^*) = \alpha_x \cdot \mathbbm{1}( \tau^{\beta^*}(x) > 0), \] with $\alpha_x < 1$. \label{prop:cutoffbad} \end{proposition} \begin{proof} To prove Proposition \ref{prop:cutoffbad}, we derive $\bm \beta^0$, $\bm \beta^c$, and $\bm \beta^*$ in Example \ref{ex:coupon} with discrete $X_i$.
Under a uniform treatment assignment policy, $\tau^0(1) = 3.75 $ and $\tau^0(0) = -5$. As a result, $\delta(1; \beta^0) = 1$. However, implementing $\beta^0$ induces strategic behavior from those with $\theta_i= 0$. From the utility function in Example \ref{ex:coupon}, we can derive \[ Pr(X_i(\beta) = 1 | \theta_i = 0) = \frac{1}{2} \beta. \] This leads to an expected profit function of $J(\bm \beta) = -\frac{5}{4} \beta^2 + 1.875 \beta + 5$. Under $\bm \beta^0$, half of those with $\theta_i = 0$ report $X_i = 1$. As a result of this strategic behavior, the SCATE under the treatment rule is $\tau^{\beta^0}(1) =0.83 < \tau^0(1)$. In this example, $\bm \beta^0$ satisfies Equation \ref{eq:cutoff}, since even under the strategic behavior induced by $\beta^0$, those with $X_i=1$ have a positive SCATE. The expected profit is $J(\bm \beta^0) = J(\bm \beta^c) = 5.625$. Taking the derivative of the concave $J (\bm \beta)$ with respect to $\beta$, we find that $\bm \beta^* = 0.75$. The optimal rule leads to $J(\bm \beta^*) = 5.70 > J (\bm \beta^0) = J(\bm \beta^c)$. \end{proof} In the discrete setting, the optimal rule assigns a group with a positive average treatment effect to the treatment with less than 100\% probability. A natural question is how this result extends to the continuous covariate setting. We simulate the structural model of Example \ref{ex:coupon} when the agents report a continuous covariate, and generate the plot in Figure \ref{fig:contmodel}. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{nonstrategic} \caption{CES Rule without Strategic Behavior} \label{fig:nonstrat} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{strategic_ces} \caption{CES Rule with Strategic Behavior} \label{fig:stratces} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{strategic_optimal.pdf} \caption{Optimal Rule with Strategic Behavior} \label{fig:stratopt} \end{subfigure} \caption{These figures plot the probability of being assigned a coupon conditional on reporting $X_i$ from -3 to 3 under different scenarios. The reported $X_i$ for each individual, colored by their SCATE, is plotted using a jitter underneath the allocation rules. \label{fig:contmodel} } \end{figure} Without strategic behavior, in Figure \ref{fig:nonstrat}, the optimal logit function is effectively a cutoff function at $X_i = 0$. Everyone who is assigned to treatment has a positive Individual Treatment Effect (ITE), and the average profit per agent is \$7.00. If this cutoff function is implemented in a population with strategic behavior, however, as in Figure \ref{fig:stratces}, then the distribution of $X_i$ shifts. Those with $\theta_i = 0$ but with inherent behavior $Z_i$ that is close to zero will shift their behavior to cross the threshold and receive the valuable coupon. As a result, the profit drops to below \$6.00 per agent. Taking into account the strategic behavior in Figure \ref{fig:stratopt}, the optimal logit function is no longer a cutoff rule. Instead, there is a fuzzy region where treatments are assigned with nonzero probability to those with negative $X_i$ and with probability below one to those with positive $X_i$.
There is still some strategic behavior induced (some agents with a negative CATE report $X_i >0$) and some individuals with a negative ITE receive treatment, but the average profit of $\$6.00$ per agent is an improvement over the cutoff rule under strategic behavior, which earns $\$5.80$ per agent. We can conclude from our two examples that the optimal rule in the strategic setting may assign the treatment with less than 100\% probability to groups of agents with a positive conditional average treatment effect under the rule, and with more than 0\% probability to those with a negative SCATE. With strategic agents, the planner has to take into account how the targeting rule impacts the distribution of SCATEs. A rule that assigns treatment with 100\% probability to groups with a positive SCATE may induce a distribution where targeting is far less effective than one that commits to treating individuals in a more uniform way. %An important implication of this proposition is that in the setting with strategic agents, the prediction and optimization problem can't be decoupled. In a decoupled problem the planner would first learn the optimal estimator of the CATE, taking into account strategic behavior. Following the results of \citet{frankel2019improving}, this may result in underweighting characteristics that are susceptible to manipulation when estimating the CATE. Knowing that we can no longer use a Bernoulli randomized experiment to estimate treatment rules when agents are strategic, our next goal is to develop estimation methods for $\bm \beta^*$. In the next section, we show how estimating $\bm \beta^*$ is a type of stochastic zero-th order optimization, and design experiments that recover $\bm \beta^*$ in practice without parametric assumptions on how the distribution of $X_i$ responds to $\bm \beta$. \section{Experiment Design} \label{sec:est} The planner's optimization problem introduced in the previous section, $\bm \beta^* = \arg \min_{\bm \beta} R(\bm \beta)$, where $R(\bm \beta) = \mathbb E[r(\delta(X_i; \bm \beta), Y_i)]$ denotes the expected risk, is without further restrictions a non-convex optimization problem that can be NP-hard. Furthermore, the planner does not observe $R(\bm \beta)$ directly in finite samples, but can compute noisy evaluations, for example $ R_n(\bm \beta) = \frac 1n \sum \limits_{i=1}^n \mathbbm{1}(\bm \beta_i = \bm \beta) r(\delta(X_i; \bm \beta_i), Y_i)$. In order to make some progress, we need to restrict $R(\bm \beta)$. First, we require that it is continuously differentiable in $\bm \beta$. \begin{assumption} \label{ass:diff} $\delta(x; \beta)$ is continuously differentiable in $\bm \beta$ and the risk $r(\delta, y)$ is continuously differentiable in $y$ and in $\delta$. Both $X_i$ and $Y_i$ are bounded. If $X_i$ is continuous, then each $X_i(\beta)$ is continuously differentiable in $\beta$ and $\delta(x; \beta)$ is continuously differentiable in $x$. If $X_i$ is discrete, then $Pr(X_i = x)$ is continuously differentiable in $\beta$ for each $x \in \mathcal X$. \end{assumption} The planner specifies $\delta(x; \beta)$ and $r(\delta, y)$. The assumption on the strategic behavior is not verifiable, but requires that the distribution of $X_i$ varies smoothly with changes in $\beta$. \begin{proposition} Under Assumption \ref{ass:diff}, $R(\bm \beta)$ is continuously differentiable. \end{proposition} Next, we assume that the planner observes the Stackelberg game introduced in the previous section repeatedly over time. At each time $t = 1, \ldots, T$, a batch of $n$ agents arrives and is treated by the planner.
The agents' decision problem is still static, so we ignore dynamic considerations if an agent arrives repeatedly over time. In the coupon example, we can think of the batch of agents that arrive at each time $t$ as the customers who arrive to the seller's website each day to potentially purchase a product. If we can restrict the candidate range of $\bm \beta$ to a compact set, continuous differentiability implies that $R(\bm \beta)$ is Lipschitz, and it is possible to find a near-optimal point by grid search. This strategy would lead to a simple experiment design, where the planner partitions the sample of individuals that arrive at each time step into groups, each of which would randomly receive a different value of $\beta$. As the sample size and the number of time steps grow large, $R(\bm \beta)$ can be estimated for many different values of $\bm \beta$, and the one that leads to the lowest risk can be chosen. This strategy, however, requires searching over an exponential number of candidate values of $\bm \beta$, and would not perform well in finite samples. Instead, we can take advantage of the differentiability of $R(\bm \beta)$ to take gradient steps towards an optimal point. In order to have global guarantees on the performance of stochastic gradient-based methods, we need to assume convexity of $R(\bm \beta)$. This assumption is not possible to verify in advance, since it depends on the unknown form of strategic behavior. \begin{assumption} \label{ass:convex} $R(\bm \beta)$ is $\sigma$-strongly convex. \end{assumption} Even if the objective is not convex, the stochastic algorithm proposed can still lead to a good choice of a treatment rule in the presence of strategic behavior. Although we do not describe them in detail in this paper, weaker guarantees, such as convergence of gradient descent to a local minimum, are available \citep{lee2016gradient}. The next issue that we encounter is that the gradient $\nabla R(\bm \beta)$ cannot be computed from a sample of data without some variation in $\bm \beta$. When $X_i$ is a continuous-valued scalar, \[ \nabla R(\bm \beta) = \mathbb E \Big [ \big( \frac{\partial r(\pi_i, Y_i)}{\partial \delta} + \frac{\partial r(\pi_i, Y_i)}{\partial y} [Y_i(1) - Y_i(0)] \big) \big( \frac{\partial \delta(X_i; \beta)}{ \partial X_i} \frac{\partial X_i(\beta)}{\partial \beta} + \frac{\partial \delta(X_i; \beta)}{\partial \beta} \big)\Big ], \] where $\pi_i = \delta(X_i; \beta)$. Note that we exchange the derivative and the expectation using the dominated convergence theorem. Under Assumption \ref{ass:convex}, $\bm \beta^*$ is unique, so it is the unique solution to $\nabla R(\bm \beta^*) = 0 $. We do not observe enough data, however, to set an estimated version of this derivative to zero directly. The derivatives of $r(\cdot)$ and $\delta(\cdot)$ are known by the planner, since the planner specifies both functions. However, both the individual treatment effect $Y_i(1) - Y_i(0)$ and the derivative of $X_i(\bm \beta)$ are unknown to the planner. Instead, we can solve for $\bm \beta^*$ by estimating the gradient $\nabla R(\bm \beta)$ directly and taking gradient steps toward the optimal policy over time. We start from an initial naive estimate $\hat {\bm \beta}^0$, and estimate the gradient directly by perturbing $\bm \beta$ in a zero-mean way and observing how the individual risk $R_i = r(\delta(\bm X_i; \beta), Y_i)$ correlates with the perturbations of $\bm \beta$. \citet{wager2019experimenting} call this type of experiment a local experiment.
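To fix ideas before the formal statement, the following sketch implements one batch of such a local experiment in Python. It is only an illustrative sketch, not the paper's replication code: \texttt{risk\_fn} is a hypothetical stand-in for the unobserved mapping from an announced parameter vector to an agent's realized loss (strategic reporting, treatment draw, and outcome), and the batch size, perturbation size, and step size are arbitrary choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gradient_step(beta_hat, risk_fn, n=2000, h=0.1, eta=0.5, t=1):
    """One batch of the local experiment: perturb the announced rule,
    observe per-agent losses, and regress losses on the perturbations."""
    k = beta_hat.shape[0]
    eps = rng.choice([-1.0, 1.0], size=(n, k))   # Rademacher perturbations
    Q = h * eps                                  # n x k matrix of perturbations
    # agent i best-responds to the rule with parameter beta_hat + Q[i]
    R = np.array([risk_fn(beta_hat + Q[i]) for i in range(n)])
    # OLS of R on Q approximates the gradient of the expected loss
    gamma_hat = np.linalg.solve(Q.T @ Q, Q.T @ R)
    return beta_hat - 2.0 * eta * gamma_hat / (t + 1)
\end{verbatim}

Iterating this update over batches, with the output of one step fed back in as \texttt{beta\_hat} for the next, mirrors the update $\hat{\bm \beta}^{t} = \hat{\bm \beta}^{t-1} - 2\eta \hat{\bm \Gamma}^t/(t+1)$ used in the algorithm below.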
We describe it formally in Algorithm \ref{algo:exp}. \begin{algorithm}[!ht] \caption{Dynamic Experiment for Optimizing Treatment Rules} \label{algo:exp} \KwIn{Initial estimate $\hat {\bm \beta}^0$, sample size $n$, step size $\eta$, perturbation $h$, and steps $T$} \KwOut{Updated estimate $\hat {\bm \beta}^T$} $t =1 $ ; $K = \mbox{dim}{(\hat {\bm \beta}^0)}$\; \While{$t \leq T $} { New batch of $n$ agents arrive\; \For{$i \in \{1, \ldots, n\}$}{ Sample $\bm \epsilon^t_i$ randomly from $\{-1, 1\}^K$ \; Announce $\bm \beta_i = \hat {\bm \beta}^{t-1} + h \bm \epsilon^t_i $ \; Agent reports $\bm X_i = X_i(\bm \beta_i)$\; Sample $W_i$ from $\mbox{Bernoulli}(\pi_i)$ where $\pi_i = \delta(\bm X_i; \bm \beta_i)$\; Agent reports outcome $Y_i = Y(W_i)$ \; Calculate loss $R_i = r(\pi_i, Y_i)$\; } $\bm Q_t = h \epsilon^t$ is the $n \times K$ matrix of perturbations\; $\bm R$ is the $n$-length vector of risk evaluations\; Run OLS of $\bm R$ on $\bm Q_t$: $\hat {\bm \Gamma}^t = (\bm Q_t' \bm Q_t)^{-1}(\bm Q_t' \bm R) $ \; $\hat {\bm \beta}^{t} = \hat {\bm \beta}^{t-1} - 2 \eta \frac{\hat {\bm \Gamma}^t} {t+1}$\; $t \gets t+1 $\; } \Return{$\hat {\bm \beta}^T$} \end{algorithm} This algorithm estimates $\hat {\bm \beta}^T$ in $T$ steps without relying on any functional form assumptions on the strategic behavior $X_i(\bm \beta)$ or potential outcomes $Y_i(W_i)$. In contrast to traditional experiment approaches, the design perturbs $\bm \beta_i$, rather than $W_i$. The first result is that the estimate $\hat {\bm \Gamma}^t$ from each step of the perturbation experiment converges in probability to the true gradient $\nabla R(\bm \beta)$ as the sample size in each step grows large. This result relies on the perturbation size going to zero as $n \rightarrow \infty$ at a sufficiently slow rate. \begin{lemma} Fix some $\hat{\bm \beta}^t$. If the perturbation size $h = c n^{-\alpha}$ for $0 < \alpha < 0.5$, then $\hat {\bm \Gamma}^t$ from Algorithm \ref{algo:exp} converges to the $k$-dimensional gradient of the objective: \[ \lim_{n \rightarrow \infty} \mathbb P \left( \left |\hat {\bm \Gamma}^t - \nabla R( \hat {\bm \beta}^t) \right| > \epsilon \right) =0 \] for any $\epsilon>0$. \label{thm:consist} \end{lemma} The proof is in Appendix \ref{sec:proofs}. The average regret of a policy in place for $T$ time periods is the average difference in the objective function between the realized policy path and the policy that maximizes the average objective value over the $T$ time periods. In Example \ref{ex:coupon}, the average regret of Algorithm \ref{algo:exp} corresponds to the average loss in profit of a coupon allocation policy that learns through an iterative experiment compared to the profit of a planner with full information who implements the optimal allocation immediately in the first step. \begin{theorem} \label{thm:regret} Under Assumption \ref{ass:convex}, also assume the norm of the gradient of $R(\bm \beta)$ is bounded by $M$, so $||\nabla R(\bm \beta)||_2 \leq M$, and the step size $\eta > \sigma^{-1}$. If a planner runs Algorithm \ref{algo:exp} for $T$ time periods, then we have that the regret decays at rate $O(1/t)$, so for any $\bm \beta \in \mathbb R^k$: \[ \lim_{n \rightarrow \infty} P\left[ \frac{1}{T} \sum \limits_{t=1}^T t(R (\hat {\bm \beta}^t ) - R (\bm \beta)) \leq \frac{\eta M^2}{2} \right] = 1 \] \end{theorem} A corollary of this is that the the procedure converges to $\bm \beta^*$ in probability as the sample size at each time step grows large. 
\begin{corollary} \label{cor:convg} Under the conditions of Theorem \ref{thm:regret}, \[ \lim_{n \rightarrow \infty} P \left [ || \bm \beta^* - \hat {\bm \beta}^T||^2_2 \leq \frac{2 \eta M^2}{\sigma T} \right] = 1 \] \end{corollary} The proofs of Theorem \ref{thm:regret} and Corollary \ref{cor:convg} are in Appendix \ref{sec:proofs}. Lemma \ref{thm:consist} shows that the gradient estimate is consistent. The proof of Theorem \ref{thm:regret} then applies results for convergence of gradient descent when a consistent, but not necessarily unbiased, gradient oracle is available. The combination of these two results indicates that the suggested dynamic experiment successfully recovers $\bm \beta^*$. One benefit of the experiment design that we choose is that the asymptotic regret rate does not depend on the dimension of $\bm \beta$. Standard techniques for zero-th order optimization assume that only one noisy function evaluation is available at each step, which leads to regret rates that depend on the dimension of the parameter space, as in \citet{dong2018strategic}. In our setting, we take advantage of cross-sectional information to average over multiple noisy evaluations of the risk, which leads to rates that are independent of the dimension of $\bm \beta$. A key assumption in the model introduced in Section \ref{sec:model} is that each individual knows and reacts to the treatment rule $\delta(X_i; \bm \beta_i)$ that they are assigned by the time the outcome $Y_i$ is collected. In some settings, it is feasible to explain to agents how treatments are assigned, and to assign slightly different treatment rules to different agents. If, due to regulation or other constraints, this is not feasible, then we can no longer take advantage of cross-sectional variation. Different forms of zero-th order optimization would be needed, such as assigning the same rule to all individuals at each time step, and computing gradients across time steps, which would lead to slower regret rates. \section{Simulations} \label{sec:sim} We next simulate data based on the structural model in Example \ref{ex:coupon} for 200 periods with a sample of 2000 individuals in each period. We use this simulation to examine the per-period profit of Algorithm \ref{algo:exp} compared to the benchmark Conditional Empirical Success rule and to the optimal rule computed by solving the structural model ex-ante. For the CES rule, we estimate CATEs under uniform treatment assignment using an A/B test, and from then on assign treatment to individuals with a positive estimated CATE. For the dynamic experiment, we start with a uniform assignment rule, and then learn the optimal treatment rule using estimated gradient steps over time. The CES rule discriminates too much between different groups, not anticipating that the manipulation that occurs in response to the cutoff rule leads to sub-optimal profit. In contrast, although Algorithm \ref{algo:exp} starts from a uniform rule, which is far from optimal, it quickly converges to the profit-optimal coupon allocation function, without requiring any knowledge of the parametric structure of agent behavior. The CES rule leads to noticeably less profit per customer in each period compared to the optimal rule.
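As a rough companion to this exercise, the sketch below simulates the discrete-covariate version of Example \ref{ex:coupon} and runs the perturbation experiment on the scalar parameter $\beta = \delta(1)$. It is a simplified stand-in for the actual replication code rather than a reproduction of it: the discrete model replaces the logit rule used for the figures, reluctant buyers are credited their expected profit of \$3.75 directly, and the batch size, perturbation size, step size, and seed are arbitrary choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def batch_losses(betas):
    """Losses (negative profits) for a batch of agents; agent i faces
    parameter betas[i]. Discrete coupon model with delta(0)=0, delta(1)=beta."""
    n = betas.shape[0]
    theta = rng.integers(0, 2, size=n)        # unobserved type, Bernoulli(0.5)
    cost = rng.uniform(0.0, 10.0, size=n)     # cost of changing behaviour
    # type-0 agents mimic type 1 when the coupon's value 5*beta exceeds their cost
    x = (theta == 1) | (5.0 * betas > cost)
    coupon = rng.random(n) < np.where(x, betas, 0.0)
    profit = np.where(theta == 0,
                      np.where(coupon, 5.0, 10.0),   # always-buyers
                      np.where(coupon, 3.75, 0.0))   # reluctant buyers (expected)
    return -profit

beta, eta, h, n = 0.5, 0.5, 0.2, 20000
for t in range(1, 201):                        # 200 periods
    Q = h * rng.choice([-1.0, 1.0], size=n)    # perturbations of the announced rule
    gamma = (Q @ batch_losses(beta + Q)) / (Q @ Q)   # OLS slope = gradient estimate
    beta = float(np.clip(beta - 2 * eta * gamma / (t + 1), h, 1.0 - h))

print(round(beta, 2))                          # typically drifts toward roughly 0.75
print(-batch_losses(np.full(n, 1.0)).mean(),   # profit under the CES-style rule (beta = 1)
      -batch_losses(np.full(n, beta)).mean())  # profit under the learned rule
\end{verbatim}

The two printed profits mirror the qualitative pattern in the figure below: the cutoff-style rule earns roughly the \$5.63 per customer derived for the discrete example, while the learned rule approaches the optimal value of about \$5.70.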
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{regret.pdf} \caption{Convergence of Dynamic Experiment to Optimal Coupon Allocation Policy} \label{fig:price} \end{figure} \section{Conclusion} When a planner treats individuals in a heterogeneous way based on some observed characteristics of that individual, incentives are introduced for individuals to change their behavior to receive a better treatment. We have shown theoretically and in simulations that this impacts how treatments should be optimally allocated based on observed individual-level data. We propose an iterative method that converges to the optimal treatment assignment function, without making parametric assumptions on the structure of individuals' strategic behavior. The key to the success of this method is the dynamic approach, which randomizes how the treatment depends on observed characteristics rather than randomizing the treatment itself. For future work, there is a wide variety of complex social and economic settings where there is a need for experiments that adjust policy optimally without strict assumptions on the environment. \newpage %\singlespacing \bibliography{sample.bib} \newpage %\onehalfspacing \appendix \section{Theory} \subsection{Proof of Proposition 1} \begin{proof} Using Bayes' rule, we can rewrite the objective as: \[ \Pi(\bm \delta) = \sum \limits_{x \in \mathcal X} \delta(x) \mathbb E[Y_i(1) | X_i = x]f(x) + ( 1- \delta(x)) \mathbb E[Y_i(0) | X_i = x]f(x) \] Taking the derivative with respect to $\delta(x) $, we have \begin{equation} \frac{ \partial J(\bm \beta)}{\partial \beta_x } = \mathbb E[Y_i(1) - Y_i(0) | X_i = x]f(x) = \tau^0(x) f(x). \label{eq:deriv} \end{equation} We can finish via a proof by contradiction. Suppose there is some $\tilde {\bm \beta}$ that is distinct from the rule $\bm \delta^0$ in Proposition \ref{prop:cesgood} but maximizes the objective $J(\bm \beta)$. If it is distinct, then it must have $\tilde \beta_x < 1$ for some $\tau^0(x) >0$ or $\tilde \beta_y >0$ for some $\tau^0(y) <0$. But, by the derivative in Equation \ref{eq:deriv}, since $f(x) >0$, we can either increase $\tilde \beta_x$ by some small $\epsilon$ or decrease $\tilde \beta_y$ towards zero, and raise the objective value $J(\bm \beta)$, which contradicts the optimality of $\tilde {\bm \beta}$. \end{proof} \section{Proofs} \label{sec:proofs} With a single sample of data $(W_i, Y_i, X_i)_{i=1, \ldots, n}$, the planner can evaluate the empirical version of the true objective function. We define this for a sample of $n$ individuals with treatments $W_i \sim \mbox{Bernoulli}(\delta(X_i; \bm \beta))$ as: \[ R_n(\bm \beta) = \frac{1}{n}\sum \limits_{i=1}^n r(\delta(\bm X_i; \beta), Y_i) \] with $ R_n(\bm \beta) \rightarrow_p R(\bm \beta)$ by the Law of Large Numbers, since the pairs $(X_i, Y_i)$ are i.i.d.\ given $\bm \beta$. \subsection{Proof of Lemma \ref{thm:consist}} \begin{proof} We can drop the $t$ subscripts since we are fixing $\hat {\bm \beta} = \hat {\bm \beta}^t$. Let $\bm Q \in \{-h,h\}^{n \times k}$ be the $n \times k$ matrix of experimental perturbations, where $ Q_{ik} = \epsilon^t_{ik} h$. Let $\bm R$ be the $n \times 1$ vector of objective values as a function of each individual's treatment and outcome, where $R_i = r(\delta(\bm X_i; \beta), Y_i)$. Then we have that \[ \hat {\bm \Gamma} = (\bm Q' \bm Q)^{-1} \bm Q' \bm R. \] Since $Q_{ik}$ is drawn i.i.d.\ for each $i$ and each $k$, $\mathbb E[Q_{ik}Q_{ij}]=0$ unless $j=k$, in which case $\mathbb E[Q^2_{ik}]=h^2$.
Since $Q_{ik}$ is drawn randomly for each individual, by the Law of Large Numbers, \[ \frac{ \bm Q' \bm Q }{h^2 n} \rightarrow_p \mathbb E \left[{\bm Q'\bm Q}/(h^2 n) \right] = I_k.\] The denominator $h^2 n = c^2 n^{1-2\alpha}$ grows with $n$, since $\alpha < 1/2$. $\bm Q' \bm R $ is a $k \times 1$ vector, where \[ \frac{ [\bm Q' \bm R]_j }{h^2}= \sum \limits_{i=1}^n \frac{R_i Q_{ij}}{h^2} = \frac{\sum \limits_{i:Q_{ij}= h} R_{i} - \sum \limits_{i:Q_{ij}=-h } R_i }{2h} \] Let $\bm {e_j}$ be the length $k$ basis vector with 1 at position $j$ and 0 everywhere else. Let $\bm A_j$ be the length $k$ vector with position $j$ fixed at $h$ and the other entries drawn randomly from $\{+h, -h\}$. From the LLN and the CLT, we have that \[ \frac{\sum \limits_{i:Q_{ij}= h} R_{i} - \sum \limits_{i:Q_{ij}=-h } R_i }{h} = \frac{\mathbb E[R(\bm \beta + A_j)] - \mathbb E[R(\bm \beta - A_j)] }{2h} + O_p(n^{-1/2 + \alpha} ) \] The remainder term is $o_p(1)$ since $\alpha < \frac{1}{2}$. Via a Taylor Expansion of $R(\bm \beta + \bm A_j)$ and $R(\bm \beta - \bm A_j)$, we have that \[ \frac{\mathbb E[R(\bm \beta + A_j)] - \mathbb E[R(\bm \beta - A_j)] }{h} = \frac{R(\bm \beta + h \bm e_j) - R(\bm \beta - h \bm e_j)}{2h} + o_p(1). \] We can apply the result that the centered difference approximation converges to the partial derivative as $h \rightarrow 0$, which uses a Taylor expansion of $R(\bm \beta)$. \[ \frac{ R ( \bm \beta + h \bm e_j) -R(\bm \beta - h \bm e_j) }{2h} = \frac{\partial R(\bm \beta)}{\partial \beta_j} + O(h^2) \] Since we have that $h= c n^{-\alpha}$ with $\alpha>0$, then as $n \rightarrow \infty$, $h \rightarrow 0$. As a result, we can now combine our results to show that: \[ \frac{ [\bm Q' \bm R ]_j }{h^2 n} = \frac{\partial R(\bm \beta)}{\partial \beta_j} + o_p(1) \] Now taking the expression for $\hat{ \bm \Gamma}$ and dividing both the numerator and denominator by $h^2 n$, and applying Slutsky's theorem, we have that: \[ \hat {\bm \Gamma} = \left (\frac{\bm Q' \bm Q}{h^2 n} \right)^{-1} \left(\frac{\bm Q' \bm R}{h^2 n} \right) \rightarrow_p (\bm I_K)^{-1} \nabla R (\hat {\bm \beta} ^t) = \nabla R (\hat {\bm \beta} ^t) \] \end{proof} \subsection{Proof of Theorem 2} \begin{proof} Define $\hat {\bm \Lambda}^t = - \hat {\bm \Gamma}^t$ and $\Pi(\bm \beta) = -R(\bm \beta)$. The proof follows the approach of the proof of Theorem 7 in \citet{wager2019experimenting}. The first step is to use Lemma 1 from \citet{orabona2014generalized} to show that \[ \sum \limits_{t=1}^T t(\bm \beta - \hat {\bm \beta}^t)' \hat {\bm \Lambda}^t \leq \frac{1}{2\eta} \sum \limits_{t=1}^T t||\bm \beta -\hat {\bm \beta}^t||_2^2 + \frac{\eta}{2} \sum \limits_{t=1}^T || \hat {\bm \Lambda}^t ||_2^2. \] In order to use their Lemma 1, we can define \[ \bm f_t(\bm \beta) = \frac{1}{2\eta} \sum \limits_{s=1}^t s || \bm \beta - {\hat {\bm \beta}}^s ||^2_2, \qquad {\bm \theta}_t = \sum \limits_{s=1}^t s \hat {\bm \Lambda}^s. \] We can also define the gradient of the Fenchel conjugate of $\bm f_t$ as \begin{equation} \nabla \bm f^*_t( \bm \theta_t) = \arg \min_{\bm \beta} \left\{ f_t(\bm \beta) - \bm \beta' \bm \theta_t \right\}. \label{eq:fench} \end{equation} The next step is to show that $ \bm {\hat \beta}^{t+1} = \bm{ \hat \beta^t} + 2 \eta \hat {\bm \Lambda}^t /(t+1) = \nabla \bm f^*_t( \bm \theta_t)$. Setting the FOC of the term that is minimized in Equation \ref{eq:fench} to zero and evaluating at $\bm \beta = \hat{\bm \beta}^{t+1}$, \[ \frac{1}{\eta} \sum \limits_{s=1}^t s (\bm {\hat \beta}^{t+1} - \hat {\bm \beta}^s ) = \bm \theta_t.
\] We can easily verify that the LHS of the equation is equal to the RHS of the equation when we have $\hat{\bm \beta}^{t+1} = \bm{ \hat \beta^t} + 2 \eta \hat {\bm \Lambda}^t /(t+1) $ so that $ \hat{ \bm \beta}^{t+1} - \hat{\bm \beta}^{s} = \sum \limits_{q=s}^t 2 \eta \hat {\bm \Lambda}^q/(q+1)$: \[ \frac{1}{\eta} \sum \limits_{s=1}^t s ( \hat{ \bm \beta}^{t+1} - \bm {\hat \beta}^{s} ) = 2 \sum \limits_{q=1}^t \sum \limits_{b=1}^q b \hat {\bm \Lambda}^q/(q+1) = \sum \limits_{s=1}^t s \hat {\bm \Lambda}^s = \bm \theta_t. \] The previous derivation has now shown that the gradient step in Algorithm \ref{algo:exp} of this paper is in the form of Algorithm 1 (Online Mirror Descent) of \citet{orabona2014generalized}, where we can map notation with $\bm z_t$ replaced by $t \bm {\hat \Lambda}^t$, $\bm u$ replaced by $\bm \beta$, and $w_t$ replaced by $\hat {\bm \beta}^t$. We can now directly apply their Lemma 1, where the third summand can be dropped since it is negative by Equation (4) of \citet{orabona2014generalized}, to show that \[ \sum \limits_{t=1}^T t(\bm \beta - \hat {\bm \beta}^t)' \hat {\bm \Lambda}^t \leq \frac{1}{2\eta} \sum \limits_{t=1}^T t||\bm \beta -\hat {\bm \beta}^t||_2^2 + \frac{\eta}{2} \sum \limits_{t=1}^T ||\hat { \bm \Lambda}^t||_2^2. \] Then, we can replace the gradient estimate $\hat {\bm \Lambda}^t$ with its limit value $\nabla \Pi (\hat {\bm \beta} ^t)$ and add an appropriate error term: \begin{align*} \sum \limits_{t=1}^T t(\bm \beta - \hat { \bm \beta } ^t)' \nabla \Pi(\hat { \bm \beta } ^t) & \leq \frac{1}{2\eta} \sum \limits_{t=1}^T t||{ \bm \beta } -\hat { \bm \beta } ^t||_2^2 + \frac{\eta}{2} \sum \limits_{t=1}^T ||\nabla \Pi (\hat { \bm \beta } ^t)||_2^2 \\ &+ \frac{\eta}{2} \sum \limits_{t=1}^T (|| \hat { \bm \Lambda}^t||_2^2 - || \nabla \Pi (\hat { \bm \beta } ^t)||_2^2 ) + \sum \limits_{t=1}^T t({ \bm \beta } - \hat { \bm \beta } ^t)' (\nabla \Pi (\hat { \bm \beta } ^t) - \hat { \bm \Lambda}^t).\end{align*} From Lemma \ref{thm:consist}, we know that, with probability approaching 1 as $n \rightarrow \infty$, for any $\epsilon >0$, \[ \sum \limits_{t=1}^T t({ \bm \beta } - \hat { \bm \beta } ^t)' \nabla \Pi (\hat { \bm \beta } ^t) \leq \frac{1}{2\eta} \sum \limits_{t=1}^T t||{ \bm \beta } -\hat { \bm \beta } ^t||_2^2 + \frac{\eta}{2} \sum \limits_{t=1}^T ||\nabla \Pi( \hat { \bm \beta } ^t) ||_2^2 + \epsilon. \] Then, given that the gradient is bounded by $M$, we have that \begin{equation} \label{eq:proof1} \sum \limits_{t=1}^T t({ \bm \beta } - \hat { \bm \beta } ^t)' \nabla \Pi(\hat { \bm \beta } ^t) - \frac{1}{2\eta} \sum \limits_{t=1}^T t||{ \bm \beta } -\hat { \bm \beta } ^t||_2^2 \leq \frac{\eta T M^2}{2}. \end{equation} Next, we use the $\sigma$-strong convexity of $R(\bm \beta)$, which implies that $\Pi(\bm \beta)$ is strongly concave, so for any $\bm \beta$, \begin{equation*} \Pi({ \bm \beta } ) \leq \Pi(\hat { \bm \beta } ^t) + ({ \bm \beta } - \hat { \bm \beta } ^t)' \nabla \Pi(\hat { \bm \beta } ^t) - \frac{\sigma}{2}|| { \bm \beta } - \hat { \bm \beta } ^t||^2_2.
\end{equation*} Summing over $t = 1, \ldots, T$, \begin{align*} \sum \limits_{t=1}^T t (\Pi({ \bm \beta } ) - \Pi(\hat { \bm \beta } ^t)) & \leq \sum \limits_{t=1}^T t({ \bm \beta } - \hat { \bm \beta } ^t)' \nabla \Pi(\hat { \bm \beta } ^t) - \frac{\sigma}{2}\sum \limits_{t=1}^T t || { \bm \beta } - \hat { \bm \beta } ^t||^2_2 \\& \leq \sum \limits_{t=1}^T t({ \bm \beta } - \hat { \bm \beta } ^t)' \nabla \Pi(\hat { \bm \beta } ^t) - \frac{1}{2 \eta}\sum \limits_{t=1}^T t || { \bm \beta } - \hat { \bm \beta } ^t||^2_2 \end{align*} where the last line follows from $\sigma > \eta^{-1}$. We can then substitute this inequality into Equation \ref{eq:proof1} to get \[ \frac{1}{T} \sum \limits_{t=1}^T t ( \Pi( { \bm \beta } ) - \Pi (\hat{ \bm \beta } ^t)) \leq \frac{\eta M^2}{2}.\] Substituting $\Pi(\bm \beta) = -R(\bm \beta)$, we now have the result. For any $\bm \beta$, \[ \frac{1}{T} \sum \limits_{t=1}^T t ( R( \hat { \bm \beta } ^t) - R( { \bm \beta } )) \leq \frac{\eta M^2}{2}.\] with probability approaching 1 as $n \rightarrow \infty$. \end{proof} \subsection{Proof of Corollary \ref{cor:convg}} From Theorem \ref{thm:regret}, applied with $\bm \beta = \bm \beta^*$, we have that \[ \frac{1}{T} \sum \limits_{t=1}^T t ( R(\hat { \bm \beta } ^t) - R({ \bm \beta }^* ) ) \leq \frac{\eta M^2}{2} \] with probability approaching 1 as $n \rightarrow \infty.$ Using the $\sigma$-strong convexity of $R(\bm \beta)$ and the fact that $\nabla R ({ \bm \beta } ^*) = 0$, we have that $R(\hat{\bm \beta}^t) - R(\bm \beta^*) \geq \frac{\sigma}{2} ||\hat{\bm \beta}^t - \bm \beta^*||^2_2$, so we can rewrite this as \[ \frac{\sigma}{2} \frac{1}{T} \sum \limits_{t=1}^T t|| { \bm \beta } ^* - \hat { \bm \beta } ^t||^2_2 \leq \frac{\eta M^2}{2}. \] Then, note that \[ \frac{T^2}{2} || { \bm \beta } ^* - \hat { \bm \beta } ^T ||^2_2 \leq \sum \limits_{t=1}^T t || { \bm \beta } ^* - \hat { \bm \beta } ^T||^2_2 \leq \sum \limits_{t=1}^T t || { \bm \beta } ^* - \hat { \bm \beta } ^t||^2_2. \] Combining the two previous displays gives, with probability approaching 1 as $n \rightarrow \infty$, \[ \frac{\sigma}{4} T || { \bm \beta } ^* - \hat { \bm \beta } ^T||^2_2 \leq \frac{ \eta M^2}{2},\] which implies the result. %\section{Structural Models} %The maximum possible value of the objective is when the planner gives the coupon only to those with $\theta_i = 1$ and the expected profit is $\frac{1}{2} \cdot 10 + \frac{1}{2} \cdot 3.75 = 6.875$. %Expected profit is $Pr( \theta_i = 0) \Big ( Pr(X_i = 1 | \theta_i = 0) \beta \cdot 5 + [ Pr(X_i = 1 | \theta_i = 0) (1- \beta) + Pr(X_i = 0 | \theta_i = 0)] \cdot 10 \Big ) \qquad + 3.75 Pr(\theta_i = 1) \beta $. \end{document}
\chapter{Additional Kolmogorov-Smirnov Two Sample Test Tables} \label{appendix:ks-test} This appendix lists the Kolmogorov-Smirnov tables comparing the features for intensity and texture. \begin{table}[H] \centering \primitiveinput{tables/texture_features_ks.tex} \caption{Comparison of the Kolmogorov-Smirnov test results for each texture feature derived from patches of images defined by blobs across ten scales.} \label{table:blob-texture-ks} \end{table} \primitiveinput{tables/intensity_features_ks.tex} \begin{table}[H] \centering \primitiveinput{tables/texture_features_ks_lines.tex} \caption{Comparison of the Kolmogorov-Smirnov test results for each texture feature derived from patches of images defined by lines.} \label{table:line-texture-ks} \end{table} \begin{table}[H] \centering \primitiveinput{tables/line_intensity_features_ks.tex} \caption{Comparison of the Kolmogorov-Smirnov test results for each intensity feature derived from patches of images defined by lines.} \label{table:line-intensity-ks} \end{table}
\section{Fonts} Fonts have these attributes: family, series, shape, size, and encoding. The first four attributes have some convenient high-level commands that are useful. These are the declarations and corresponding inline commands (defaults are out of the box defaults and can be overridden in the preamble): \begin{verbatim} % family \rmfamily \textrm{<text>} % roman; default \sffamily \textsf{<text>} % sans serif; (why not ss or ssf?) \ttfamily \texttt{<text>} % monospace typewriter % series \mdseries \textmd{<text>} % normal weight and expansion; % default \bfseries \textbf{<text>} % bold \end{verbatim} \begin{verbatim} %%% shape % upright; \textnormal defaults to this \upshape \textup{<text>} \textnormal{<text>} % italics \itshape \textit{<text>} \emph{<text>} {\em ...} % \emph and \em toggle back and forth when nested % \itshape and \emph use italic space correction, % \textit does not % slanted \slshape \textsl{<text>} % small caps \scshape \textsc{<text>} % size declarations \normalsize % 10 pt by default; can be set to 11 or 12 \large % 12 pt \Large % 14.4 pt \small % 9 pt \footnotesize % 8 pt \end{verbatim} The following text is an example of an unnamed environment. See the source code to understand how the commands are scoped: \ind{\Large this text is Large {\slshape and now slanted}} The preceding commands are actually built upon other lower level commands. These other commands have the form \verb2\font<attr>2, where \verb2<attr>2 is one of the font attributes.\footnote{Pages 362-4 of Kopka and Daly cover these in detail.} The encoding selects the lookup table for the font. In essence, this sets the fundamental character forms that are printed. The family sets the font properties such as serif (r for roman), sans serif (s/ss), or equal spacing (tt for typewriter). There are various names for the families, with the letters r, s, and t often giving hints as to what the family looks like. The series sets the weight and the width. The letter \q l\q\ for a weight means lighter than normal. The letter \q b\q\ means heavier than normal. The letter \q m\q\ means normal (think median). For \q l\q\ and \q b\q, these prefixes further modify the weight: s = semi (means less than \q l\q\ or \q b\q\ by themselves), e = extra, u = ultra. Width uses similar conventions, with \q m\q\ meaning normal, \q c\q\ meaning compressed, and \q x\q\ meaning expanded. Same modifiers as for weight apply. The shape sets the angle or small caps. The letter \q n\q\ is normal, \q sl\q\ is slanted, \q it\q\ is italic, and \q sc\q\ is small caps.\footnote{The book also shows a \q u\q\, which possibly means upright.} The size sets the height of the letter \q x\q\ in points, along with the vertical separation between lines in points. There are 12 preset values the pitch can take out of the box.\footnote{Other values may be added, but the book doesn't mention how.} If a font with all the specified attributes cannot be found, the tool issues a warning and says which attributes are used. As an example: \begin{verbatim} % defaults are OT1, cmr, mm, n, 10, and the default spacing % OT1 is a font class. cmr is Computer Modern Roman, % mm is probably median, n is upright, 10 is 10 pt. 
% use Cork encoding; this expands on Knuth's original table \fontencoding{T1} \fontfamily{cmss} % computer modern sans serif \fontseries{sbm} % semibold normal width \fontshape{n} % upright \fontsize{14.4}{16} % 14.4 pt height, 16 pt space between lines \selectfont % activates everything in preceding scope \end{verbatim} A more abbreviated command set is this: \begin{verbatim} \usefont{T1}{cmss}{sbm}{n} \fontsize{14.4}{16} \selectfont % remains in effect until next \selectfont command \end{verbatim}
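
As a final note, here is a small self-contained sketch (not taken from the book) that combines both interfaces. The low-level combination OT1/cmr/bx/n used below, that is Computer Modern Roman, bold extended, upright, exists in every standard distribution, so no substitution warning should appear.

\begin{verbatim}
\documentclass{article}
\begin{document}
% high-level commands
Body text, then \textsf{sans serif}, \texttt{typewriter},
\textbf{bold}, and {\scshape small caps}.

% low-level commands: encoding, family, series, shape, then size
% (14.4 pt letters on a 16 pt baseline); \selectfont activates them
{\fontencoding{OT1}\fontfamily{cmr}\fontseries{bx}\fontshape{n}%
 \fontsize{14.4}{16}\selectfont
 This text is bold extended at 14.4 pt.}
\end{document}
\end{verbatim}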
{ "alphanum_fraction": 0.7091419816, "avg_line_length": 50.8571428571, "ext": "tex", "hexsha": "1cdbd34eed25f8df665d5b5619e35241e673bc40", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "85cb0e41f080c9189bb8ce860003dbf29ebb0e21", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "liddell-d/LaTeX_primer", "max_forks_repo_path": "Inputs/fonts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "85cb0e41f080c9189bb8ce860003dbf29ebb0e21", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "liddell-d/LaTeX_primer", "max_issues_repo_path": "Inputs/fonts.tex", "max_line_length": 499, "max_stars_count": null, "max_stars_repo_head_hexsha": "85cb0e41f080c9189bb8ce860003dbf29ebb0e21", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "liddell-d/LaTeX_primer", "max_stars_repo_path": "Inputs/fonts.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1095, "size": 3916 }
%&context

\section[sct_match]{Finding subtrees in other trees}

\match{} tries to match a (typically smaller) "pattern" tree to one or more
"target" tree(s). If the pattern matches the target, the target tree is
printed. Intuitively, a pattern matches a target if one can superimpose it
onto the target without "breaking" either. More accurately, the following
happens (in both trees):
\startitemize[n]
	\item leaves with labels found in both trees are kept, the other ones are
	pruned
	\item inner labels are discarded
	\item both trees are ordered (as done by \order{}, see \in{}[sct_order])
	\item branch lengths are discarded
\stopitemize
At this point, the modified pattern tree is compared to the modified target,
and if the \nw{} strings are identical, the match is successful.

\subsubsection{Example: finding trees with a specified subtree topology}

File \filename{hominoidea.nw} contains seven trees corresponding to
successive theories about the phylogeny of apes (these were taken from
\from[URL:Hominoidea]). Let us see which of them group humans and chimpanzees
as a sister clade of gorillas (which is the current hypothesis).
\page[no]
Here are small images of each of the trees in \filename{hominoidea.nw}: \\
\startcombination[2*4]
	{\externalfigure[homino_0][scale=700]} {1 (until 1960)}
	{\externalfigure[homino_1][scale=700]} {2 (Goodman, 1964)}
	{\externalfigure[homino_2][scale=700]} {3 (gibbons as outgroup)}
	{\externalfigure[homino_3][scale=700]} {4 (Goodman, 1974: orangs as outgroup)}
	{\externalfigure[homino_4][scale=700]} {5 (resolving trichotomy)}
	{\externalfigure[homino_5][scale=700]} {6 (Goodman, 1990: gorillas as outgroup)}
	{\externalfigure[homino_6][scale=700]} {7 (split of {\em Hylobates})}
\stopcombination

Trees \#6 and \#7 match our criterion; the rest do not. To look for matching
trees in \filename{hominoidea.nw}, we pass the pattern on the command line:

\typefile{match_1_txt.cmd}
\page[no]
\typefile{match_1_txt.out}

Note that only the pattern tree's topology matters: we would get the same
results with pattern \code{((Homo,Pan),Gorilla);}, \code{((Pan,Homo),Gorilla);},
etc., but not with \code{((Gorilla,Pan),Homo);} (which would select trees \#1,
2, 3, and 5). In future versions I might add an option for strict matching.

The behaviour of \match{} can be reversed by passing option \code{-v} (like
\code{grep -v}): it will print trees that {\em do not} match the pattern.

Finally, note that \match{} only works on leaf labels (for now), and assumes
that labels are unique in both the pattern and the target tree.
{ "alphanum_fraction": 0.7531104199, "avg_line_length": 44.3448275862, "ext": "tex", "hexsha": "b18bdb4156ba260be250d2028b674703a35fcb7f", "lang": "TeX", "max_forks_count": 26, "max_forks_repo_forks_event_max_datetime": "2022-03-24T02:43:50.000Z", "max_forks_repo_forks_event_min_datetime": "2015-05-07T09:23:34.000Z", "max_forks_repo_head_hexsha": "da121155a977197cab9fbb15953ca1b40b11eb87", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "Cactusolo/newick_utils", "max_forks_repo_path": "doc/c-match.tex", "max_issues_count": 24, "max_issues_repo_head_hexsha": "da121155a977197cab9fbb15953ca1b40b11eb87", "max_issues_repo_issues_event_max_datetime": "2021-12-27T10:53:41.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-22T19:34:50.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "Cactusolo/newick_utils", "max_issues_repo_path": "doc/c-match.tex", "max_line_length": 152, "max_stars_count": 62, "max_stars_repo_head_hexsha": "da121155a977197cab9fbb15953ca1b40b11eb87", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "Cactusolo/newick_utils", "max_stars_repo_path": "doc/c-match.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-07T09:12:51.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-08T22:22:08.000Z", "num_tokens": 731, "size": 2572 }
\RequirePackage[l2tabu, orthodox]{nag}
\RequirePackage{silence}
\documentclass[french,english]{beamer}
\input{preamble/packages}
\input{preamble/math_basics}
\input{preamble/math_mine}
\input{preamble/redac}
\input{preamble/draw}
\input{preamble/acronyms}

\title[Automatic argumentation]{Towards automatic argumentation about voting rules}
\subject{Social choice}
\keywords{empirical, theorem proving, automatic proofs}
\author[Michael Kirsten, \emph{Olivier Cailloux}]{Michael Kirsten \inst{1} \and \emph{Olivier Cailloux} \inst{2}}
\institute[KIT, LAMSADE]{\inst{1} Dept. of Informatics, Karlsruhe Institute of Technology (KIT) \and \inst{2} LAMSADE, Université Paris-Dauphine}
\date{\formatdate{3}{7}{2018}}

\begin{document}
\begin{frame}[plain]
	\tikz[remember picture,overlay]{
		\path (current page.south west) node[anchor=south west, inner sep=0] {
			\includegraphics[height=1cm]{LAMSADE95.jpg}
		};
		\path (current page.south) ++ (0, 1mm) node[anchor=south, inner sep=0] {
			\includegraphics[height=9mm]{Dauphine.jpg}
		};
		\path (current page.south east) node[anchor=south east, inner sep=0] {
			\includegraphics[height=1cm]{PSL.png}
		};
		\path (current page.south) ++ (0, 4em) node[anchor=south, inner sep=0] {
			\scriptsize\url{https://github.com/oliviercailloux/voting-rule-argumentation-pres}
		};
	}
	\titlepage
\end{frame}
\addtocounter{framenumber}{-1}

\begin{frame}
	\frametitle{Introduction}
	\begin{block}{Context}
		\begin{itemize}
			\item Voting rule: a systematic way of aggregating different opinions and deciding
			\item Multiple reasonable ways of doing this
			\item Different voting rules have different interesting properties
			\item None satisfy all desirable properties
		\end{itemize}
	\end{block}
	\begin{block}{Our goal}
		We want to easily communicate about strengths and weaknesses of voting rules
	\end{block}
\end{frame}

\begin{frame}
	\frametitle{Outline}
	\tableofcontents[hideallsubsections, sectionstyle=shaded/show]
\end{frame}
\AtBeginSection{
	\begin{frame}
		\frametitle{Outline}
		\tableofcontents[currentsection, hideallsubsections]
	\end{frame}
}

\section{Context}
\subsection{Introduction}
\begin{frame}[fragile]
	\frametitle{Voting rule}
	\begin{description}[Profile (on $A$)]
		\item[Alternatives] $\allalts = \set{a, b, c, d, \ldots}$; $\card{\allalts}=m$
		\item[Possible voters] $\allvoters = \set{1, 2, \ldots}$
		\item[Voters] $\emptyset \subset \voters \subseteq \allvoters$
%		\item[Linear orders on $A \subseteq \allalts$] $\linors$.
		\item[Profile] Partial function $\prof$ from $\allvoters$ to linear orders on $\allalts$.
		\item[Voting rule] Function $f$ mapping each $\prof$ to winners $\emptyset \subset A \subseteq \allalts$.
\end{description} \vfill \begin{center} \begin{tikzpicture} \path node[profile matrix] (profile) { R_1& R_2 \\ | (profile11) | a& b \\ b& a \\ c& | (profile32) | c \\ }; \path ($(profile.south west)!.5!(profile.south east)$) ++ (0, -5mm) node {$\prof$}; \path node[draw, rectangle, fit=(profile11) (profile32), outer xsep=2mm, outer ysep=1mm] (justprofile) {}; \path (justprofile.east) ++ (2.5cm, 0) node[inner sep=0] (winners) {\mbox{} $A = \Set{a, b}$}; \path[draw, ->] (justprofile.east) to[bend left=35] node[anchor=south] {$f$} (winners.west); \path[draw, decorate, decoration={brace, mirror}] (justprofile.south west) -- (justprofile.south east); \end{tikzpicture} \end{center} \end{frame} \begin{frame}[fragile] \frametitle{Example profile} \begin{equation} \begin{array}{lrrrrrr} &\multicolumn{6}{c}{\text{nb voters}}\\ \cmidrule{2-7} &33 &16 &3 &8 &18 &22 \\ \midrule 1 &a &b &c &c &d &e \\ 2 &b &d &d &e &e &c \\ 3 &c &c &b &b &c &b \\ 4 &d &e &a &d &b &d \\ 5 &e &a &e &a &a &a \\ \end{array} \end{equation} Who wins?\pause \begin{itemize} \item Most top-1: $a$ \item $c$ is in the top 3 for everybody \item Delete worst first, lowest nb of pref: $c$, $b$, $e$, $a$ ⇒ $d$ \item Delete worst first, from bottom: $a$, $e$, $d$, $b$ ⇒ $c$ \item Borda: $b$ % \item Condorcet: $c$ \end{itemize} \end{frame} \subsection{Two voting rules} \begin{frame} \frametitle{Borda} Given a profile $\prof$: \begin{itemize} \item Score of $a \in \allalts$: number of alternatives it beats \item The highest scores win \end{itemize} \begin{equation} \prof = \begin{array}{rrrrr} a & a & a & b & b\\ b & b & b & c & c\\ c & c & c & a & a \end{array} \end{equation} \begin{itemize} \item Score $a$ is~\dots? \pause $2 + 2 + 2 = 6$ \item Score $b$ is $1 + 1 + 1 + 2 + 2 = 7$ \item Score $c$ is $1 + 1 = 2$ \end{itemize} Winner: $b$. \end{frame} \begin{frame} \frametitle{Copeland} Given a profile $\prof$: \begin{itemize} \item Score of $a \in \allalts$: number of alternatives against which it obtains a strict majority~… \item … minus: number of alternatives that obtains a strict majority against $a$ \item The highest scores win \end{itemize} \begin{equation} \prof = \begin{array}{rrrrr} a & a & a & b & b\\ b & b & b & c & c\\ c & c & c & a & a \end{array} \end{equation} \begin{itemize} \item Score $a$ is~\dots? \pause $\card{\set{b, c}} - \card{\emptyset} = 2$ \item Score $b$ is $\card{\set{c}} - \card{\set{a}} = 0$ \item Score $c$ is $\card{\emptyset} - \card{\set{a, b}} = -2$ \end{itemize} Winner: $a$. \end{frame} \subsection{Axiomatic analysis} \begin{frame} \frametitle{\subsecname} \begin{quote} Rather than dream up a multitude of arbitration schemes and determine whether or not each withstands the best of plausibility in a host of special cases, let us invert the procedure. Let us examine our subjective intuition of fairness and formulate this as a set of precise desiderata that any acceptable arbitration scheme must fulfil. Once these desiderata are formalized as axioms, then the problem is reduced to a mathematical investigation of the existence of and characterization of arbitration schemes which satisfy the axioms. \end{quote} \citet[p. 
121]{luce_games_1957}\par \end{frame} \begin{frame} \frametitle{What is an axiom?} \begin{itemize} \item An axiom (for us) is a principle \item Expressed formally \item That dictates some behavior of a voting rule \item In some conditions \item Usually seen as something to be satisfied \item Ideally, some combination of axioms defines exactly one rule \item Some axioms can be shown to be incompatible \end{itemize} \end{frame} \begin{frame} \frametitle{Unanimity} \begin{definition}[Unanimity] We may not select as winner someone who has some unanimously preferred alternative. \end{definition} \begin{equation} \prof = \begin{array}{rrr} a & a & b\\ b & b & c\\ c & c & a \end{array} \end{equation} Constraint? \pause Do not take $c$, as $b$ is unanimously preferred to it. \pause \begin{equation} \prof = \begin{array}{rrr} a & a & b\\ b & c & c\\ c & b & a \end{array} \end{equation} Constraint? \pause No constraint. \end{frame} \begin{frame} \frametitle{Condorcet’s principle} \begin{block}{Condorcet’s principle} We ought to take the Condorcet winner as sole winner if it exists. \begin{itemize} \item $a$ \emph{beats} $b$ iff more than half the voters prefer $a$ to $b$. \item $a$ is a \emph{Condorcet winner} iff $a$ beats every other alternative. \end{itemize} \end{block} \vfill \begin{equation} \prof = \begin{array}{rrrrr} a & a & a & b & b\\ b & b & b & c & c\\ c & c & c & a & a \end{array} \end{equation} Who wins? \pause $a$. \end{frame} \begin{frame} \frametitle{Borda does not satisfy Condorcet} \begin{equation} \prof = \begin{array}{rrrrr} a & a & a & b & b\\ b & b & b & c & c\\ c & c & c & a & a \end{array} \end{equation} \begin{itemize} \item Borda winner? \pause $b$. \item Condorcet winner? \pause $a$. \end{itemize} \end{frame} \begin{frame} \frametitle{Cancellation} \begin{definition}[Cancellation] When all pairs of alternatives $(a, b)$ in a profile are such that $a$ is preferred to $b$ as many times as $b$ to $a$, we ought to select all alternatives as winners. \end{definition} \begin{example} \begin{equation} f\left(% \begin{array}{rrrr} a&b&c&c\\ b&a&a&b\\ c&c&b&a\\ \end{array}\right) = \allalts \end{equation} \end{example} \end{frame} \begin{frame} \frametitle{Reinforcement} \begin{definition}[Reinforcement] When joining two sets of voters, exactly those winners that each set accepts should be selected, if possible. \end{definition} \begin{example} \begin{equation} \prof_1 = \begin{array}{cc} a&b\\ b&a\\ c&c\\ \end{array}, A_1 = \set{a, b}, \prof_2 = \begin{array}{ccc} a&b&a\\ b&a&c\\ c&c&b\\ \end{array}, A_2 = \set{a}, \end{equation} \begin{equation} \prof = \begin{array}{ccccc} a&b&a&b&a\\ b&a&b&a&c\\ c&c&c&c&b\\ \end{array}. \text{ Winners? } \pause \set{a} \end{equation} \end{example} \end{frame} \subsection{Objective} \begin{frame} \frametitle{Our objective} Automatically produce “arguments” of the kind: Voting rule $f$ does not satisfy axiom $a$ on profile $R$. 
\begin{itemize} \item To better understand their differences \item To help debate and choose a voting rule \item To empirically investigate attitudes towards given voting rules \end{itemize} \end{frame} \section{Approach} \subsection{Overview} \begin{frame} \frametitle{Overview} \begin{itemize} \item Given a voting rule $f$ and an axiom $a$ \item $a$ indicates, given $\prof$ and winners $W$, if $(\prof, W)$ fails the axiom \end{itemize} \centering \begin{tikzpicture} \path node (R) {$\prof$}; \path (R.east) ++ (2cm, 0) node[anchor=west] (W) {$W$}; \path[draw, ->] (R.east) to[bend left=35] node[anchor=south] {$f$} (W.west); \path (W.east) ++ (1cm, 0) node[anchor=west] (RW) {$(\prof, W)$}; \path (RW.east) ++ (2cm, 0) node[anchor=west] (out) {pass / fail}; \path[draw, ->] (RW.east) to[bend left=35] node[anchor=south] {$a$} (out.west); \end{tikzpicture} \begin{block}{Objective} Find $\prof$ such that $(\prof, f(\prof))$ fails $a$ \end{block} \begin{block}{Example} \begin{itemize} \item $f$ = Borda \item $a$ = Condorcet \item $f(\prof) = \set{b}$ (with $\prof$ as used before) \item $a(\prof, \set{b})$ fails \end{itemize} \end{block} \end{frame} \begin{frame} \frametitle{Overview} \begin{itemize} \item Given implementations \texttt{algo\_$f$} and \texttt{algo\_$a$} \end{itemize} \centering \begin{tikzpicture} \path node (R) {$\prof$}; \path (R.east) ++ (2cm, 0) node[anchor=west] (W) {$W$}; \path[draw, ->] (R.east) to[bend left=35] node[anchor=south] (af) {\texttt{algo\_$f$}} (W.west); \path (W.east) ++ (1cm, 0) node[anchor=west] (RW) {$(\prof, W)$}; \path (RW.east) ++ (2cm, 0) node[anchor=west] (out) {pass / fail}; \path[draw, ->] (RW.east) to[bend left=35] node[anchor=south] (aa) {\texttt{algo\_$a$}} (out.west); \onslide<2>{ \path node[draw, color=red, rectangle, fit={(af) (W) (RW) (aa)}] (algo) {}; \path (algo.south) node[anchor=north] {\color{red}{\texttt{algo}}}; } \end{tikzpicture} \begin{itemize} \item We view it as a whole program \texttt{algo} \item We use SBMC, a software for checking program properties \item We let SBMC search for an input $\prof$ that fails \texttt{algo} \item Similar to searching for existence of a bug \end{itemize} \end{frame} \subsection{Software Bounded Model Checking} \begin{frame} \frametitle{Checking properties} % \begin{lstlisting} \texttt{assume (x > 0);}\\ \texttt{i = 0;}\\ \texttt{x0 = x;}\\ \texttt{while (x < y) \{}\\ \phantom{\texttt{whil}}\texttt{x += y;}\\ \phantom{\texttt{whil}}\texttt{i += 1;}\\ \}\\ \texttt{assert (x0 + y*i >= x);} % % \end{lstlisting} \begin{itemize} \item Given an algorithm with parameters (e.g., $x$, $y$) \item Check that some property holds \item For all possible parameters \item … that satisfy given assumptions \item[⇒] Search for $(x, y)$ that satisfy assumptions and fail assertion \end{itemize} \end{frame} \begin{frame}{Software Bounded Model Checking (SBMC)} \begin{block}{Specification} \begin{itemize} \item Properties specified using \texttt{assume} and \texttt{assert} statements \item A program \texttt{Prog} is \textbf{correct} iff: \[ \texttt{Prog} \wedge \bigwedge \texttt{assume} \Rightarrow \bigwedge \texttt{assert} \] \item \texttt{Prog} is automatically generated logical encoding of the program \end{itemize} \end{block} \begin{itemize} \item SBMC tool converts program into SAT \item Exhaustive check by unwinding the control flow graph \item Bounded in number of loop unwindings and recursions \item Special “unwinding assertion” claims added to check whether longer program paths may be possible \end{itemize} \end{frame} 
\begin{frame} \frametitle{Taking care of loops in SBMC} \texttt{while(x < y) x = x + y;} \begin{tikzpicture} \node (start) at (0,5.75) {}; \node[ellipse, draw] (threea) at (-1.5,4) {\dots}; \node[ellipse, draw] (threeb) at (1.5,4) {\texttt{x1 = x0 + y0;}}; \node[ellipse, draw] (fivea) at (0, 2.5) {\texttt{x2 = x1 + y0;}}; \node[ellipse, draw] (fiveb) at (3, 2.5) {\dots}; \node[ellipse, draw] (seven) at (0, 1.5) {\dots}; \path[-latex,draw] (start) -- (threea) node [midway, red, left] {\texttt{\textbf{!(x0 < y0)}}}; \path[-latex,draw] (start) -- (threeb) node [midway,red, right] {\texttt{\textbf{x0 < y0}}}; \path[-latex,draw] (threeb) -- (fivea) node [midway, red, left] {\texttt{\textbf{x1 < y0}}}; \path[-latex,draw] (threeb) -- (fiveb) node [midway, red, right] {\texttt{\textbf{!(x1 < y0)}}}; \path[-latex,draw] (fivea) -- (seven); \end{tikzpicture} \end{frame} \begin{frame}{Specifying and Verifying Properties in SBMC\hspace*{-1.8em}} \begin{block}{Verification} \begin{itemize} \item Checking properties for programs generally undecidable \item SBMC analyses only program runs up to \textbf{bounded} length \item Property checking becomes decidable by logical encoding \item Can be decided using SAT- or SMT-solver \end{itemize} \end{block} \end{frame} \section{Empirical results} \subsection{Borda} \begin{frame} \frametitle{Borda fails Condorcet} A minimal counter-example (found in less than one second): \begin{equation} \prof = \begin{array}{ccc} c & c & b\\ b & b & a\\ a & a & c \end{array} \end{equation} Borda rule elects $\{a,c\}$ instead of the Condorcet winner $c$. The example can be easily inspected manually. \end{frame} \begin{frame} \frametitle{Borda fails Weak Majority} A minimal counter-example in nb alternatives (< 1 sec): \begin{equation} \prof = \begin{array}{ccccc} a & a & a & b & b\\ b & b & b & c & c\\ c & c & c & a & a \end{array} \end{equation} Borda elects $b$ instead of the majority winner $a$. A minimal counter-example in nb voters (< 1 sec): \begin{equation} \prof = \begin{array}{ccc} d & d & c\\ c & c & a\\ a & b & b\\ b & a & d \end{array} \end{equation} Borda elects $c$ instead of the majority winner $d$. 
\end{frame} \subsection{Copeland} \begin{frame}[fragile] \frametitle{Copeland fails Reinforcement} \begin{center} \begin{tikzpicture}[baseline] \begin{axis}[ title={Run-times for {\color{red}{2}}, {\color{green}{3}}, {\color{blue}{4}} and {\color{brown}{5}} alternatives}, height=0.75*\textheight, xlabel={Voters}, ylabel={Run-time [minutes]}, xmin=2,xmax=10, ymin=0,ymax=1800, xtick={0,1,2,3,4,5,6,7,8,9,10}, xlabel near ticks, ylabel near ticks, scaled y ticks={real:60}, ytick scale label code/.code={}, ytick distance = 180 ] \addplot[red] table[x index=1,y index=2] {plots/copeland/reinforcement/cand_2.dat} node[pos=0.7,yshift=0.2cm,sloped] {2}; \addplot[green] table[x index=1,y index=2] {plots/copeland/reinforcement/cand_3.dat} node[pos=0.5,yshift=0.2cm,sloped] {3}; \addplot[blue] table[x index=1,y index=2] {plots/copeland/reinforcement/cand_4.dat} node[pos=0.6,yshift=0.2cm,sloped] {4}; \addplot[brown] table[x index=1,y index=2] {plots/copeland/reinforcement/cand_5.dat} node[pos=0.3,yshift=0.2cm,sloped] {5}; \addplot[green, mark=*, only marks] table[x index=1,y index=2] {plots/copeland/reinforcement/cand_3_cexp.dat}; \addplot[blue, mark=*, only marks] table[x index=1,y index=2] {plots/copeland/reinforcement/cand_4_cexp.dat}; \addplot[brown, mark=*, only marks] table[x index=1,y index=2] {plots/copeland/reinforcement/cand_5_cexp.dat}; \end{axis} \end{tikzpicture} \end{center} \end{frame} \begin{frame} \frametitle{Copeland fails Reinforcement} A minimal counter-example (found in 32 seconds): \begin{equation} \prof_1 = \begin{array}{cc} b & a\\ a & c\\ c & b \end{array}, \hspace{1em} \prof_2 = \begin{array}{cc} a & b\\ b & a\\ c & c \end{array} \end{equation} \begin{itemize} \item Elected for $\prof_1$ and $\prof_2$: $a$ and $\{a,b\}$ respectively. \item For the joined profile $\prof_1 \cup \prof_2$, Copeland elects $\{a,b\}$ instead of $a$. \end{itemize} \end{frame} \begin{frame}[plain] \addtocounter{framenumber}{-1} \begin{center} \huge \textit{Thank you for your attention!} \end{center} \end{frame} \appendix \AtBeginSection{ } \clearpage\pdfbookmark[2]{\refname}{\refname} \begin{frame}%[allowframebreaks] \frametitle{\refname} \bibliography{zotero} \end{frame} \clearpage\pdfbookmark{License}{License} \begin{frame}[plain] \frametitle{License} This presentation, and the associated \LaTeX{} code, are published under the \href{http://opensource.org/licenses/MIT}{MIT license}. Feel free to reuse (parts of) the presentation, under condition that you cite the authors.\\ Credits are to be given to \href{http://www.lamsade.dauphine.fr/~ocailloux/}{Olivier Cailloux} (Université Paris-Dauphine) and \href{https://formal.iti.kit.edu/~kirsten/?lang=en}{Michael Kirsten} (Karlsruhe Institute of Technology). \end{frame} \addtocounter{framenumber}{-1} \end{document} \begin{frame} \frametitle{\subsecname} \begin{itemize} \item \end{itemize} \end{frame}
{ "alphanum_fraction": 0.6622440622, "avg_line_length": 30.7298657718, "ext": "tex", "hexsha": "8205aa4c90d487ff0143477c93f11dd814811f74", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "235731e3ed4e4a2b665c714a77bb7518fcd6aac0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "oliviercailloux/voting-rule-argumentation-pres", "max_forks_repo_path": "autoarg.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "235731e3ed4e4a2b665c714a77bb7518fcd6aac0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "oliviercailloux/voting-rule-argumentation-pres", "max_issues_repo_path": "autoarg.tex", "max_line_length": 536, "max_stars_count": 1, "max_stars_repo_head_hexsha": "235731e3ed4e4a2b665c714a77bb7518fcd6aac0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "oliviercailloux/voting-rule-argumentation-pres", "max_stars_repo_path": "autoarg.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-09T23:19:11.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-09T23:19:11.000Z", "num_tokens": 6839, "size": 18315 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Written By Michael Brodskiy
% Class: Analytic Geometry & Calculus III (Math-292)
% Professor: V. Cherkassky
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[12pt]{article}
\usepackage{alphalph}
\usepackage[utf8]{inputenc}
\usepackage[russian,english]{babel}
\usepackage{titling}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{enumitem}
\usepackage{amssymb}
\usepackage[super]{nth}
\usepackage{everysel}
\usepackage{ragged2e}
\usepackage{geometry}
\usepackage{fancyhdr}
\usepackage{cancel}
\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}
\newcommand{\subtitle}[1]{%
    \posttitle{%
        \par\end{center}
        \begin{center}\large#1\end{center}
        \vskip0.5em}%
}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=blue,
    filecolor=magenta,
    urlcolor=blue,
    citecolor=blue,
}
\urlstyle{same}

\title{Lecture XXII Notes}
\date{\today} % Activate to display a given date or no date
\author{Michael Brodskiy\\ \small Professor: V. Cherkassky}

% Mathematical Operations:
% Sum: $$\sum_{n=a}^{b} f(x) $$
% Integral: $$\int_{lower}^{upper} f(x) dx$$
% Limit: $$\lim_{x\to\infty} f(x)$$

\begin{document}

\maketitle

\section{Green's Theorem $-$ 16.4}

For this theorem, let $C$ be a curve that is:
\begin{enumerate}
    \item positively oriented (moving counterclockwise)
    \item piecewise smooth
    \item simple
    \item closed
\end{enumerate}
$D$ is the region enclosed by the curve. If $P$ and $Q$ have continuous first-order partial derivatives on $D$, then:
$$\int_C P\,dx+Q\,dy=\iint_D \left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)\,dA$$
For Green's Theorem to apply, $D$ must be a bounded region whose boundary is the closed curve $C$. Notation (if $C$ is closed):
$$\int_C P\,dx+Q\,dy=\oint_C P\,dx+Q\,dy$$
If $\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}=1$, then Green's theorem yields the area of the region $D$:
$$\iint_D 1\,dA=A(D)$$
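
As a quick illustration of the area formula, consider the standard example of the disk $D$ of radius $R$, whose boundary $C$ is the circle $x=R\cos(t)$, $y=R\sin(t)$, $0\leq t\leq 2\pi$, traversed counterclockwise. Choosing $P=-\frac{y}{2}$ and $Q=\frac{x}{2}$ gives $\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}=1$, so:
$$A(D)=\oint_C -\frac{y}{2}\,dx+\frac{x}{2}\,dy=\frac{1}{2}\int_0^{2\pi}\left[R^2\sin^2(t)+R^2\cos^2(t)\right]\,dt=\frac{1}{2}\int_0^{2\pi}R^2\,dt=\pi R^2$$

\end{document}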
{ "alphanum_fraction": 0.5804833561, "avg_line_length": 24.3666666667, "ext": "tex", "hexsha": "a8623f70ce46c67fdf8149dac062fde62fc7dbbb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "MDBrodskiy/Vector_Calculus", "max_forks_repo_path": "Lecture Notes/Lecture22.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "MDBrodskiy/Vector_Calculus", "max_issues_repo_path": "Lecture Notes/Lecture22.tex", "max_line_length": 188, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "MDBrodskiy/Vector_Calculus", "max_stars_repo_path": "Lecture Notes/Lecture22.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-15T15:51:52.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-15T15:51:52.000Z", "num_tokens": 643, "size": 2193 }
\section{The Type Theory} \label{sec:type-theory} One challenge in writing this paper is to extricate our account of datatypes from what else is new in Epigram 2. In fact, we demand relatively little from the setup, so we shall start with a `vanilla' theory and add just what we need. The reader accustomed to dependent types will recognise the basis of her favourite system; for those less familiar, we try to keep the presentation self-contained. \subsection{Base theory} \begin{wstructure} <- Presentation of the formalism <- Standard presentation -> No novelty here <- 3 judgments [equation] -> Context validity -> Typing judgements -> Equality judgements \end{wstructure} We adopt a traditional presentation for our type theory, with three mutually defined systems of judgments: \emph{context validity}, \emph{typing}, and \emph{equality}, with the following forms: % \[ \begin{array}{ll} \G \vdash \Valid & \mbox{\(\G\) is a valid context, giving types to variables} \\ \G \vdash \Bhab{\M{t}}{\M{T}} & \mbox{term \(\M{t}\) has type \(\M{T}\) in context \(\G\)} \\ \G \vdash \Bhab{\M{s} \equiv \M{t}}{\M{T}} & \mbox{\(\M{s}\) and \(\M{t}\) are equal at type \(\M{T}\) in context \(\G\)} \\ \end{array} \] \begin{wstructure} <- Invariants [equation] -> By induction on derivations \end{wstructure} The rules are formulated to ensure that the following `sanity checks' hold by induction on derivations % \[ \begin{array}{l@{\;\Rightarrow\;\;}l} \G \vdash \Bhab{\M{t}}{\M{T}} & \G \vdash \Valid\; \wedge\; \G\vdash\Type{\M{T}} \\ \G \vdash s \equiv \Bhab{\M{t}}{\M{T}} & \G \vdash \Bhab{s}{T} \;\wedge\; \G\vdash \Bhab{\M{t}}{\M{T}} \end{array} \] % and that judgments \(\M{J}\) are preserved by well-typed instantiation. % \[ \G ; \xS ; \Delta \vdash \M{J} \;\Rightarrow\; \G \vdash \Bhab{\M{s}}{\M{S}} \;\Rightarrow\; \G ; \Delta[\M{s}/\x] \vdash \M{J}[\M{s}/\x] \] %We are not going to prove the validity of these invariants. They %follow rather straightforwardly from the induction rules. For formal %proofs, we refer the reader to standard presentations of type theory, %such as Luo's seminal work \cite{luo:utt}. \begin{wstructure} <- Judgemental equality <- Presentation independent of particular implementation choice -> Model in Agda, intensional -> Used in Epigram, OTT \end{wstructure} We specify equality as a judgment, leaving open the details of its implementation, requiring only a congruence including ordinary computation (\(\beta\)-rules), decided, e.g., by testing \(\alpha\)-equivalence of \(\beta\)-normal forms~\cite{DBLP:journals/jfp/Adams06}. Coquand and Abel feature prominently in a literature of richer equalities, involving \(\eta\)-expansion, proof-irrelevance and other attractions~\cite{DBLP:journals/scp/Coquand96,DBLP:conf/tlca/AbelCP09}. Agda and Epigram 2 support such features, Coq currently does not, but they are surplus to requirements here. % Therefore, we are not tied to a particular implementation %choice. In particular, our system has been modelled in Agda, which %features an intensional equality. On the other hand, it is used in %Epigram, whose equality has a slightly extensional %flavor~\cite{altenkirch:ott}. We expect users of fully extensional %systems to also find their way through this presentation. \begin{wstructure} <- Context validity [figure no longer] <- Not much to be said \end{wstructure} Context validity ensures that variables inhabit well-formed sets. 
% \[ %% Empty context validity \Axiom{\vdash \Valid} \qquad %% Extend context \Rule{\G \vdash \Type{\M{S}}} {\G ; \xS \vdash \Valid}\;\x\not\in\G \] % \begin{wstructure} <- Typing judgements [figure] <- Set in Set -> For simplicity of presentation -> Assume that a valid stratification can be inferred <- Harper-Pollack, Luo, Courant -> See later discussion <- Standard presentation of Pi and Sigma types \end{wstructure} % The basic typing rules % (Fig.~\ref{fig:typing-judgements}) for tuples and functions are also standard, save that we locally adopt \(\Set:\Set\) for presentational purposes. Usual techniques to resolve this \emph{typical ambiguity} apply~\cite{harper:implicit-universe, luo:utt, courant:explicit-universe}. A formal treatment of stratification for our system is a matter of ongoing work. %% putting presentation before paradox~\cite{girard:set-in-set}. %% The usual remedies apply, \emph{stratifying} %% \(\Set\)~\cite{harper:implicit-universe, luo:utt, %% courant:explicit-universe}. %% \input{figure_typing_judgements} \paragraph{Notation.} We subscript information needed for type synthesis but not type checking, e.g., the domain of a \(\LAMBINDER\)-abstraction, and suppress it informally where clear. Square brackets denote tuples, with a LISP-like right-nesting convention: \(\sqr{a\;b}\) abbreviates \(\pair{a}{\pair{b}{\void}{}}{}\). %We recognise the standard presentation of $\Pi$ and $\Sigma$ %types, respectively inhabited by lambda terms and dependent %pairs. Naturally, there are rules for function application and %projections of $\Sigma$-types. Equal types can be substituted, thanks %to the conversion rule. %For the sake of presentation, we postulate a $\Set$ in $\Set$ %rule. Having this rule makes our type theory inconsistent, by Girard's %paradox~\cite{girard:set-in-set}. However, it has been %shown~\cite{harper:implicit-universe, luo:utt, % courant:explicit-universe} that a valid stratification can be %inferred, automatically or semi-automatically. In the remaining of our %presentation, we will assume that such a stratification exists, even %though we will keep it implicit. We shall discuss this assumption in %Section~\ref{sec:discussion}. %\begin{figure} % %\input{figure_typing_judgements} % %\caption{Typing judgements} %\label{fig:typing-judgements} % %\end{figure} \begin{wstructure} <- Judgemental equality [figure] <- symmetry, reflexivity, and transitivity <- beta-rules for lambda and pair <- xi-rule for functions -> Agnostic in the notion of equality <- Doesn't rely on a ``propositional'' equality -> Key: wide applicability of our proposal \end{wstructure} The judgmental equality comprises the computational rules below, closed under reflexivity, symmetry, transitivity and structural congruence, even under binders. We omit the mundane rules which ensure these closure properties for reasons of space. \input{figure_judgemental_equality} Given a suitable stratification of \(\Set\), the computation rules yield a terminating evaluation procedure, ensuring the decidability of equality and thence type checking. %Finally, we define the rules governing judgemental equality in %Figure~\ref{fig:judgemental-equality}. We implicitly assume that %judgemental equality respects symmetry, reflexivity, and %transitivity. We capture the computational behavior of the language %through the $\beta$-rules for function application and pairs. Finally, %we implicitly assume that it respects purely syntactic and structural %equality. This includes equality under lambda ($\xi$-rule). 
%Crucially, being judgemental, this presentation is agnostic in the %notion of equality actually implemented. Indeed, our typing and %equality judgements do not rely on a ``propositional'' equality. This %freedom is a key point in favour of the wide applicability of our %proposal. This judgemental presentation must be read as a %\emph{specification}: our proposal works with any propositional %equality satisfying this specification. Moreover, our lightweight %requirements do not endanger decidability of equality-checking. %Obviously, when implementing our technology in an existing %type-theory, some opportunities arise. We will present some of them %along the course of the paper. %\begin{figure} %\input{figure_judgemental_equality} % %\caption{Judgemental equality} %\label{fig:judgemental-equality} % %\end{figure} \begin{wstructure} !!! Need Help !!! <- Meta-theoretical properties <- Assuming a stratified discipline <> The point here is to reassert that dependent types are not evil, there is no non-terminating type-checker, or such horrible lies <> -> Strongly normalising -> Every program terminates -> Type-checking terminates ??? \end{wstructure} %This completes our presentation of the type theory. Assuming a %stratified discipline of universe, the system we have described enjoy %some very strong meta-theoretical properties. Unlike simply typed %languages, such as Haskell, dependently-typed systems are %\emph{strongly normalising}: every program that type-checks %terminates. Moreover, type-checking is decidable and can therefore be %implemented by a terminating algorithm. %\note{Need some care here. Expansion would be good too. I wanted to % carry the intuition that we are not the bad guys with a % non-terminating type-checker.} \subsection{Finite enumerations of tags} \label{sec:finite-sets} \begin{wstructure} <- Motivation <- Finite sets could be encoded with Unit and Bool /> Hinder the ability to name things <- W-types considered harmful? ??? -> For convenience <- Named elements <- Referring by name instead of code -> Types as coding presentation /> Also as coding representation! \end{wstructure} It is time for our first example of a \emph{universe}. You might want to offer a choice of named constructors in your datatypes: we shall equip you with sets of tags to choose from. Our plan is to implement (by extending the theory, or by encoding) the signature % \[ \Type{\EnumU}\qquad \Type{\EnumT{(\Bhab{\M{E}}{\EnumU})}} \] % where some value \(E:\EnumU\) in the `enumeration universe' describes a type of tag choices \(\EnumT{E}\). We shall need some tags---valid identifiers, marked to indicate that they are data, not variables scoped and substitutable---so we hardwire these rules: % \[ %% UId \Rule{\Gamma \vdash \Valid} {\Gamma \vdash \Type{\UId}} \qquad %% Tag \Rule{\Gamma \vdash \Valid} {\Gamma \vdash \Bhab{\Tag{\V{s}}}{\UId}}\;\V{s}\: \mbox{a valid identifier} \] % Let us describe enumerations as lists of tags, with signature: % \[ \Bhab{\NilE}{\EnumU}\qquad \Bhab{\ConsE{(\Bhab{\M{t}}{\UId})}{(\Bhab{\M{E}}{\EnumU})}}{\EnumU} \] % What are the \emph{values} in \(\EnumT{E}\)? 
Formally, we represent the choice of a tag as a numerical index into \(E\), via new rules: % \[ %% Ze \Rule{\Gamma \vdash \Valid} {\Gamma \vdash \Bhab{\Ze}{\EnumT{(\ConsE{\M{t}}{\M{E}})}}} \qquad %% Su \Rule{\Gamma \vdash \Bhab{\M{n}}{\EnumT{\M{E}}}} {\Gamma \vdash \Bhab{\Su{\M{n}}}{\EnumT{(\ConsE{\M{t}}{\M{E}})}}} \] % However, we expect that in practice, you might rather refer to these values \emph{by tag}, and we shall ensure that this is possible in due course. %As a motivating example, we are now going to extend the type theory %with a notion of finite set. One could argue that there is no need for %such an extension: finite sets, just as any data-structure, can be %encoded inside the type theory. A well-known example of such encoding %is the Church encoding of natural numbers, which is isomorphic to %finite sets. \note{Shall we talk about W-types encoding?} %However, using encodings is impractical. In the case of finite sets, %for instance, we would like to name the elements of the sets. Then, we %need to be able to manipulate these elements by their name, instead of %their encoding. While we are able to give names to encodings, it is %extremely tedious to map the encodings back to a name. Whereas these %objects have a structure, the structure is lost during the encoding, %when they become anonymous inhabitants of a $\Pi$ or $\Sigma$-type. %In the simply-typed world, we are used to see types as a coding %presentation -- presentation of invariants, presentation of %properties. In the dependently-typed world, we also learn to use types %as a coding representation: finite sets being good citizens, they %ought to be democratically represented at the type level. As we will %see, this gives us the ability to name and manipulate them (this is %were the democracy analogy goes crazy, I think). %\note{Did I got the coding presentation vs. coding representation % story right? No.} \begin{wstructure} <- Implementation [figure] <- Tags -> Purely informational token <- EnumU -> Universe of finite sets <- EnumT e -> Elements of finite set e \end{wstructure} %The specification of finite sets is presented in %Figure~\ref{fig:typing-finite-set}. It is composed of three %components. First, we define tags as inhabitants of the $\UId$ type. A %tag is solely an informative token, used for diagnostic %purposes. Finite sets inhabits the $\EnumU$ type. Unfolding the %definition, we get that a finite set is a list of tags. Finally, %elements of a finite set $\V{u}$ belong to the corresponding $\EnumT{\V{u}}$ %type. Intuitively, it corresponds to an index -- a number -- pointing %to an element of $\V{u}$. %\begin{figure} %\input{figure_finite_sets} %\caption{Typing rules for finite sets} %\label{fig:typing-finite-set} %\end{figure} \begin{wstructure} <- Equipment <- \spi operator <- Equivalent of Pi on finite sets <- First argument: (finite) domain <- Second argument: for each element of the domain, a co-domain -> Inhabitant of \spi: right-nested tuple of solutions <- Skip code for space reasons <- switch operator <- case analyses over x <- index into the \spi tuple to retrieve the corresponding result \end{wstructure} Enumerations come with further machinery. Each \(\EnumT{E}\) needs an eliminator, allowing us to branch according to a tag choice. Formally, whenever we need such new computational facilities, we add primitive operators to the type theory and extend the judgmental equality with their computational behavior. 
However, for compactness and readability, we shall write these operators as functional programs (much as we model them in Agda). We first define the `small product' $\SYMBspi$ operator: % \[\stk{ %% spi \spi{}{}: \PITEL{\V{E}}{\EnumU}\PITEL{\V{P}}{\EnumT{\V{E}} \To \Set} \To \Set \\ \begin{array}{@{}l@{\:}l@{\;\;\mapsto\;\;}l} \spi{\NilE}{& \V{P}} & \Unit \\ \spi{(\ConsE{\V{t}}{\V{E}})}{& \V{P}} & \TIMES{\V{P}\: \Ze}{\spi{\V{E}}{\LAM{\V{x}} \V{P}\: (\Su{\V{x}})}} \end{array} }\] % This builds a right-nested tuple type, packing a $\V{P}\:\V{i}$ value for each $\V{i}$ in the given domain. The step case exposes our notational convention that binders scope rightwards as far as possible. These tuples are `jump tables', tabulating dependently typed functions. We give this functional interpretation---the eliminator we need---by the $\SYMBswitch$ operator, which, unsurprisingly, iterates projection: % \[\stk{ %% switch \begin{array}{@{}ll} \SYMBswitch : \PITEL{\V{E}}{\EnumU} \PITEL{\V{P}}{\EnumT{\V{E}} \To \Set} \To \spi{\V{E}}{\V{P}} \To \PITEL{\V{x}}{\EnumT{\V{E}}} \To \V{P}\: \x \end{array} \\ \begin{array}{@{}l@{\:\mapsto\:\:}l} \switch{(\ConsE{\V{t}}{\V{E}})}{\V{P}}{\V{b}}{\Ze} & \fst{\V{b}} \\ \switch{(\ConsE{\V{t}}{\V{E}})}{\V{P}}{\V{b}}{(\Su{\V{x}})} & \switch{\V{E}}{(\LAM{\V{x}} \V{P} (\Su{\V{x}}))}{(\snd{\V{b}})}{\V{x}} \end{array} }\] %Again, there is a clear equivalent in the full-$\Set$ world: function %application. The operational behaviour of $\F{switch}$ is %straightforward: $\V{x}$ is peeled off as we move deeper inside the nested %tuple $\V{b}$. When $\V{x}$ equals $\Ze$, we simply return the value we are %pointing to. \begin{wstructure} <- Equivalent to having a function space over finite sets /> Made non-obvious by low-level encodings <- General issue with codes -> Need to provide an attractive presentation to the user -> Types seem to obfuscate our reading <- Provide ``too much'' information /> False impression: information is actually waiting to be used more widely -> See next Section \end{wstructure} The $\SYMBspi$ and $\SYMBswitch$ operators deliver dependent elimination for finite enumerations, but are rather awkward to use directly. We do not write the range for a \(\LAMBINDER\)-abstraction, so it is galling to supply \(\V{P}\) for functions defined by $\SYMBswitch$. Let us therefore find a way to recover the tedious details of the encoding from types. %, they also %come with a notion of finite function space. However, we had to %extract that intuition from the type, by a careful reading. This seems %to contradict our argument in favour of types for coding %representation. Here, we are overflown by low-level, very precise type %information. %However, our situation is significantly different from the one we %faced with encoded data: while we were suffering from a crucial lack %of information, we are now facing too much information, hence losing %focus. This is a general issue with the usage of codes, as they convey %much more information than what the developer is willing to see. %As we will see in the following section, there exists a cure to this %problem. In a nutshell, instead of being overflown by typing %information, we will put it at work, automatically. The consequence is %that, in such system, working with codes is \emph{practical}: one %should not be worried by information overload, but how to use it as %much as possible. Therefore, we should not be afraid of using codes for %practical purposes. 
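
To make this concrete, take \(\M{E} = \sqr{\etag{\CN{red}}\,\etag{\CN{blue}}}\), and write \(p_{\CN{red}}\) and \(p_{\CN{blue}}\) for the values a dependent function over \(\EnumT{\M{E}}\) should return on the two tags. The small product \(\spi{\M{E}}{\M{P}}\) computes to \(\TIMES{\M{P}\:\Ze}{(\TIMES{\M{P}\:(\Su{\Ze})}{\Unit})}\), so the function is tabulated as the tuple \(\sqr{p_{\CN{red}}\;p_{\CN{blue}}}\), and \(\switch{\M{E}}{\M{P}}{\sqr{p_{\CN{red}}\;p_{\CN{blue}}}}{(\Su{\Ze})}\) computes to \(p_{\CN{blue}}\). Note, however, that the call supplies \(\M{E}\), the range \(\M{P}\) and a numerical index by hand, all information that the types can be made to provide.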
\subsection{Type propagation} \label{sec:type-propagation} \begin{wstructure} <- Bidirectional type-checking [ref. Turner,Pierce] -> Separating type-checking from type synthesis <- Type checking: push terms into types <- Example: |Pi S T :>: \ x . t| allows us to drop annotation on lambda <- Type synthesis: pull types out of terms <- Example: |x : S l- x :<: S| gives us the type of x \end{wstructure} Our approach to tidying the coding cruft is deeply rooted in the bidirectional presentation of type checking from Pierce and Turner~\cite{pierce:bidirectional-tc}. They divide type inference into two communicating components. In \emph{type synthesis}, types are \emph{pulled} out of terms. A typical example is a variable in the context: % \[ \Rule{\G ; \xS ; \Delta \vdash \Valid} {\G ; \xS ; \Delta \vdash \Bhab{\V{x}}{\M{S}}} \] % Because the context stores the type of the variable, we can extract the type whenever the variable is used. On the other hand, in the \emph{type checking} phase, types are \emph{pushed} into terms. We are handed a type together with a term, our task consists of checking that the type admits the term. In doing so, we can and should use the information provided by the type. Therefore, we can relax our requirements on the term. Consider \(\LAMBINDER\)-abstraction: % \[ \Rule{\G \vdash \Type{\M{S}} \quad \G ; \xS \vdash \Bhab{\M{t}}{\M{T}}} {\G \vdash \Bhab{\PLAM{\x}{\M{S}} \M{t}}{\PIS{\xS} \M{T}}} \] % The official rules require an annotation specifying the domain. However, in type \emph{checking}, the \(\Pi\)-type we push in determines the domain, so we can drop the annotation. \begin{wstructure} <- Formalisation: type propagation <- Motivation -> High-level syntax -> exprIn: types are pushed in <- Subject to type *checking* -> exprEx: types are pulled from <- Subject to type *synthesis* -> Translated into our low-level type theory -> Presented as judgements -> Presentation mirrors typing rule of [figure] -> Ignore identical judgements \end{wstructure} We adapt this idea, yielding a \emph{type propagation} system, whose purpose is to elaborate compact \emph{expressions} into the terms of our underlying type theory, much as in the definition of Epigram 1~\cite{mcbride.mckinna:view-from-the-left}. We divide expressions into two syntactic categories: $\exprIn$ into which types are pushed, and $\exprEx$ from which types are extracted. In the bidirectional spirit, the $\exprIn$ are subject to type \emph{checking}, while the $\exprEx$---variables and elimination forms---admit type \emph{synthesis}. We embed $\exprEx$ into $\exprIn$, demanding that the synthesised type coincides with the type proposed. The other direction---only necessary to apply abstractions or project from pairs---takes a type annotation. % As the presentation largely %mirrors the inference rules of the type theory, we will ignore the %judgments that are identical. We refer our reader to the associated %technical report~\cite{chapman:desc-tech-report} for the complete %system of rules. \begin{wstructure} <- Type synthesis [figure] <- Pull a type out of an exprEx <- Result in a full term, together with its type -> Do *not* need to specify types -> Extracting a term from the context -> Function application -> Projections \end{wstructure} Type synthesis (Fig.~\ref{fig:type-synthesis}) is the \emph{source} of types. It follows the \(\exprEx\) syntax, delivering both the elaborated term and its type. 
Terms and expressions never mix: e.g., for application, we instantiate the range with the \emph{term} delivered by checking the argument \emph{expression}. Hardwired operators are checked as variables. \begin{figure} \input{figure_type_synthesis} \caption{Type synthesis} \label{fig:type-synthesis} \end{figure} \begin{wstructure} <- Type checking [figure] <- Push a type in an exprIn <- Result in a full term -> *Use* the type to build the term! -> Domain and co-domain propagation for Pi and Sigma -> Translation of 'tags into EnumTs -> Translation of ['tags ...] into EnumUs -> Finite function space into switch \end{wstructure} Dually, type checking judgments (Fig.~\ref{fig:type-checking}) are \emph{sinks} for types. From an $\exprIn$ and a type pushed into it, they elaborate a low-level term, extracting information from the type. Note that we inductively ensure the following `sanity checks': % \[\stkc{ \Gamma\Vdash\propag{e}{\pull{t}{T}} \Rightarrow \Gamma\vdash t:T \\ \Gamma\Vdash\push{\propag{e}{t}}{T} \Rightarrow \Gamma\vdash t:T }\] Canonical set-formers are \emph{checked}: we could exploit \(\Set:\Set\) to give them synthesis rules, but this would prejudice our future stratification plans. Note that abstraction and pairing are free of annotation, as promised. Most of the propagation rules are unremarkably structural: we have omitted some mundane rules which just follow the pattern, e.g., for \(\UId\). \begin{figure} \input{figure_type_checking} \caption{Type checking} \label{fig:type-checking} \end{figure} However, we also add abbreviations. We write \(\spl{\M{f}}\), pronounced `uncurry \(\M{f}\)' for the function which takes a pair and feeds it to \(\M{f}\) one component at a time, letting us name them individually. Now, for the finite enumerations, we go to work. Firstly, we present the codes for enumerations as right-nested tuples which, by our LISP convention, we write as unpunctuated lists of tags \(\sqr{\etag{t_0}\ldots\etag{t_n}}\). Secondly, we can denote an element \emph{by its name}: the type pushed in allows us to recover the numerical index. We retain the numerical forms to facilitate \emph{generic} operations and ensure that shadowing is punished fittingly, not fatally. Finally, we express functions from enumerations as tuples. Any tuple-form, \(\void\) or \(\pair{\_}{\_}{}\), is accepted by the function space---the generalised product---if it is accepted by the small product. Propagation fills in the appeal to $\SYMBswitch$, copying the range information. Our interactive development tools also perform the reverse transformation for intelligible output. The encoding of any specific enumeration is thus hidden by these translations. Only, and rightly, in enumeration-generic programs is the encoding exposed. \begin{wstructure} <- Summary -> Not a novel technique [refs?] /> Used as a boilerplate scrapper -> Make dealing with codes *practical* <- Example: Finite sets/finite function space -> We should not restrain our self in using codes <- We know how to present them to the user -> Will extend this machinery in further sections \end{wstructure} %In this section, we have developed a type propagation system based on %bidirectional type-checking. Using bidirectional type-checking as a %boilerplate scrapper is a well-known %technique~\cite{pierce:bidirectional-tc, % xi:bidirectional-tc-bound-array, chlipala:strict-bidirectional-tc} %\note{Everybody ok with the citations?}. 
In our case, we have shown %how to instrument bidirectionality to rationalise the expressivity of %dependent types. We have illustrated our approach with finite sets. We %have abstracted away the low-level presentation of finite sets, %offering a convenient syntax instead. %This example teaches us that we should not be afraid of codes, as soon %as type information is available. We have shown how to rationalise %this information in a formal presentation. Hence, we have shown that %programming with such objects is practical. As we introduce more codes %in our theory, we will show how to extend the framework we have %developed so far. Our type propagation mechanism does no constraint solving, just copying, so it is just the thin end of the elaboration wedge. It can afford us this `assembly language' level of civilisation as \(\EnumU\) universe specifies not only the \emph{representation} of the low-level values in each set as bounded numbers, but also the \emph{presentation} of these values as high-level tags. To encode only the former, we should merely need the \emph{size} of enumerations, but we extract more work from these types by making them more informative. We have also, \emph{en passant}, distinguished enumerations which have the same cardinality but describe distinct notions: \(\EnumT{\sqr{\etag{\CN{red}}\,\etag{\CN{blue}}}}\) is not \(\EnumT{\sqr{\etag{\CN{green}}\,\etag{\CN{orange}}}}\).
{ "alphanum_fraction": 0.7193528786, "avg_line_length": 40.3246554364, "ext": "tex", "hexsha": "6c2668c13fb93844c8ca0d47c509a27aed0d66c9", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2022-02-11T01:57:40.000Z", "max_forks_repo_forks_event_min_datetime": "2016-08-14T21:36:35.000Z", "max_forks_repo_head_hexsha": "8c46f766bddcec2218ddcaa79996e087699a75f2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mietek/epigram", "max_forks_repo_path": "papers/icfp-2010-desc/paper_type_theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8c46f766bddcec2218ddcaa79996e087699a75f2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mietek/epigram", "max_issues_repo_path": "papers/icfp-2010-desc/paper_type_theory.tex", "max_line_length": 583, "max_stars_count": 48, "max_stars_repo_head_hexsha": "8c46f766bddcec2218ddcaa79996e087699a75f2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mietek/epigram", "max_stars_repo_path": "papers/icfp-2010-desc/paper_type_theory.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-11T01:55:28.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-09T17:36:19.000Z", "num_tokens": 7069, "size": 26332 }
\documentclass[11pt, oneside]{article}   % use "amsart" instead of "article" for AMSLaTeX format

% \usepackage{draftwatermark}
% \SetWatermarkText{Draft}
% \SetWatermarkScale{5}
% \SetWatermarkLightness {0.9}
% \SetWatermarkColor[rgb]{0.7,0,0}

\usepackage{geometry}                    % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper}                   % ... or a4paper or a5paper or ...
%\geometry{landscape}                    % Activate for rotated page geometry
%\usepackage[parfill]{parskip}           % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}                    % Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode
                                         % TeX will automatically convert eps --> pdf in pdflatex
\usepackage{amssymb}
\usepackage{mathrsfs}
\usepackage{hyperref}
\usepackage{url}
\usepackage{subcaption}
\usepackage{authblk}
\usepackage{amsmath}
\usepackage{mathtools}
\usepackage[export]{adjustbox}
\usepackage{fixltx2e}
\usepackage{alltt}
\usepackage{color}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{float}
\usepackage{bigints}
\usepackage{braket}
\usepackage{siunitx}

%
% so you can do e.g., \begin{bmatrix}[r] (or [c] or [l])
%
\makeatletter
\renewcommand*\env@matrix[1][c]{\hskip -\arraycolsep
\let\@ifnextchar\new@ifnextchar
\array{*\c@MaxMatrixCols #1}}
\makeatother

\newcommand{\argmax}{\operatornamewithlimits{argmax}}
\newcommand{\argmin}{\operatornamewithlimits{argmin}}

\title{A Few Notes on Bell States, Superdense Coding, and Quantum Teleportation}
\author{David Meyer \\ dmm@\{1-4-5.net,uoregon.edu\}}
\date{Last update: \today}               % Activate to display a given date or no date

\begin{document}
\maketitle

\section{Introduction}
The Bell Circuit, shown in Figure \ref{fig:bell_circuit}, is composed of two gates, $H$ and CNOT, which are defined as follows:

\begin{flalign*}
H = \frac{1}{\sqrt{2}}
\begin{bmatrix}[r]
1 &  1 \\
1 & -1
\end{bmatrix}, \quad
\text{CNOT} =
\begin{bmatrix}[r]
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{bmatrix}
\end{flalign*}

\bigskip
\noindent
and results in two maximally entangled qubits\footnote{This state is sometimes called an \emph{EPR} state.}. How does this work?

\bigskip
\noindent
First, recall that

\begin{flalign*}
H \ket{0} &= \frac{1}{\sqrt{2}} \big ( \ket{0} + \ket{1} \big ) \text{ and } H \ket{1} = \frac{1}{\sqrt{2}} \big ( \ket{0} - \ket{1} \big )
\end{flalign*}

\bigskip
\noindent
The Bell Circuit applies $H$ to $\ket{b_0}$ and then applies the CNOT gate to $H\ket{b_0}$ (control qubit) and $\ket{b_1}$ (target qubit). The inputs and evolution of the Bell Circuit are shown in Table \ref{tab:bell_state}.
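\bigskip
\noindent
As a quick numerical sanity check (this snippet is not part of the original notes; it assumes NumPy is available), we can reproduce the $b_0b_1 = 00$ row of Table \ref{tab:bell_state} directly from the matrices above:

\begin{verbatim}
import numpy as np

ket0 = np.array([1.0, 0.0])              # |0>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Bell circuit on |b0 b1> = |00>: H on the first qubit, then CNOT
state = np.kron(H @ ket0, ket0)          # (H|0>) tensor |0>
print(CNOT @ state)                      # [0.707 0 0 0.707] = (|00>+|11>)/sqrt(2) = |phi+>
\end{verbatim}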
\bigskip \begin{figure} \center{\includegraphics[scale=0.45, frame] {images/bell_forward_circuit.png}} \caption{Bell Circuit} \label{fig:bell_circuit} \end{figure} \begin{table}[H] \centering \begin{tabular}{c | c | c | c | c} $b_{0} b_{1}$ & $H \ket{b_0}$ & $\ket{b_1}$ & Bell Circuit evolution with inputs $\ket{b_0}$ and $\ket{b_1}$ & Bell State\\ \hline 00 & $H\ket{0}$ & $\ket{0}$ & $\ket{0} \xrightarrow{\scriptsize H} \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \xrightarrow{\otimes \ket{0}} \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \ket{0} \xrightarrow{\scriptsize \text{CNOT}} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11})$ & $\ket{\phi^+}$ \\ 01 & $H\ket{0}$ & $\ket{1}$ & $ \ket{0} \xrightarrow{\scriptsize H} \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \xrightarrow{\scriptsize \otimes \ket{1}} \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \ket{1} \xrightarrow{\scriptsize \text{CNOT}} \frac{1}{\sqrt{2}} (\ket{01} + \ket{10})$\ & $\ket{\psi^+}$ \\ 10 & $H\ket{1}$ & $\ket{0}$ & $\ket{1} \xrightarrow{\scriptsize H} \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \xrightarrow{\scriptsize \otimes \ket{0}} \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \ket{0} \xrightarrow{\scriptsize \text{CNOT}} \frac{1}{\sqrt{2}} (\ket{00} - \ket{11})$ & $\ket{\phi^-}$ \\ 11 & $H\ket{1}$ & $\ket{1}$ & $\ket{1} \xrightarrow{\scriptsize H} \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \xrightarrow{\scriptsize \otimes \ket{1}} \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \ket{1} \xrightarrow{\scriptsize \text{CNOT}} \frac{1}{\sqrt{2}} (\ket{01} - \ket{10})$ & $\ket{\psi^-}$ \end{tabular} \caption{Bell States} \label{tab:bell_state} \end{table} \bigskip \noindent What we can see from Table \ref{tab:bell_state} is that $b_0$ selects the "bit" ($\ket{\phi}$ or $\ket{\psi}$), and $b_1$ selects the "sign" ($\ket{+}$ or $\ket{-}$). Since there are four orthonormal states, the \emph{Bell basis}, we can encode two bits ($b_0$ and $b_1$) in the four Bell States. \bigskip \noindent Now, if Alice wants to send two classical bits to Bob (\emph{superdense coding}) using one qubit, she need only transform her qubit\footnote{Her half of the EPR pair, the two entangled qubits.} into the Bell State corresponding to the two bits she wants to send, then send her half to Bob (this requires a \emph{quantum} channel). Bob can then recover Alice's two bit message. \bigskip \noindent But how can Bob recover Alice's message? Recall that unitary quantum operations are reversible. So Bob can use the Reverse Bell Circuit shown in Figure \ref{fig:reverse_bell_circuit} to recover Alice's 2 bit message. \bigskip \begin{figure}[H] \center{\includegraphics[scale=0.45, frame] {images/bell_reverse_circuit.png}} \caption{Reverse Bell Circuit} \label{fig:reverse_bell_circuit} \end{figure} \section{Superdense Coding} Suppose Alice wants to send Bob the message $00$. Alice can perform one or more unitary operations on her qubit (her half of the entangled pair) that will allow Bob, when presented with Alice's qubit, to reconstruct Alice's message $b_0b_1$. If we run $\ket{\phi^+}$ through the circuit in Figure \ref{fig:reverse_bell_circuit}, that is, $\ket{\phi^+} \xrightarrow {\scriptsize \text{CNOT}} \xrightarrow {\scriptsize \text{ H }} \ket{b_0b_1}$, Bob will recover Alice's message ($b_0b_1 = 00$). Why is this? 
\begin{flalign*}
\ket{\phi^+} &= \frac{1}{\sqrt{2}} (\ket{00} + \ket{11})  \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{0} \ket{0} + \ket{1} \ket{1})
\xrightarrow {\scriptsize \text{CNOT}}  \frac{1}{\sqrt{2}} (\ket{00} + \ket{10}) \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{00} + \ket{10})
\xrightarrow {\scriptsize \text{ H }}  \frac{1}{\sqrt{2}} \Big (  \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \ket{0} +  \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \ket{0} \Big) \\
&= \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}} \Big (   (\ket{0} + \ket{1}) \ket{0} +  (\ket{0} - \ket{1}) \ket{0} \Big ) \\
&= \frac{1}{2} \big (\ket{00} + \ket{10} + \ket{00} - \ket{10} \big ) \\
&= \frac{1}{2} \big (2 \ket{00} + (\ket{10} - \ket{10}) \big ) \\
&= \frac{1}{2} \cdot 2 \ket{00} \\
&= \ket{00}
\end{flalign*}

\bigskip
\noindent
Bob can now measure both qubits and recover Alice's message ($b_0b_1 = 00$).

\bigskip
\noindent
In general, Alice notices that

\begin{itemize}
\item To send \textbf{00}, apply the Identity matrix $\mathbf{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ to her half of the EPR pair
\item To send \textbf{01}, apply the matrix $\mathbf{X} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ to her half of the EPR pair
\item To send \textbf{10}, apply the matrix $\mathbf{Z} = \begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix}$ to her half of the EPR pair
\item To send \textbf{11}, apply $i\mathbf{Y} = i \begin{bmatrix}[r] 0 & -i \\ i & 0 \end{bmatrix}$, i.e.\ both $\mathbf{X}$ and $\mathbf{Z}$, to her half of the EPR pair
\end{itemize}

\bigskip
\noindent
where $\mathbf{I}$, $\mathbf{X}$, $\mathbf{Y}$ and $\mathbf{Z}$ are the \emph{Pauli} matrices \cite{wiki:pauli_matrices}.

\bigskip
\noindent
This transforms the EPR pair $\ket{\phi^+}$ into the four Bell States $\ket{\phi^+}$, $\ket{\psi^+}$, $\ket{\phi^-}$ and $\ket{\psi^-}$ respectively:

\begin{itemize}
\item \textbf{00}: $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) \longrightarrow \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \ket{\phi^+}$

\item \textbf{01}: $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) \longrightarrow \frac{1}{\sqrt{2}} (\ket{01} + \ket{10}) = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix} = \ket{\psi^+}$

\item \textbf{10}: $\begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) \longrightarrow \frac{1}{\sqrt{2}} (\ket{00} - \ket{11}) = \frac{1}{\sqrt{2}} \begin{bmatrix}[r] 1 \\ 0 \\ 0 \\ -1 \end{bmatrix} = \ket{\phi^-}$

\item \textbf{11}: $i \begin{bmatrix}[r] 0 & -i \\ i & 0 \end{bmatrix} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) \longrightarrow \frac{1}{\sqrt{2}} (\ket{01} - \ket{10}) = \frac{1}{\sqrt{2}} \begin{bmatrix}[r] 0 \\ 1 \\ -1 \\ 0 \end{bmatrix} = \ket{\psi^-}$
\end{itemize}

\bigskip
\noindent
The four Bell states $\ket{\phi^+}$, $\ket{\psi^+}$, $\ket{\phi^-}$ and $\ket{\psi^-}$ are orthonormal and are hence distinguishable by quantum measurement. Thus after receiving Alice's transformed qubit (her half of the EPR pair), Bob can measure both qubits and recover $b_0b_1$. Hence one qubit carries two classical bits of information; this is superdense coding. We saw an example of this above, in which Bob recovered $\ket{00}$ from $\ket{\phi^+}$ using the Reverse Bell Circuit depicted in Figure \ref{fig:reverse_bell_circuit}.

\subsection{Aside: Spectral Decomposition of Pauli Matrices}
So far we've interpreted the Pauli matrices as quantum gates.
But note that a gate such as \textbf{Z} is a Hermitian operator and as a result can be interpreted as an observable. Somewhat surprisingly (notice the symmetry), the spectral decomposition \cite{2014arXiv1405.5749S} of \textbf{Z} is

\begin{flalign*}
\mathbf{Z} =  \ket{0}\bra{0} - \ket{1}\bra{1}
\end{flalign*}

\bigskip
\noindent
where $\ket{u}\bra{v}$ is Dirac notation \cite{2000RPPh...63.1893G} for the outer product $\mathbf{u} \otimes \mathbf{v} =  \mathbf{u} \mathbf{v}^{\text{T}}$ of the $m \times 1$ vector \textbf{u} and the $n \times 1$ vector \textbf{v}, which yields an $m \times n$ matrix\footnote{The outer product of vectors \textbf{u} and \textbf{v} is a special case of the tensor product $\mathbf{u} \otimes \mathbf{v}$. More generally, the outer product is an instance of a Kronecker product \cite{wiki:kronecker}.}.

\bigskip
\noindent
We can see that the eigenvalues of \textbf{Z} are 1 and -1, corresponding to eigenvectors $\ket{0}$ and $\ket{1}$ respectively. So the measurement operators are the projectors $\ket{0}\bra{0}$ and $\ket{1}\bra{1}$. This means that a measurement of the Pauli observable \textbf{Z} is a measurement in the computational basis that has eigenvalue +1 corresponding to $\ket{0}$ and eigenvalue -1 corresponding to $\ket{1}$.

\bigskip
\noindent
So ok, but why does $\mathbf{Z} = \ket{0}\bra{0} - \ket{1}\bra{1}$? Well, we know that the outer product $\mathbf{u} \otimes \mathbf{v}$ of an $m \times 1$ vector \textbf{u} and an $n \times 1$ vector \textbf{v} is defined to be the $m \times n$ matrix\footnote{Contrast with the scalar inner product $\langle \mathbf{u}, \mathbf{v} \rangle =  \mathbf{u}^{\text{T}} \mathbf{v}$. Note also that $\langle \mathbf{u}, \mathbf{v} \rangle = \text{tr}(\mathbf{u} \otimes \mathbf{v})$, where $\text{tr}(\mathbf{A})$ is the ``trace'' of matrix \textbf{A}.} $\mathbf{u} \mathbf{v}^{\text{T}}$.

\bigskip
\noindent
To see why $\mathbf{Z} = \ket{0}\bra{0} - \ket{1}\bra{1}$, first recall that $\ket{0} = \begin{bmatrix} 1 \\0 \end{bmatrix}$ and  $\ket{1} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Then

\begin{flalign*}
\ket{0}\bra{0} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix}^{\text{T}} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \text{ and} \\
\ket{1}\bra{1} &= \begin{bmatrix} 0 \\ 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}^{\text{T}} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \text{ so that} \\
\ket{0}\bra{0} - \ket{1}\bra{1} &= \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix} = \mathbf{Z}
\end{flalign*}

\bigskip
\subsection{Back to Alice wanting to send a message to Bob}
Now suppose Alice wants to send Bob the message 01. Alice then applies the Pauli matrix $\mathbf{X}$ to $\ket{\phi^+}$ to get $\ket{\psi^+}$:

\begin{equation*}
\mathbf{X} \ket{\phi^+} = \begin{bmatrix}[r] 0 & 1 \\ 1 & 0\end{bmatrix} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) = \frac{1}{\sqrt{2}} (\ket{01} + \ket{10}) = \ket{\psi^+}
\end{equation*}

\bigskip
\noindent
Bob can now recover Alice's message as follows, using the Reverse Bell Circuit (Figure \ref{fig:reverse_bell_circuit}).
That is, Bob can do the unitary operations $\ket{\psi^+} \xrightarrow {\scriptsize \text{CNOT}} \xrightarrow {\scriptsize \text{H}} \ket{01}$, as follows:

\begin{flalign*}
\ket{\psi^+} &= \frac{1}{\sqrt{2}} (\ket{01} + \ket{10})  \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{0} \ket{1} + \ket{1} \ket{0})
\xrightarrow {\scriptsize \text{CNOT}}  \frac{1}{\sqrt{2}} (\ket{01} + \ket{11}) \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{01} + \ket{11})
\xrightarrow {\scriptsize \text{ H }}  \frac{1}{\sqrt{2}} \Big (  \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \ket{1} +  \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \ket{1} \Big) \\
&= \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}} \Big (   (\ket{0} + \ket{1}) \ket{1} +  (\ket{0} - \ket{1}) \ket{1} \Big ) \\
&= \frac{1}{2} \big (\ket{01} + \ket{11} + \ket{01} - \ket{11} \big) \\
&= \frac{1}{2} \big (2 \ket{01} + (\ket{11} - \ket{11}) \big) \\
&= \frac{1}{2} \cdot 2 \ket{01} \\
&= \ket{01}
\end{flalign*}

\bigskip
\noindent
Now Bob can measure the two qubits and recover Alice's message ($b_0b_1 = 01$).

\bigskip
\noindent
Similarly, suppose Alice wants to send the message 10 to Bob. Alice first transforms her qubit as follows:

\begin{equation*}
\mathbf{Z} \ket{\phi^+} = \begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) = \frac{1}{\sqrt{2}} (\ket{00} - \ket{11}) = \ket{\phi^-}
\end{equation*}

\bigskip
\noindent
Alice now sends her qubit to Bob over a quantum channel. Bob can now recover Alice's message, again using the Reverse Bell Circuit ($\ket{\phi^-} \xrightarrow {\scriptsize \text{CNOT}} \xrightarrow {\scriptsize \text{ H }} \ket{10}$). Again, why is this?

\begin{flalign*}
\ket{\phi^-} &= \frac{1}{\sqrt{2}} (\ket{00} - \ket{11})  \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{0} \ket{0} - \ket{1} \ket{1})
\xrightarrow {\scriptsize \text{CNOT}}  \frac{1}{\sqrt{2}} (\ket{00} - \ket{10}) \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{00} - \ket{10})
\xrightarrow {\scriptsize \text{ H }}  \frac{1}{\sqrt{2}} \Big (  \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \ket{0} -  \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \ket{0} \Big) \\
&= \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}} \Big (   (\ket{0} + \ket{1}) \ket{0} -  (\ket{0} - \ket{1}) \ket{0} \Big ) \\
&= \frac{1}{2} \big (\ket{00} + \ket{10} - \ket{00} + \ket{10} \big ) \\
&= \frac{1}{2} \big (2 \ket{10} + (\ket{00} - \ket{00}) \big ) \\
&= \frac{1}{2} \cdot 2 \ket{10} \\
&= \ket{10}
\end{flalign*}

\bigskip
\noindent
Now Bob can measure the two qubits and recover Alice's message ($b_0b_1 = 10$).
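\bigskip
\noindent
The same check can be done numerically (this snippet is not part of the original notes; it assumes NumPy). Building the Reverse Bell Circuit as a single matrix and applying it to $\ket{\psi^+}$ returns $\ket{01}$, exactly as derived above:

\begin{verbatim}
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Reverse Bell Circuit: CNOT first, then H on the first qubit
reverse_bell = np.kron(H, I2) @ CNOT

# Basis order |00>, |01>, |10>, |11>
psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)
print(np.round(reverse_bell @ psi_plus, 10))     # [0. 1. 0. 0.] = |01>
\end{verbatim}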
\bigskip
\noindent
Finally, if Alice wants to send $11$ to Bob, she first transforms her qubit:

\begin{equation*}
i \mathbf{Y} \ket{\phi^+} = \begin{bmatrix}[r] 0 & -i \\ i & 0 \end{bmatrix} \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) = \frac{1}{\sqrt{2}} (\ket{01} - \ket{10}) = \ket{\psi^-}
\end{equation*}

\bigskip
\noindent
Alice now transmits her qubit to Bob and Bob applies the Reverse Bell Circuit to recover Alice's message:

\begin{flalign*}
\ket{\psi^-} &= \frac{1}{\sqrt{2}} (\ket{01} - \ket{10})  \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{0} \ket{1} - \ket{1} \ket{0})
\xrightarrow {\scriptsize \text{CNOT}}  \frac{1}{\sqrt{2}} (\ket{01} - \ket{11}) \longrightarrow \\
& \frac{1}{\sqrt{2}} (\ket{01} - \ket{11})
\xrightarrow {\scriptsize \text{ H }}  \frac{1}{\sqrt{2}} \Big (  \frac{1}{\sqrt{2}} (\ket{0} + \ket{1}) \ket{1} -  \frac{1}{\sqrt{2}} (\ket{0} - \ket{1}) \ket{1} \Big) \\
&= \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}} \Big (   (\ket{0} + \ket{1}) \ket{1} -  (\ket{0} - \ket{1}) \ket{1} \Big ) \\
&= \frac{1}{2} \big (\ket{01} + \ket{11} - \ket{01} + \ket{11} \big) \\
&= \frac{1}{2} \big (2 \ket{11} + (\ket{01} - \ket{01}) \big) \\
&= \frac{1}{2} \cdot 2 \ket{11} \\
&= \ket{11}
\end{flalign*}

\bigskip
\noindent
Now Bob can measure the two qubits and recover Alice's message ($b_0b_1 = 11$).

\section{Quantum Teleportation}
Quantum teleportation can be thought of as the dual task to superdense coding. Whereas superdense coding is concerned with conveying classical information via a qubit, quantum teleportation is concerned with conveying quantum information with classical bits \cite{Bennett:1992tv}.

\subsection{A high-level view of the quantum teleportation algorithm}
\begin{enumerate}
\item Alice and Bob share an entangled (EPR) pair $\ket{\phi^+}$
\item Alice chooses a qubit $\ket{\psi}$ as the message she wants to convey to Bob
\item Alice performs operations on $\ket{\psi}$ and $\ket{\phi^{+}_A}$ (Alice's half of $\ket{\phi^+}$)
\item Alice measures $\ket{\psi}$ and $\ket{\phi^{+}_A}$, destroying both of her qubits
\item Alice sends the two classical bits that were the results of her measurements to Bob
\item Bob uses the two classical bits to ``correct'' $\ket{\phi^{+}_B}$ (his half of $\ket{\phi^+}$) to be $\ket{\psi}$
\end{enumerate}

\bigskip
\noindent
Alice uses the circuit in Figure \ref{fig:a_reverse_bell_circuit} to prepare her two qubits (step 3 above). How exactly does this work? First, notice that the input to the Reverse Bell Circuit shown in Figure \ref{fig:a_reverse_bell_circuit} is $\ket{\psi} \otimes \ket{\phi^{+}_A}$. To see how this works, first recall that $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$.
Then

\begin{figure}[t]
\center{\includegraphics[scale=0.45, frame] {images/a_reverse_bell_circuit.png}}
\caption{Reverse Bell Circuit}
\label{fig:a_reverse_bell_circuit}
\end{figure}

\begin{flalign*}
\ket{\psi} \otimes \ket{\phi^{+}_A} &= (\alpha \ket{0} + \beta \ket{1}) \otimes \frac{1}{\sqrt{2}} (\ket{00} + \ket{11}) \\
&= \frac{1}{\sqrt{2}} \Big ( \alpha (\ket{000} + \ket{011}) + \beta (\ket{100} + \ket{111}) \Big )
\xrightarrow {\scriptsize \text{CNOT}} \quad\qquad \mathrel{\#} \ket{b_0b_1b_2}: b_0 \text{ is control, } b_1 \text{ is target} \\
& \frac{1}{\sqrt{2}} \Big ( \alpha (\ket{000} + \ket{011}) + \beta (\ket{110} + \ket{101}) \Big )
\xrightarrow {\scriptsize \text{H}} \; \qquad \qquad\qquad \mathrel{\#} \text{apply $H$ to $b_0$} \\
& \frac{1}{\sqrt{2}} \bigg [ \alpha \, \frac{1}{\sqrt{2}} \big (\ket{0} + \ket{1} \big ) \ket{00} + \alpha \, \frac{1}{\sqrt{2}} \big (\ket{0} + \ket{1} \big ) \ket{11} + \beta \, \frac{1}{\sqrt{2}} \big (\ket{0} - \ket{1} \big ) \ket{10} + \beta \, \frac{1}{\sqrt{2}} \big (\ket{0} - \ket{1} \big ) \ket{01} \bigg ]\\
&= \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}} \bigg [ \alpha \Big ( \big (\ket{0} + \ket{1} \big ) \ket{00} + \big (\ket{0} + \ket{1} \big ) \ket{11} \Big) + \beta \Big ( \big (\ket{0} - \ket{1} \big ) \ket{10} + \big ( \ket{0} - \ket{1} \big ) \ket{01} \Big ) \bigg] \\
&= \frac{1}{2} \bigg [ \alpha \Big (\ket{000} + \ket{100} + \ket{011} + \ket{111} \Big) + \beta \Big (\ket{010} - \ket{110} + \ket{001} - \ket{101} \Big ) \bigg ] \\
&= \frac{1}{2} \bigg [ \alpha \ket{000} + \alpha \ket{100} + \alpha \ket{011} + \alpha \ket{111} + \beta \ket{010} - \beta\ket{110} + \beta \ket{001} - \beta \ket{101} \bigg ]
\end{flalign*}

\bigskip
\noindent
Now Alice measures her two qubits ($\ket{\psi}$ and $\ket{\phi^{+}_A}$) and observes $b_0b_1 \in \{00, 01, 10, 11\}$, each with $P(b_0b_1) = \frac{1}{4}$.

\bigskip
\noindent
Now here's the amazing thing. If Alice observes $00$, she communicates this to Bob (over a classical channel). As soon as Bob sees the value $00$, he knows that his qubit $\ket{\phi^{+}_{B}} = \alpha \ket{0} + \beta \ket{1}$. How does Bob know this?

\bigskip
\noindent
First, as shown above,

\begin{flalign}
\label{eqn:psi_otimes_phi+}
\ket{\psi} \otimes \ket{\phi^+} = \frac{1}{2} \bigg [ \alpha \ket{000} + \alpha \ket{100} + \alpha \ket{011} + \alpha \ket{111} + \beta \ket{010} - \beta\ket{110} + \beta \ket{001} - \beta \ket{101} \bigg ]
\end{flalign}

\bigskip
\noindent
Alice's measurement of the first two qubits collapses Bob's qubit to the third qubit\footnote{Recall that the original three qubits were $\ket{\psi} \otimes \ket{\phi^{+}_{AB}}$.}. The only terms in Equation \ref{eqn:psi_otimes_phi+} that are consistent with the first two qubits being $\ket{00}$ (resulting from Alice's measurement) are $\alpha \ket{000}$ and $\beta \ket{001}$. The ``collapsed version'' is therefore $\alpha \ket{0} + \beta \ket{1}$. Hence Bob knows that his qubit, $\ket{\phi^{+}_B}$, equals $\alpha \ket{0} + \beta \ket{1}$.

\bigskip
\noindent
Since Alice sent the two bits she saw to Bob, he knows which operations to perform to transform $\ket{\phi^{+}_B} \rightarrow \ket{\psi}$. In particular, if $b_0 = 1$ Bob should apply the Pauli matrix $Z$ to his qubit ($I$ otherwise), and if $b_1 = 1$ he should apply $X$ ($I$ otherwise). This transforms $\ket{\phi^{+}_B}$, Bob's qubit, into $\ket{\psi}$. This is shown in Table \ref{tab:bob}.
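\bigskip
\noindent
As a quick numerical check of the corrections listed in Table \ref{tab:bob} below (this snippet is not part of the original notes; it assumes NumPy, and the amplitudes $\alpha = 0.6$, $\beta = 0.8$ are just an example), every measurement outcome maps Bob's collapsed qubit back to $\ket{\psi}$:

\begin{verbatim}
import numpy as np

alpha, beta = 0.6, 0.8                  # example amplitudes, |alpha|^2 + |beta|^2 = 1
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

# Bob's collapsed qubit for each measurement outcome b0 b1,
# and the correction he applies (Table tab:bob)
cases = {"00": (np.array([alpha,  beta]), I2),
         "01": (np.array([beta,  alpha]), X),
         "10": (np.array([alpha, -beta]), Z),
         "11": (np.array([beta, -alpha]), X @ Z)}

for bits, (state, correction) in cases.items():
    print(bits, correction @ state)     # always [0.6 0.8] = alpha|0> + beta|1>
\end{verbatim}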
\bigskip \noindent Amazingly this procedure teleports Alice's qubit $\ket{\psi}$ to Bob using the two classical bits that Alice learned by measuring her two qubits ($\ket{\psi}$ and $\ket{\phi^{+}_A}$). \bigskip \begin{table}[H] \centering \begin{tabular}{c | c | r | r} $b_{0} b_{1}$ & $\ket{\phi^{+}_B}$ & \multicolumn{1}{c|}{Transformation} & \multicolumn{1}{c}{Computation} \\ \hline \textbf{00} & $\alpha \ket{0} + \beta \ket{1}$ & $\mathbf{I} \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ & $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \ket{0} + \beta \ket{1} = \ket{\psi}$ \\ \textbf{01} & $\beta \ket{0} + \alpha \ket{1}$ & $\mathbf{X}\begin{bmatrix} \beta \\ \alpha \end{bmatrix}$ & $ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \beta \\ \alpha \end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \ket{0} + \beta \ket{1} = \ket{\psi}$ \\ \textbf{10} & $\alpha \ket{0} - \beta \ket{1}$ & $\mathbf{Z} \begin{bmatrix}[r] \alpha \\ -\beta \end{bmatrix}$ & $\begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix}[r] \alpha \\ -\beta \end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \ket{0} + \beta \ket{1} = \ket{\psi}$ \\ \textbf{11} & $\beta \ket{0} - \alpha \ket{1} $ & $\mathbf{XZ} \begin{bmatrix}[r] \beta \\ - \alpha \end{bmatrix}$ & $ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix}[r] \beta \\ - \alpha \end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \ket{0} + \beta \ket{1} = \ket{\psi}$ \\ \end{tabular} \caption{Bob's transformations on receiving classical bits $\mathbf{b_0b_1}$ from Alice} \label{tab:bob} \end{table} \subsection{Curious Entry for \textbf{11} in Table \ref{tab:bob}?} Note that the row for the result of Alice's measurement \textbf{11} in Table \ref{tab:bob} is curious. When Bob sees \textbf{11} from Alice he knows that his remaining qubit $\ket{\psi^{+}_B}$, equals $- \beta \ket{0} + \alpha \ket{1}$. Why does the table say $\beta \ket{0} - \alpha \ket{1}$? \bigskip \noindent Here is one way to look at this: First, recall that when Bob receives classical bits \textbf{11} from Alice he knows that his qubit, $\ket{\psi^{+}_B}$, is \begin{flalign*} \ket{\psi^{+}_B} &= - \beta \ket{0} + \alpha \ket{1} = \begin{bmatrix} -\beta \\ \alpha \end{bmatrix} \end{flalign*} \bigskip \noindent Now, if Bob now wants to transform $\ket{\psi^{+}_B} \rightarrow \ket{\psi}$, he would apply $\mathbf{ZX}$ as follows \begin{flalign*} \mathbf{ZX} \ket{\psi^{+}_B} &= \begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} -\beta \\ \alpha \end{bmatrix} \\ &= \begin{bmatrix}[r] 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} \alpha \\ - \beta \end{bmatrix} \\ &= \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ &= \alpha \ket{0} + \beta \ket{1} \\ &= \ket{\psi} \end{flalign*} \bigskip \noindent But our rule (Table \ref{tab:bob}) tells Bob to apply $\mathbf{XZ}$ when he sees \textbf{11} from Alice. Why? 
Notice the following: \begin{flalign*} \mathbf{ZX} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} &= \mathbf{Z} \begin{bmatrix} x_1 \\ x_0 \end{bmatrix} = \begin{bmatrix}[r] x_1 \\ - x_0 \end{bmatrix} \\ \mathbf{XZ} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} &= \mathbf{X} \begin{bmatrix}[r] x_0 \\ - x_1 \end{bmatrix} = \begin{bmatrix}[r] - x_1 \\ x_0 \end{bmatrix} \end{flalign*} \bigskip \noindent which implies that \bigskip \begin{equation} \mathbf{ZX} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} = - \mathbf{XZ} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} \label{eqn:equal} \end{equation} \bigskip \bigskip \noindent So now let $x_0 = \beta$ and $x_1 = \alpha$. Then \begin{flalign*} \mathbf{XZ} \Big [\beta \ket{0} - \alpha \ket{1} \Big ] = \mathbf{XZ} \begin{bmatrix}[r] \beta \\ - \alpha \end{bmatrix} = \mathbf{X} \begin{bmatrix} \beta \\ \alpha \end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \ket{0} + \beta \ket{1} = \ket{\psi} \end{flalign*} \bigskip \noindent and $- \big (\beta \ket{0} - \alpha \ket{1} \big )= -\beta \ket{0} + \alpha \ket{1} \longrightarrow$ \begin{flalign*} \mathbf{ZX} \Big [ -\beta \ket{0} + \alpha \ket{1} \Big ] = \mathbf{ZX} \begin{bmatrix}[r] - \beta \\ \alpha \end{bmatrix} = \mathbf{Z} \begin{bmatrix}[r] \alpha \\ - \beta\end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \ket{0} + \beta \ket{1} = \ket{\psi} \end{flalign*} \bigskip \bigskip \noindent The choice of the transformation rules shown in Table \ref{tab:bob} and Equation \ref{eqn:equal} allows us to write $\beta \ket{0} - \alpha \ket{1}$ rather than $- \beta \ket{0} + \alpha \ket{1}$. \bigskip \noindent Why do this? One thing it does is make the symmetry in Table \ref{tab:bob} more explicit, but hopefully there is a better reason... \subsubsection{Cloning and/or Faster Than Light Communication?} First, no faster-than-light communication occurs since Bob learns nothing from the changes until Alice actually sends the two classical bits to him (even though Alice operating on $\ket{\phi^{+}_A}$ instantly affects $\ket{\phi^{+}_B}$). \bigskip \noindent The No-Cloning Theorem \cite{2018arXiv180804213E} is not violated since, even though Bob has an exact copy of $\ket{\psi}$, Alice had to destroy her copy (by measuring it). \bigskip \noindent Finally, an interesting point is that neither Alice or Bob ever "know" what $\ket{\psi}$ is (in terms of its actual amplitudes); all they know is that it was transferred (whatever it was). \section{Bell and CHSH} \bigskip \noindent \section{Acknowledgements} \newpage \bibliographystyle{plain} \bibliography{/Users/dmm/papers/bib/qc} \end{document}
\section{Related Work}
\label{sec:related_work}

\noindent Several papers have already analyzed security and privacy concerns in the Lightning Network. \cite{rohrer2019discharged} focuses on channel-based attacks and proposes methods to exhaust a victim's channels via malicious routing (up to potentially total isolation from the victim's neighbors) and to deny service to a victim via malicious HTLC construction.
\newline

\noindent \cite{tochner2019hijacking} proposes a denial-of-service attack: the attacker creates low-fee channels to other nodes, which are then naturally used to route payments for fee-minimizing network participants, and then drops the payment packets, forcing the sender to await the expiration of the already set-up HTLCs.
\newline

\noindent \cite{balancehiding} provides a closer look into the privacy-performance trade-off inherent in LN routing. The authors also propose an attack to discover channel balances within the network.
\newline

\noindent \cite{wang2019flash} examines the LN routing process in more detail and proposes a split routing approach, dividing payments into large and small transactions. The authors show that by routing large payments dynamically to avoid superfluous fees and by routing small payments via a lookup mechanism to reduce excessive probing, the overall success rate can be maintained while significantly reducing performance overhead.
\newline

\noindent General information on the background of the Lightning Network, along with technical specifications, can be found at \cite{ln_whitepaper}, \cite{lightningrfc} and \cite{lnbook}. The c-lightning GitHub repository can be found at \cite{clightninggit}.
\begin{landscape} \subsection{Experiment Components} \label{components} \label{sec:experiment-components} Component tables were generated from the project budget spreadsheet in Appendix \ref{sec:appO} using the scripts included in Appendix \ref{sec:appK}. \subsubsection{Electrical Components} Table \ref{tab:components-table-electrical} shows all required electrical components with their total mass and price.\\ \input{4-experiment-design/tables/component-table-electronics.tex} \end{landscape} \begin{landscape} \subsubsection{Mechanical Components} Table \ref{tab:components-table-mechanical} shows all required mechanical components with their total mass and price.\\ \input{4-experiment-design/tables/component-table-mechanical.tex} \raggedbottom \end{landscape} \begin{landscape} \subsubsection{Other Components} Table \ref{tab:component-table-other} shows other components which contribute to the mass and/or price.\\ \input{4-experiment-design/tables/component-table-other.tex} \raggedbottom \end{landscape}
\documentclass[a4paper]{article} \usepackage{listings} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{graphicx} \usepackage[colorinlistoftodos]{todonotes} \title{COL331 Assignment 1 \\ \large Process Management and System Calls} \author{Utkarsh Singh, 2015ME10686} \date{\today} \begin{document} \maketitle \section*{Objective} To modify the xv6 kernel to achieve the following functionalities: \begin{enumerate} \item System Call Tracing \begin{enumerate} \item Print out a line for each system call invocation, and the number of times it has been called. \item Introduce a sys\_toggle() system call to toggle the system trace printed on the screen. \end{enumerate} \item sys\_add() System Call - To add a system call that takes two integer arguments and return their sum. \item sys\_ps() System Call - To add a system call that prints a list of all the current running processes. \end{enumerate} \section*{Procedure} Each part is implemented in the order mentioned above. \subsection*{Part 1(a)} In order to print out the system trace along with it's count, the following two global arrays are introduced in \textit{syscall.c} \newline \begin{lstlisting} int count_syscalls[24];//Maintains count of each syscall char *name_syscalls[] //Array containing all syscall names = {"sys_fork","sys_exit","sys_wait","sys_pipe","sys_read", "sys_kill","sys_exec","sys_fstat","sys_chdir","sys_dup", "sys_getpid","sys_sbrk","sys_sleep","sys_uptime","sys_open", "sys_write","sys_mknod","sys_unlink","sys_link","sys_mkdir", "sys_close", "sys_toggle", "sys_add", "sys_ps"}; \end{lstlisting} \noindent (Note that the extra system calls that will be introduced later have already been accounted for in these arrays. Their declarations and definitions are explained later.) \newpage \noindent The count\_syscalls array keeps the count of the number of times each system call has been called since booting xv6, and the name\_syscalls array stores the names of all system calls. \newline \noindent The syscall() function in \textit{syscall.c} was modified as follows to print the system trace, along with keeping track of the system call count. Below is the syscall() function from \textit{syscall.c}, with the modifications mentioned as "added". \begin{lstlisting} void syscall(void) { int num; struct proc *curproc = myproc(); num = curproc->tf->eax; if(num > 0 && num < NELEM(syscalls) && syscalls[num]) { count_syscalls[num-1] = count_syscalls[num-1]+1; //added if(check_systrace()) //is explained later cprintf("%s %d\n", name_syscalls[num-1], count_syscalls[num-1]); //added curproc->tf->eax = syscalls[num](); } else { cprintf("%d %s: unknown sys call %d\n", curproc->pid, curproc->name, num); curproc->tf->eax = -1; } } \end{lstlisting} (Note that check\_systrace() is a part of Part 1(b) and will be defined later). \newline \newline This finishes Part 1(a). \subsection*{Part 1(b)} For this, we have to declare and define the sys\_toggle system call. toggle() will be the user function that will call the system call. This is done as follows. \begin{enumerate} \item Add the following in \textit{syscall.c}. \begin{lstlisting} extern int sys_toggle(void); extern int check_systrace(void); \end{lstlisting} To see how check\_systrace() is being used, refer to snippet in Part 1(a). \item Add the following in \textit{syscall.h}. \begin{lstlisting} #define SYS_toggle 22 \end{lstlisting} \item Add the following to the array of functions in \textit{syscall.c}. 
\begin{lstlisting}
[SYS_toggle]  sys_toggle,
\end{lstlisting}
\item Now let's define the sys\_toggle system call and check\_systrace() (it is a helper function). These definitions will be given in \textit{sysproc.c}. Add the following snippet to this file.
\newpage
\begin{lstlisting}
int systrace_mode;

int sys_toggle(void)
{
  systrace_mode = 1-systrace_mode;
  return 0;
}

int check_systrace(void)  //Not a syscall
{
  if(systrace_mode == 0)
    return 1;
  return 0;
}
\end{lstlisting}
\item Add the following in \textit{usys.S}.
\begin{lstlisting}
SYSCALL(toggle)
\end{lstlisting}
\item Finally add the following in \textit{user.h}.
\begin{lstlisting}
int toggle(void);
\end{lstlisting}
\end{enumerate}
This completes Part 1(b).

\subsection*{Part 2}
For this, we have to declare and define the sys\_add system call. add() will be the user function that will call the system call. This is done as follows.
\begin{enumerate}
\item Add the following in \textit{syscall.c}.
\begin{lstlisting}
extern int sys_add(void);
\end{lstlisting}
\item Add the following in \textit{syscall.h}.
\begin{lstlisting}
#define SYS_add 23
\end{lstlisting}
\item Add the following to the array of functions in \textit{syscall.c}.
\begin{lstlisting}
[SYS_add]  sys_add,
\end{lstlisting}
\item Now let's define the sys\_add system call. The definition will be given in \textit{sysproc.c}. Note that, like every other system call, it takes no C arguments; the two integers are fetched from the user stack with argint(). Add the following snippet to this file.
\newpage
\begin{lstlisting}
int sys_add(void)
{
  int a, b;

  argint(0, &a);
  argint(1, &b);
  return a+b;
}
\end{lstlisting}
\item Add the following in \textit{usys.S}.
\begin{lstlisting}
SYSCALL(add)
\end{lstlisting}
\item Finally add the following in \textit{user.h}.
\begin{lstlisting}
int add(int, int);
\end{lstlisting}
\end{enumerate}
This completes Part 2.

\subsection*{Part 3}
For this, we have to declare and define the sys\_ps system call. ps() will be the user function that will call the system call. This is done as follows.
\begin{enumerate}
\item Add the following in \textit{syscall.c}.
\begin{lstlisting}
extern int sys_ps(void);
\end{lstlisting}
\item Add the following in \textit{syscall.h}.
\begin{lstlisting}
#define SYS_ps 24
\end{lstlisting}
\item Add the following to the array of functions in \textit{syscall.c}.
\begin{lstlisting}
[SYS_ps]  sys_ps,
\end{lstlisting}
\item Now let's define the sys\_ps system call. The definition will be given in \textit{sysproc.c}. Add the following snippet to this file.
\newpage
\begin{lstlisting}
extern void get_pid_name(void);

int sys_ps(void)
{
  get_pid_name();
  return 0;
}
\end{lstlisting}
\item Add the following in \textit{usys.S}.
\begin{lstlisting}
SYSCALL(ps)
\end{lstlisting}
\item Finally add the following in \textit{user.h}.
\begin{lstlisting}
int ps(void);
\end{lstlisting}
\item The sys\_ps system call uses a function get\_pid\_name() which does the main job of printing the process name and id. The definition will be given in \textit{proc.c}. Add the following snippet to this file.
\begin{lstlisting}
void get_pid_name(void)
{
  struct proc *p;

  // Walk the process table and print every live process.
  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
  {
    int check = p->state;
    if(check == UNUSED || check == EMBRYO || check == ZOMBIE)
      continue;
    cprintf("pid:%d name:%s\n", p->pid, p->name);
  }
}
\end{lstlisting}
\end{enumerate}
This completes Part 3, and also concludes the Assignment.

\end{document}
\chapter{TD Prediction}

\section{Summary}

\subsection{TD prediction}
The basic formula for Monte Carlo prediction is $V(S_t):=V(S_t)+\alpha [G_t - V(S_t)]$. $G_t$ is the final return, so the update can only happen at the end of an episode. By replacing $G_t$ with $R_{t+1} + \gamma V(S_{t+1})$ we get the TD(0) method $V(S_t) := V(S_t)+\alpha [R_{t+1} + \gamma V(S_{t+1}) - V(S_t)]$. The quantity driving the TD update is called the \textbf{TD error}, $\delta_t = R_{t+1} + \gamma V(S_{t+1}) - V(S_t)$. The equivalent quantity for Monte Carlo methods is the Monte Carlo error $G_t - V(S_t)$. The Monte Carlo error can be written as a sum of TD errors, as illustrated by equation \ref{eq:monte carlo error is a sum of td errors} (proof on page 121).

\begin{equation}
G_t - V(S_t) = \sum_{k=t}^{T-1} \gamma^{k-t} \delta_k
\label{eq:monte carlo error is a sum of td errors}
\end{equation}

\subsection{TD Advantages}
\begin{enumerate}
\item No model of the environment is required
\item Naturally an online/incremental algorithm (useful with long episodes)
\item Learns from exploratory choices (some Monte Carlo methods need to discard those episodes)
\item In practice usually faster than Monte Carlo methods
\end{enumerate}

\subsection{Optimality of TD(0)}
When using batch learning (only changing the value function after a whole batch of episodes has been processed), TD(0) and Monte Carlo do not converge to the same solution. Monte Carlo finds the solution that minimizes the squared error on the dataset. TD(0) finds the parameters of the Markov process that would most likely have generated the dataset, and the values that are exactly correct for that model. This is called the certainty-equivalence estimate.

\subsection{SARSA}
SARSA stands for the quintuple $S_t, A_t, R_{t+1}, S_{t+1}, A_{t+1}$. It uses a policy to generate both $A_{t}$ and $A_{t+1}$: it updates the Q value, applies $A_{t+1}$, and then selects the next action $A_{t+2}$.

\begin{equation}
Q(S_t, A_t) := Q(S_t, A_t) + \alpha [R_{t+1} + \gamma Q(S_{t+1}, A_{t+1})-Q(S_t, A_t)]
\end{equation}

\subsection{Q-Learning}
Q-learning acts greedily when bootstrapping (the max in the update target), but acts according to its policy when choosing the action to apply to the system. In contrast to SARSA, the action used in the update target is not necessarily the action that will actually be taken next.

\begin{equation}
Q(S_t, A_t) := Q(S_t, A_t) + \alpha [R_{t+1} + \gamma \max_a Q(S_{t+1},a) - Q(S_t, A_t)]
\label{eq:Q learning update}
\end{equation}

\subsection{Difference between SARSA and Q-Learning}
SARSA acts a bit more carefully, as its update target is not greedy: it takes into account that the next action might not be the best one. Q-learning takes the riskier route, as it uses the best possible action (according to $Q(S,A)$) in its update target.

\subsection{Expected Sarsa}
Expected SARSA uses the expected value over all possible next actions $A_{t+1}$ under the policy, instead of a single sampled next action. (If this expectation is taken under a greedy target policy, Expected SARSA becomes Q-learning.) Expected SARSA works well even with $\alpha=1$, which would not work very well with classical SARSA, so its short-term behaviour is much better; the price is that each update is more computationally expensive.

\begin{equation}
Q(S_t, A_t) := Q(S_t, A_t) + \alpha [R_{t+1} + \gamma \EX[Q(S_{t+1}, A_{t+1}) \mid S_{t+1}] - Q(S_t, A_t)]
\label{eq:expected sarsa update rule}
\end{equation}
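\noindent
As a concrete reference, the update rules above can be written as small tabular routines. This sketch is mine, not the book's pseudocode; the dictionary-based tables, the action list and the $\epsilon$-greedy helper are assumptions made for the example.

\begin{verbatim}
import random
from collections import defaultdict

def eps_greedy(Q, state, actions, eps=0.1):
    # Behaviour policy used by all three control methods.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def td0_update(V, s, r, s2, alpha=0.5, gamma=1.0):
    # TD(0) prediction: bootstrap from the estimated value of the next state.
    V[s] += alpha * (r + gamma * V[s2] - V[s])

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.5, gamma=1.0):
    # On-policy: the target uses the action that will actually be taken next.
    Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])

def q_learning_update(Q, s, a, r, s2, actions, alpha=0.5, gamma=1.0):
    # Off-policy: the target is greedy, whatever the behaviour policy does next.
    best = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])

def expected_sarsa_update(Q, s, a, r, s2, actions, eps=0.1, alpha=0.5, gamma=1.0):
    # Target is the expectation of Q over the eps-greedy policy in s2.
    greedy = max(actions, key=lambda a2: Q[(s2, a2)])
    probs = {a2: eps / len(actions) for a2 in actions}
    probs[greedy] += 1.0 - eps
    expectation = sum(probs[a2] * Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * expectation - Q[(s, a)])

# Tables are plain dictionaries, e.g. V = defaultdict(float), Q = defaultdict(float).
\end{verbatim}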
\subsection{Double learning}
Equation~\ref{eq:Q learning update} uses a max over estimated action values. If some of these estimates are too high, the max will pick them up, which biases the updates (maximisation bias) and can result in bad behaviour. Double learning reduces the odds of this happening by using two estimates, $Q_1(S,A)$ and $Q_2(S,A)$: one is used to find the maximising action, and the other to estimate its value (equation~\ref{eq:estimation double learning}). It is much less likely that both estimates overestimate the same action.

\begin{equation}
Q_2\big(S, \argmax_a Q_1(S,a)\big)
\label{eq:estimation double learning}
\end{equation}

It is good practice to constantly swap the roles of $Q_1$ and $Q_2$ in equation~\ref{eq:estimation double learning}, for example at random with 50/50 odds.
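\noindent
A minimal sketch of the resulting double Q-learning update (again my own illustration, not the book's pseudocode, using the same dictionary-based tables as before):

\begin{verbatim}
import random

def double_q_update(Q1, Q2, s, a, r, s2, actions, alpha=0.5, gamma=1.0):
    # Swap the roles of the two estimates at random (50/50).
    if random.random() < 0.5:
        Q1, Q2 = Q2, Q1
    # Q1 selects the maximising action, Q2 supplies its value estimate.
    best = max(actions, key=lambda a2: Q1[(s2, a2)])
    Q1[(s, a)] += alpha * (r + gamma * Q2[(s2, best)] - Q1[(s, a)])
\end{verbatim}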
\section{Exercises}

\subsection{Exercise 6.1}
\begin{equation}
V_{t+1}(s_{t}) = \alpha [R_{t+1} + \gamma V_t(s_{t+1})-V_t(s_t)] + V_t(s_t)
\label{eq:difference value function update}
\end{equation}

The difference between the value function at time $t$ and $t+1$ is defined by equation~\ref{eq:difference value function update}. The equality $G_t = R_{t+1} + \gamma G_{t+1}$ still holds. However, the Monte Carlo error is now slightly different in every iteration: $G_t - V_t(s_t)$ becomes $G_{t+1} - V_{t+1}(s_{t+1})$ in the next iteration, as the value function now changes at iteration $t$, by a difference of $d_t = \alpha [R_{t+1} + \gamma V_t(s_{t+1})-V_t(s_t)]$.

\begin{equation}
G_{t+1} - V_t(S_{t+1}) = G_{t+1} - V_{t+1}(S_{t+1})-d_{t+1}
\label{eq:single iteration difference}
\end{equation}

\begin{equation}
error = -\sum_{k=t+1}^{T-1} \gamma^{k-t} d_{k-1}
\label{eq:ex_6_1_difference}
\end{equation}

In conclusion, the additional term is given by equation~\ref{eq:ex_6_1_difference}.

\subsection{Exercise 6.2}
If (as in the hint's example) a part of the state space is already well estimated, then the TD predictions will be very good as soon as you enter those states, and whenever your path ends in one of those states. You only get weaker predictions while you are in the unexplored part. The Monte Carlo approach would still need to propagate complete returns through the already well-estimated part, which is rather slow.

\subsection{Exercise 6.3}
The change in the value function is given by $\alpha [R_{t+1} + \gamma V_t(s_{t+1})-V_t(s_t)] = 0.1[0 + 0 - 0.5]=-0.05$, since the episode ends on the left terminal state, so $R_{t+1} = 0$ and $V_t(s_{t+1}) = 0$, with $\alpha=0.1$ and $V_t(A)=0.5$.

\subsection{Exercise 6.4}
The TD algorithm overshoots when $\alpha>0.05$; we could try to make $\alpha$ a bit smaller, but at $\alpha=0.05$ the curve already seems to flatten out nicely, so I would not expect better results. It is a similar story with the MC method: at $\alpha=0.02$ we get a nice flat tail. It is not as clear as with the TD method, but that is due to the larger variance of the MC method. So no, I would not expect any change in the results if more runs were done with different values of $\alpha$.

\subsection{Exercise 6.5}
The step size is too large, so TD cannot settle on the optimal values but keeps over- and under-estimating them every time it runs through an episode.

\subsection{Exercise 6.6}
You set up the Bellman equations and then pick a method to solve them. As this is a rather simple example, you can just solve the system by hand.

\begin{equation}
\begin{split}
V(A) = 0.5 V(B)\\
V(B) = 0.5 V(A) + 0.5 V(C)\\
V(C) = 0.5 V(B) + 0.5 V(D)\\
V(D) = 0.5 V(C) + 0.5 V(E)\\
V(E) = 0.5 V(D) + 0.5\\
\end{split}
\end{equation}

This seems like the simplest way to do it, as the problem is small.

\subsection{Exercise 6.7}
The normal on-policy TD(0) update looks like $V(S_t) := V(S_t) + \alpha[R_{t+1}+\gamma V(S_{t+1}) - V(S_t)]$. I would expect that $\alpha=\frac{\rho}{\sum_t \rho_t }$, as the update then becomes a weighted average, due to the importance sampling.

\subsection{Exercise 6.8}
todo, not hard, but a bit of bookkeeping to be done.

\subsection{Exercise 6.11}
In Q-learning, the actions that are applied to the system are selected by an $\epsilon$-greedy behaviour policy, but they are not used in the update target, which takes a max over $Q$. Learning about a (greedy) target policy that differs from the behaviour policy is, by definition, off-policy control.

\subsection{Exercise 6.12}
It would be nearly the same: SARSA selects the next action before updating Q, while Q-learning selects it after the update. So the update of Q might make them select different actions in some cases.

\subsection{Exercise 6.13}
todo

\subsection{Exercise 6.14}
todo
\chapter{Sound}

When you set off a firecracker, it makes sound. Let's break that down a little
more: Inside the cardboard wrapper of the firecracker, there is potassium
nitrate ($KNO_3$), sulfur ($S$), and carbon ($C$). These are all solids. When
you trigger the chemical reactions with a little heat, these atoms rearrange
themselves to be potassium carbonate ($K_2CO_3$), potassium sulfate
($K_2SO_4$), carbon dioxide ($CO_2$), and nitrogen ($N_2$). Note that the last
two are gases.

The molecules of a solid are much more tightly packed than the molecules of a
gas. So after the chemical reaction, the molecules expand to fill a much bigger
volume. The air molecules nearby get pushed away from the firecracker. They
compress the molecules beyond them, and those compress the molecules beyond
them. This compression wave radiates out as a sphere, its radius growing at
about 343 meters per second (``the speed of sound'').

The energy of the explosion is distributed around the surface of this sphere.
As the radius increases, the energy is spread more and more thinly around. This
is why the firecracker seems louder when you are closer to it. (If you set off
a firecracker in a sewer pipe, the sound will travel much, much farther.)

This compression wave will bounce off hard surfaces. If you set off a
firecracker 50 meters from a big wall, you will hear the explosion twice. We
call the second one ``an echo.'' The compression wave will be absorbed by soft
surfaces. If you covered that wall with pillows, there would be almost no echo.

The study of how these compression waves move and bounce is called
\newterm{acoustics}. Before you build a concert hall, you hire an acoustician
to look at your plans and tell you how to make it sound better.

\section{Pitch and frequency}

The string on a guitar is very similar to the weighted spring example. The
farther the string is displaced, the more force it feels pushing it back to
equilibrium. Thus, it moves back and forth in a sine wave. (OK, it isn't a pure
sine wave, but we will get to that later.)

The string is connected to the center of the boxy part of the guitar, which is
pushed and pulled by the string. That creates compression waves in the air
around it. If you are in the room with the guitar, those compression waves
enter your ear and push and pull your ear drum, which is attached to bones that
move a fluid that tickles tiny hairs, called \newterm{cilia}, in your inner
ear. That is how you hear.

We sometimes see plots of sound waveforms. The $x$-axis represents time. The
$y$-axis represents the amount the air is compressed at the microphone that
converted the air pressure into an electrical signal.

\includegraphics[width=0.8\linewidth]{soundwave.png}

If the guitar string is made tighter (by the tuning pegs) or shorter (by the
guitarist's fingers on the strings), the string vibrates more times per second.
We measure the number of waves per second and we call it the
\newterm{frequency} of the tone. The unit for frequency is \newterm{Hertz}:
cycles per second.

Musicians have given the different frequencies names. If the guitarist plucks
the lowest note on his guitar, it will vibrate at 82.4 Hertz. The guitarist
will say ``That pitch is low E.'' If the string is made half as long (by a
finger on the 12th fret), the frequency will be twice as high (164.8 Hertz),
and the guitarist will say ``That is E an octave up.'' For any note, the note
that has twice the frequency is one octave up. The note that has half the
frequency is one octave down.
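A tiny check of that relationship (this snippet is not part of the chapter's example files):

\begin{Verbatim}
low_e = 82.4               # Hz, the lowest note on the guitar

octave_up = low_e * 2.0    # 164.8 Hz: E, one octave up (half the string length)
octave_down = low_e / 2.0  # 41.2 Hz: E, one octave down

print(octave_up, octave_down)
\end{Verbatim}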
The octave is a very big jump in pitch, so musicians break it up into 12
smaller steps. If the guitarist shortens the E string by one fret, the
frequency will be $82.4 \times 1.059463 \approx 87.3$ Hertz. Shortening the
string one fret always increases the frequency by a factor of 1.059463. Why?
Because $1.059463^{12} \approx 2$; the factor is $\sqrt[12]{2}$. That is, if
you take 12 of these hops, you end up an octave higher. This, the smallest hop
in western music, is referred to as a \newterm{half step}.

\begin{Exercise}[title={Notes and frequencies}, label=note_to_frequency]
The note A near the middle of the piano is 440 Hz. The note E is 7 half steps
above A. What is its frequency?
\end{Exercise}
\begin{Answer}[ref=note_to_frequency]
A is 440 Hz. Each half step is a multiplication by $\sqrt[12]{2} =
1.059463094359295$. So the frequency of E is $(440)(2^{7/12}) \approx 659.255$ Hz.
\end{Answer}

\section{Chords and harmonics}

Of course, a guitarist seldom plays only one string at a time. Instead, he uses
the frets to pick a pitch for each string and strums all six strings. Some
combinations of frequencies sound better than others. We have already talked
about the octave: if one string vibrates twice for each vibration of another,
they sound sweet together. Musicians speak of ``the fifth'': if one string
vibrates three times while the other vibrates twice in the same amount of time,
they sound sweet together. If one string vibrates 4 times while the other
vibrates 3 times, they sound sweet together. Musicians call this ``the
fourth.''

Each of these different frequencies tickles different cilia in the inner ear,
so you are able to hear all six notes at the same time when the guitarist
strums his guitar.

When a string vibrates, it doesn't create a single sine wave. Yes, the string
vibrates from end to end, and this generates a sine wave at what we call
\newterm{the fundamental frequency}. However, there are also ``standing
waves'' on the string. One of these standing waves is still at the centerpoint
of the string, but everything to the left of the centerpoint is going up when
everything to the right is going down. This creates \newterm{an overtone} that
is twice the frequency of the fundamental.

\begin{tikzpicture}[
    tl/.style = {% tick labels
        fill=white,
        inner sep=1pt,
        font=\scriptsize,
    },
  ]
  \draw[dashed,draw=black, domain=-0:6.283,samples=300,variable=\x] plot (\x,{0.7 * sin(deg{\x}/2)});
  \draw[thick,draw=black, domain=0:6.283,samples=300,variable=\x] plot (\x,{0.7 * sin(deg{-1 * \x}/2)});
  \draw[dashed,draw=black, domain=-0:6.283,samples=300,variable=\x] plot (\x,{0.2 * sin(deg{\x})});
  \draw[thick,draw=black, domain=0:6.283,samples=300,variable=\x] plot (\x,{0.2 * sin(deg{-1 * \x})});
  \filldraw[black] (0, 0) circle(3pt);
  \filldraw[black] (6.283, 0) circle(3pt);
\end{tikzpicture}

The next overtone has two still points -- it divides the string into three
parts. The outer parts are up while the inner part is down. Its frequency is
three times the fundamental frequency.
\begin{tikzpicture}[
  tl/.style = {% tick labels
    fill=white,
    inner sep=1pt,
    font=\scriptsize,
  },
]
  \draw[dashed,draw=black, domain=-0:6.283,samples=300,variable=\x] plot (\x,{0.7 * sin(deg{\x}/2)});
  \draw[thick,draw=black, domain=0:6.283,samples=300,variable=\x] plot (\x,{0.7 * sin(deg{-1 * \x}/2)});
  \draw[dashed,draw=black, domain=-0:6.283,samples=300,variable=\x] plot (\x,{0.2 * sin(1.5 * deg{\x})});
  \draw[thick,draw=black, domain=0:6.283,samples=300,variable=\x] plot (\x,{0.2 * sin(1.5 * deg{-1 * \x})});
  \filldraw[black] (0, 0) circle(3pt);
  \filldraw[black] (6.283, 0) circle(3pt);
\end{tikzpicture}

And so on: 4 times the fundamental, 5 times the fundamental, etc. In general, tones with a lot of overtones tend to sound bright. Tones with just the fundamental sound thin.

Humans can generally hear frequencies from 20 Hz to 20,000 Hz (or 20 kHz). Young people tend to be able to hear very high sounds better than older people. Dogs can generally hear sounds in the 65 Hz to 45 kHz range.

\section{Making waves in Python}

Let's make a sine wave and add some overtones to it. Create a file \filename{harmonics.py}:

\begin{Verbatim}
import matplotlib.pyplot as plt
import math

# Constants: frequency and amplitude
fundamental_freq = 440.0  # A = 440 Hz
fundamental_amp = 2.0

# Up an octave
first_freq = fundamental_freq * 2.0  # Hz
first_amp = fundamental_amp * 0.5

# Up a fifth more
second_freq = fundamental_freq * 3.0  # Hz
second_amp = fundamental_amp * 0.4

# How much time to show
max_time = 0.0092  # seconds

# Calculate the values 100,000 times per second
time_step = 0.00001  # seconds

# Initialize
time = 0.0
times = []
totals = []
fundamentals = []
firsts = []
seconds = []

while time <= max_time:
    # Store the time
    times.append(time)

    # Compute the value of each harmonic
    fundamental = fundamental_amp * math.sin(2.0 * math.pi * fundamental_freq * time)
    first = first_amp * math.sin(2.0 * math.pi * first_freq * time)
    second = second_amp * math.sin(2.0 * math.pi * second_freq * time)

    # Sum them up
    total = fundamental + first + second

    # Store the values
    fundamentals.append(fundamental)
    firsts.append(first)
    seconds.append(second)
    totals.append(total)

    # Increment time
    time += time_step

# Plot the data
fig, ax = plt.subplots(2, 1)

# Show each component (the labels give the legend something to display)
ax[0].plot(times, fundamentals, label="fundamental")
ax[0].plot(times, firsts, label="first overtone")
ax[0].plot(times, seconds, label="second overtone")
ax[0].legend()

# Show the totals
ax[1].plot(times, totals)
ax[1].set_xlabel("Time (s)")
plt.show()
\end{Verbatim}

When you run it, you should see a plot of all three sine waves and another plot of their sum:

\includegraphics[width=0.9\linewidth]{harmonicspy.png}

\subsection{Making a sound file}

The graph is pretty to look at, but let's make a file that we can listen to. The WAV audio file format is supported on pretty much any device, and a library for writing WAV files comes with Python. Let's write some sine waves and some noise into a WAV file.
Create a file called \filename{soundmaker.py}:

\begin{Verbatim}
import wave
import math
import random

# Constants
frame_rate = 16000                  # samples per second
duration_per = 0.3                  # seconds per sound
frequencies = [220, 440, 880, 392]  # Hz
amplitudes = [20, 125]
baseline = 127  # Values will be between 0 and 255, so 127 is the baseline
samples_per = int(frame_rate * duration_per)  # number of samples per sound

# Open a file
wave_writer = wave.open('sound.wav', 'wb')

# Not stereo, just one channel
wave_writer.setnchannels(1)

# 1 byte audio means everything is in the range 0 to 255
wave_writer.setsampwidth(1)

# Set the frame rate
wave_writer.setframerate(frame_rate)

# Loop over the amplitudes and frequencies
for amplitude in amplitudes:
    for frequency in frequencies:
        time = 0.0

        # Write a sine wave
        for sample in range(samples_per):
            s = baseline + int(amplitude * math.sin(2.0 * math.pi * frequency * time))
            wave_writer.writeframes(bytes([s]))
            time += 1.0 / frame_rate

        # Write some noise after each sine wave
        for sample in range(samples_per):
            s = baseline + random.randint(0, 15)
            wave_writer.writeframes(bytes([s]))

# Close the file
wave_writer.close()
\end{Verbatim}

When you run it, it should create a sound file with several tones of different frequencies and volumes. Each tone should be followed by some noise.
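If you want to check the file without opening it in an audio player, the same wave module can read it back. Here is a small sketch (it assumes \filename{sound.wav} is in the directory you ran \filename{soundmaker.py} from) that prints the parameters we just set:

\begin{Verbatim}
import wave

# Open the file we just created and report its parameters
wave_reader = wave.open('sound.wav', 'rb')

print("channels:", wave_reader.getnchannels())       # should be 1
print("sample width:", wave_reader.getsampwidth())   # should be 1 byte
print("frame rate:", wave_reader.getframerate())     # should be 16000
print("frames:", wave_reader.getnframes())

# Duration in seconds = frames / frames per second
print("duration:", wave_reader.getnframes() / wave_reader.getframerate())

wave_reader.close()
\end{Verbatim}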
\documentclass{article}
\usepackage[utf8]{inputenc}

\title{PS5}
\author{Amir Tayebi}
\date{\today}

\begin{document}

\maketitle

\section{Web Scraping without an API}
As I mentioned to you earlier this semester, I need to extract all of the censorship events from the Beacon for Freedom of Expression data set. What I did for this homework is not exactly what I want, since it just extracts 10 records; I need to create a loop later to export all the data. I also made use of a tutorial that I found online.

\section{Web Scraping with an API}
I scraped some data from the votesmart.org website. VoteSmart is a nonprofit organization that gathers detailed data on elections, candidates, politicians, etc. I need all the biographical data on politicians for my research. What I did for this assignment was just to get some biographical data for two politicians, and all the candidates for two elections.

\end{document}
\documentclass{ecnreport}

\stud{Master 1 CORO / Option Robotique}
\topic{Robot Operating System}
\author{G. Garcia, O. Kermorgant}

\begin{document}

\inserttitle{Robot Operating System}
\insertsubtitle{Lab 4: Image topics}

\section{Goals}

In this lab you will practice image subscription and publishing in ROS. You will program a ROS node that subscribes to one of the robot's camera images and, after overlaying your names onto the image, re-publishes it so that it can be visualized on Baxter's screen.

\section{Deliverables}

After validation, the whole package should be zipped and sent by mail (G. Garcia) or through the lab upload form (O. Kermorgant).

\section{Tasks}

Start by creating a ROS package (\texttt{catkin create pkg}) with dependencies on \texttt{baxter\_core\_msgs}, \texttt{sensor\_msgs} and \texttt{ecn\_common}. Then modify \texttt{package.xml} and \texttt{CMakeLists.txt} to add the dependency on \texttt{cv\_bridge} by hand.

\subsection{First task: Baxter's camera images}

\begin{itemize}
\item Display the images in RViz. What are the names of the image topics?
\item Use the \texttt{image\_view} package to display images coming from the left and right arm cameras.
\item Search for the name of the subscribed topic from Baxter that allows displaying images on its screen.
\end{itemize}

\subsection{Second task: Programming an image subscriber}

In this task you will create a ROS node that subscribes to the \texttt{image\_in} topic and displays it on your screen (with OpenCV's \texttt{imshow}).\\
Run it through a launch file with the correct topic remapping in order to display either the left or the right image.

\subsection{Third task: Programming an image republisher}

Now that you know how to subscribe to an image topic, add a text parameter to your node. You then have to grab the current image, write the text parameter on it (see the corresponding OpenCV function) and publish the resulting image on the \texttt{image\_out} topic.\\
Run this node through a launch file with the correct topic remapping and parameter in order to see your text on Baxter's screen.

\section{Information}

Tutorials can be found online about \emph{image\_transport} and \emph{cv\_bridge}.

\subsection{Avoiding conflicts between controllers}

During this lab, all the groups will try to display images on Baxter's screen and thus it is not advised that you all run your nodes at the same time. In order to avoid this, a tool is provided in the \texttt{ecn\_common} package that will have your node wait for the availability of Baxter. The code is explained below:

\paragraph{C++: } The token manager relies on the class defined in \texttt{ecn\_common/token\_handle.h}. It can be used this way:
\cppstyle
\begin{lstlisting}
#include <ecn_common/token_handle.h>
// other includes and function definitions

int main(int argc, char** argv)
{
    // initialize the node
    ros::init(argc, argv, "my_node");

    ecn::TokenHandle token;  // this constructor will return only when Baxter is available

    // initialize other variables, including the loop rate
    ros::Rate loop(10);

    // begin main loop
    while(ros::ok())
    {
        // do stuff

        // tell Baxter that you are still working and spin
        token.update();
        loop.sleep();
        ros::spinOnce();
    }
}
\end{lstlisting}

\paragraph{Python: } The token manager relies on the class defined in \texttt{ecn\_common.token\_handle}. It can be used this way:
\pythonstyle
\begin{lstlisting}
#!/usr/bin/env python
from ecn_common.token_handle import TokenHandle
# other imports and function definitions

# initialize the node
rospy.init_node('my_node')

token = TokenHandle()  # this constructor will return only when Baxter is available

# initialize other variables

# begin main loop
while not rospy.is_shutdown():
    # do stuff

    # tell Baxter that you are still working and spin
    token.update()
    rospy.sleep(0.1)
\end{lstlisting}

With this code, two groups can never control Baxter at the same time. When the controlling group ends their node (either from Ctrl-C or because of a crash), the token passes to the group that has been asking for it the longest.

Remind the supervisor that they should have a \texttt{token\_manager} running on some computer in the room.

\end{document}
\documentclass[11pt,a4paper]{article} \usepackage{od,wrapfig} \usepackage[utf8]{inputenc} \usepackage[main=english,russian]{babel} \usepackage{tikz} \usetikzlibrary{shapes.misc, arrows.meta} \usetikzlibrary{arrows, decorations.markings} \newcommand{\bbox}[2]{\parbox{#1cm}{\small\centering #2}} \tikzstyle{vecArrow} = [thick, decoration={markings,mark=at position 1 with { \arrow[semithick]{open triangle 60} } }, double distance=1.4pt, shorten >= 5.5pt, preaction = {decorate}, postaction = {draw,line width=1.4pt, white,shorten >= 4.5pt} ] \title{Human and Technical System} \author{Nikolay Shpakovsky, Minsk} \date{January 20, 2003} \begin{document} \maketitle \begin{quote} The original Russian text consists of two parts, which are combined here. Part 1: \url{http://www.gnrtr.ru/Generator.html?pi=201&cp=3}\\ Part 2: \url{http://www.gnrtr.ru/Generator.html?pi=200&cp=3} \end{quote} \section*{Is a human a part of a Technical System or not?} \begin{flushright} ... the last words of the book of the prophet Lustrog read:\\ «all true believers break eggs from whichever end is more convenient».\\ Jonathan Swift «Gulliver's Travels» \end{flushright} \section*{Introduction} The Theory of Inventive Problem Solving (TRIZ), developed by the talented engineer, inventor and ingenious thinker G.S. Altshuller, is widely known and, undoubtedly, the most effective tool for solving engineering problems at present time. A large number of materials have been published in Russian and English languages, in which the essence of the theory is quite fully revealed for an initial acquaintance with her. The best Russian-language resource is the Minsk website center OTSM-TRIZ\footnote{\url{http://www.trizminsk.org}}, the best English-speaking is the American TRIZ Journal\footnote{\url{http://www.triz-journal.com}}. Having studied TRIZ from books and articles, you can easily teach others -- the material is so rich and fascinating that interest in the classes will be ensured. However, for a deeper understanding of TRIZ, one has to think through the material, first of all, of the concepts and terms of TRIZ. This is required, since most of the TRIZ material is presented for further reflection, and not set up for simple memorization. During my work for SAMSUNG as a TRIZ consultant, I had anew and seriously to rethink everything that I knew about TRIZ before. When solving technical tasks, bypassing patents of competing companies and developing a development forecast of technical systems, it was very important to understand the deep content of each TRIZ term in order to use its tools with maximum efficiency. One of the basic concepts in TRIZ and one of the most important links to all, without exception, of its tools is the concept of a «Technical System». This term is introduced in classical TRIZ without definition, as a derivative of the concept of a «System». But with closer examination, it becomes clear that this concept «Technical System» requires further specification. This statement is supported, for example, by semantic aspects. The concept of «Technical System» is translated from Russian into English in two ways: «Technical System» and «Engineering System». Using any search engine on the Internet, it is easy to convince yourself that these concepts are practically equal for TRIZ specialists. Or take, for example, the glossary of Victor Fey\footnote{\url{http://www.triz-journal.com/archives/2001/03/a/index.htm}}, in which there is simply no explanation of either one or the other concepts. 
In this article, I tried to describe my understanding of the term «Technical System», gradually developed after the solution of a specific problem, which challenged me to find out the full composition of a minimum operational technical system. \section*{An attempt to analyze the concept «Technical System»} First, let's consider what a general system is. There are many different definitions of a system. The most dashing, abstract, therefore absolutely exhaustive, but with little use for practical purposes was given by B.R. Gaines [1]: \textbf{«The system is what we define as a system»}. In practice, the most often used definition of a system is due to A. Bogdanov [2]: \textbf{«A system is a set of interconnected elements with a common (systemic) property that is not reducible to the properties of these elements»}. What is a «Technical System»? Unfortunately, G. Altshuller did not dierectly define the concept of a «Technical System». It is clear from the context that this is some kind of system related to technology, to technical objects. As indirect definition of a Technical System (TS) can serve the three laws formulated by him, or rather, three conditions that should be satisfied for its existence [3]: \begin{itemize}[noitemsep] \item[1.] The law of completeness of the parts of a system. \item[2.] The law of «energy conductivity» of the system. \item[3.] The law of harmonization of the rhythms of the parts of the system. \end{itemize} According to the law of completeness of the parts of a system, each TS includes at least four parts: engine, transmission, working body and control system. \begin{center} \begin{tikzpicture}[ >={Triangle[length=3pt 9, width=3pt 3]}, rounded corners=2pt] \draw[fill=yellow] (1.2,1.5) -- (8.8,1.5) -- (8.8,4.5) -- (1.2,4.5) -- (1.2,1.5) ; \node[draw] at (5,3.5) [rectangle] (A0) {\bbox{2}{Control System}}; \node[draw] at (2.5,2) [rectangle] (A1) {\bbox{2}{Engine}}; \node[draw] at (5,2) [rectangle] (A2) {\bbox{2}{Transmission}}; \node[draw] at (7.5,2) [rectangle] (A3) {\bbox{2}{Working Organ}}; \node[draw] at (12,2) [rectangle] {\bbox{2}{Processed Object}}; \draw (0,2.5) node {\bbox{2}{Energy Resources}}; \draw (9.7,2.3) node {{\small Action}}; \draw[->] (A0) -| (A1) ; \draw[->] (A0) -- (A2) ; \draw[->] (A0) -| (A3) ; \draw[vecArrow] (-1,2) -- (1,2) ; \draw[vecArrow] (8.9,2) -- (10.7,2) ; \end{tikzpicture}\\ Minimal structure of a technical system capable to work according to G. Altshuller. \end{center} That is, there is some kind of system, a machine consisting of technical objects, subsystems that can perform the required function. It includes a working body, transmission and engine. Everything governing the action of this machine is placed in the «control system» or a not well understood «cybernetic part» [4]. The important thing here is the understanding that the TS is created to perform some function. Probably, this should be understood in such a way that a minimally TS capable to work can perform this function at any time, without additional supplementation. Ways to the definition of a Technical System are given in the book «Search for New Ideas» [5], where the definition of a «Developing Technical System» is given. This question is touched by V. Korolev in his interesting studies [6,7]. Some critical remarks are devoted to this topic also in the materials of N. Matvienko [8]. The definition of the concept of a «Technical System» in relation to TRIZ is given by Yu. 
Salamatov in [9]: \begin{quote}\bf A Technical System is a set of orderly interacting elements, which has properties that can not reduced to properties of individual elements and intended to perform certain useful functions. \end{quote} Indeed, a human has some kind of needs, for the satisfaction of which it is necessary to perform some function. Hence you need somehow to organize a system, performing this function -- the Technical System -- and satisfy the need. What is confusing in the above definition of a Technical System? The word «intended» is not quite clear. Probably, it's not someone's wishes that are important here, but the objective ability to perform the required function. \begin{quote}\it For example, what is a metal cylinder for with an axial hole with variable diameter and threaded at one end? It is almost impossible to answer such a question. The discussion is immediately switching to the level of the question «where it could be applied?». \end{quote} But is it possible, using this definition, to say: Until now this is not yet a Technical System, and from now on -- it is? It is written like this: «... the TS appears, as soon as the technical object acquires the ability to perform the Main Useful Function without a human.» And then it is claimed that one of the trends in the development of the TS is the removal of the human from its parts. This means that at some stage of TS development, a human is part of it. Or not? Unclear ... \begin{quote}\it We probably won't understand anything if we don't find an answer to the question: is the human part of a Technical System or not? \end{quote} Having interviewed several TRIZ experts, I received a fairly wide range of answers: from a firm «No», backed up by references to big experts, to a timid «yes, probably». The most original of the answers: when the car moves evenly and straight -- the human is not part of this technical system, but once a car begins to turn, then the human immediately becomes a necessary and useful part of it. What's in our literature? Salamatov [9, Section 4.3] gives an example that a man with a hoe is not a TS. Moreover, the hoe itself is not a Technical System. But a bow is a TS. But what is the difference between a hoe and a bow? The bow has an energy accumulator -- string and flexible rod, in a good hoe, too, when swinging, the handle bends and when moving down increases the force of the blow. It bends a little, but it's about the principle. The bow work in two movements: first it is cocked, then released, with the hoe -- too. Why then such an injustice? Let's try to figure it out. A sharpened wooden stick is a Technical System? Does not look like it. And an automatic pen? This is probably a TS, and a quite complex one. And what about a printer? Undoubtedly TS. And a pencil? Who knows ... It seems like neither this nor that. Maybe call it «simple Technical System»? Lead or silver writing stick? Question ... Already not a splinter of wood, after all -- a precious metal, but it is still far from the pen. A modern capillary pen, a pencil, a sharpened stick and the writing unit of a printer -- what do they have in common? Some useful function that they, in principle, could perform: «leave a mark on the surface». «Lanky Timoshka is running along a narrow path. His traces is your labor». Do you remember? This is a pencil. And also a stick, lead or silver stylus, pen, felt-tip pen, printer, printing press. What a set! And the row is logical ... However, here again a question arises. 
If all these objects can perform the same function, then they all are Technical Systems. And there is no need to divide them into complex and primitive ones. If objects implement the same functions, then they have not only the same purpose, but also the level of hierarchy should be the same. Or vice versa -- these are all not TS. Well, what a Technical System -- a sharpened stick? Where is its engine or transmission? But then it turns out that the printer is also not a TS. Let's be formal. Any Technical System must perform some useful function. Can the sharpened stick fulfill its function? No. And the printer? Let's do a simple experiment. Place the pen on the table. Or, for simplicity, on the paper. Let’s just wait until it begins to perform its main useful function. Does not. And it will not perform until a human, the operator, takes it in his hand, does attach it to a sheet of paper, and «... the verses will flow freely». And the printer? Will he start typing until the user gives a command to the computer, and this one in its turn, does forward it to the printer? That is, without pressing a button, a voice or, in perspective, a mental command, the action will not happen. Thus, the following is obtained. A pen, a hoe, a printer, a bicycle are not TS. More precisely, not complete TS. They are simply «systems of technical objects». Without a human, an operator, they cannot work, i.e. cannot fulfill their function. Of course, in principle -- can, but in reality ... In the same way, four wheels, a body and a hood can't nothing transport ... Even a fully equipped brand new car, refueled lonely, with key in the ignition lock, is not a Technical System, but simply a «system of technical objects». If the operator will sit down on his place, in common language, the driver, takes rushes behind the wheel, and immediately the car becomes a Technical System. And all others technical objects and systems become complete TS and operate only and exclusively together with a human, the operator. The operator can sit inside the «system of technical objects». Can stand near it, farther or closer. Can even program the action of the Technical System, turn it on and leave. But in any case -- the operator must participate in the TS management. And no reason to oppose the spaceship and the hoe. Both the first and the second -- this is a greater or lesser part of a certain TS, which for normal execution of the main useful function must be supplemented with one or more operators. Let us recall the law of completeness of the parts of a system, formulated by G.S. Altshuller. A TS arises when all four parts are present (Fig. 1), and each of them should be minimally capable to work. If at least one part is missing, then this is not a Technical System. It is also not a TS if one of the four parts is not working. It turns out that the Technical System is something that should be completely ready for immediate fulfillment of its main useful function without additional completing. Like a ship that is ready to cast off. Everything is filled up, charged up and the entire crew on their places. And without human, the control system is not only not «minimally capable to work», but not capable to work in principle, since it is not staffed. The law of completeness of the parts of the system is not fulfilled. And the law of energy conductivity is not fulfilled. There is a signal going into the control system, and -- stop. There is no reverse flow of energy. 
And what about those «Technical Systems» that successfully perform their useful function, but do not contain technical objects at all? For example, the electrician changing a light bulb ... It seems that there is such a special level of the hierarchy at which a collection of objects, elements turns into an actual Technical System. This is the level of a car with a driver, a video camera with operator, a pen with a writer, an automated production in a water complex with operators who start and maintain it, etc. This is the level at which the system is formed: a set of natural and technical objects, a human operator and his actions, who is performing some kind of a directly useful for humans function. It is interesting to see how the hierarchy of biological objects and systems is built. Molecules, cells, elements, parts of organisms -- this are the levels of subsystems. A «Subsystem» is a separate part of a body, for example, the skeleton of an elephant, the sting of a mosquito or a feather of a titmouse. The sum of such subsystems, even their complete set, an organism entirely assembled from them, cannot perform useful functions. You need to add something else to this «set», breathe in a «spark of God» to get a living, functioning organism. \begin{center} \includegraphics[width=.6\textwidth]{mts-2.png} \end{center} Living organisms, individuals, can be combined into a supersystem. A «supersystem» is more or a less an organized collection of animals or plants, such as a bee family. But such a sharp qualitative leap does not occur here. By analogy with biological systems, the concept of a «Technical System» can be considered as a special level of the hierarchy, at which the system gets the opportunity to act independently, i.e. at the level of a living organism. In other words, the «Technical System» in technology corresponds to the level of a living organism in nature. In a patent application, this is called "machine in operation". That is, «the system of technical objects» plus a human operator. For example, a carburetor is not a TS, but simply a system, a set of technical objects. But the human (operator), knocking with the carburetor on a nut is a TS with a useful function: to peel nuts from the shell. So a man with a hoe is a TS, but a tractor with a plow is not. Paradox ... \begin{quote}\it «Human» -- what is this applied to a Technical System? What is here difficult for understanding? \end{quote} The confusion is probably caused by the very wording of the question. It is psychologically difficult to put a human and a shoe brake on the same level. There is no doubt that human, as part of the technosphere, has the most direct relation to any TS and can be in relation to it in the following role situations: \emph{In the supersystem:} \begin{itemize}[noitemsep] \item[1.] As user. \item[2.] As developer. \item[3.] As manufacturer of the technical objects of the system. \item[4.] As person providing maintenance, repair and disposal of equipment system objects. \end{itemize} \emph{In the system:} \begin{itemize}[noitemsep] \item[1.] As operator, the main element of the control system. \item[2.] As source of energy. \item[3.] As engine. \item[4.] As transmission. \item[5.] As working body. \item[6.] As the processed object. \end{itemize} \emph{In the environment:} \begin{itemize}[noitemsep] \item[1.] As element of the environment. \end{itemize} The user is undoubtedly the main person. It is he who pays for the creation of the TS, at his will, developers and manufacturers get down to business. 
He pays for the operator's labor, maintenance, repair and disposal of technical objects of the system. The second group of persons ensures the functioning of the TS during work, feels its impact on itself. The third group indirectly helps or hinders this process, or simply observes gives behind it and is exposed to the side effects that occur during operation. A person can fulfill several roles at the same time. For example, the driver owning the car or a person using an inhaler. Or a bicyclist. He is an element of almost all bicycle subsystems, except for the working body (seat) and transmission (wheels and bike frame). \emph{Still, it turns out that a human is an obligatory part of the Technical System}. It seems what does it matter. After all, how it comes down to it, to the solution of real engineering tasks, then human quickly leaves the problem zone and has to work at the level of subsystems. Yes, but only in those places where the coordination and passage of energy is carried out between subsystems not connected in any way with the operator. But if we come closer to the control system the problem of human interaction with technical objects grows up in full size. Take a car, for example. The car acquired its current appearance by the end of the 1970s, when airbags and a reliable automatic transmission were invented. Most of the improvements since then are aimed only to improve management, safety, ease of maintenance and repair, i.e., on the interaction of a human, the main part of the TS, with its other parts. The truck of the 1940-50s had a steering wheel with a diameter of 80 cm. The driver must be very strong to drive such a car. And in aviation ... the giant airplane of the 1930s «Maxim Gorky». To perform a maneuver, at the contol stick the first and second pilot had to pull together. Sometimes they called the navigator for help and the rest of the crew. Nowadays the operator with the help of amplifiers can control much more loaded mechanisms. It seems the problem has been solved. But no, again often the human is forgotten ... The fact is that amplifiers do not always allow the operator to fully feel the behavior of the controlled mechanism. This sometimes leads to accidents. For example, the problem of the safety of driving a car or the more «monotonous» locomotive management. It is very important that the operator is always in an alert, workable state. This problem is also solved in the supersystem -- causes we fall asleep while driving are removed, medical control is carried out, the responsibility of the driver-operator is increased. But increasingly it is solved directly in the Technical System. Right in the cabin. If the driver does not turn off the warning light in time, the engine is stopped and the train slows down. Or in a car: you won't go until you fasten your seatbelt. That is, there is a normal feedback in the same way as between all other elements of the TS. Perhaps one of the reasons why this direction of improving technical systems began actively to develop actively in recent years is a lack of understanding of the place of human in their structure. Rather, not that not understanding, but ... In general, the developer finds himself in a difficult psychological situation. As human the developer of something new rightfully feels like a creator. He cannot fully feel that a human can also be an operator, engine or working body, a part of the mechanism, the machine, the Technical System. 
It's good yet if it's a widely used TS that closely interacts with a human, for example, a car. Here a person can be a developer, an operator and a user at the same time. As with a computer. It is difficult to work with most computer programs even today, when the developers understood the simple truth, that with the program will work a human operator who cares about the result, not the construction of the device. Today such concepts as «user friendly interface» were introduced. But earlier ... Why walk far, remember «Lexicon». And other TS, standing, at first glance, far from the human ... Their name is legion. Here often the thought does not even occur that human is a part of the Technical System. But when developing any of them, it is necessary to analyze the interaction of the elements composing the system and taking into account the capabilities of the human body and mind. Sometimes it is not done. Even worse, often many of the known natural factors are not taken into account, affecting the well-being of humans, the clarity of their movements and the speed of reaction. A newly discovered psychological factors, for example, the «Cassandra effect» [10]? It rises Chernobyl as terrible mushroom, airliners fall and ships collide. \emph{But what else, besides the operator, is needed to get a ready-to-operate Technical System?} \section*{The complete composition of a minimally capable to work Technical System} There is a set of technical objects combined into a system, there is a human operator. Is this enough for the Technical System to perform a useful function and to satisfy user's need, or do you need something else? Let us recall the well-known TRIZ example given in the book by G. Ivanov [11]. We are talking about the Russian scientist Kapitsa, who visited the Simmens and Schuckert plant on production of generators. The owners of the plant showed him the generator, which did not want to work and offered 1000 marks for correction. Kapitsa quickly realized that the central bearing was skewed and jammed, took a hammer and hit the bearing housing -- the generator started working. The confused customers asked for an invoice for the work performed. Kapitsa wrote: \emph{«1 blow with a hammer -- 1 mark, for knowing where to hit -- 999 marks»}. And here is another example from Fenimore Cooper [12]. The heroes of the story run away from the chase, the Indians drove them into thickets of tall dry grass and set it on fire. The fire goes like a wall, what to do? The old hunter was not taken aback and set fire to grass near where they stood. The wall of fire went towards the one who overtook them fiery shaft, burning fuel for it. The fire went out, those who fled escaped. What is a Technical System in the one and the other case? In the first example. The user's need is to start the generator. Useful function -- align the bearing. Operator -- Kapitsa, system of technical objects -- hammer. It turns out that the Technical System is Kapitsa with a hammer. In the second example. The user's need is to stop the fire. Useful function -- destroy the grass (fuel for the coming fire). The operator is an old hunter, the system of technical objects -- flint and steel. Technical System -- a hunter with flint and tinder. What's coming out? A minor action of a human operator using a primitive technical means gave such a tremendous result both in the first and second case! Is that all? 
Is it really a complete set of technical systems, action which allowed in the first case to start a huge generator, and in the second -- to stop a wall of fire? No, it’s not. The most important thing is that which was completely overlooked in the previous reasoning -- informational component. Indeed, you can uselessly hammer on the generator from morning to night. But Kapitsa knocked not at random, but in a strictly defined manner. And in this case, the informational support of his actions consisted of two parts: «the ability to knock with a hammer» and the knowledge, understanding of «where to hit». In the same way, setting fire to the grass could be completely useless, and most of the variants could end in disaster for the one who set it on fire. If we further analyze the second example, it becomes obvious that the burning dry grass makes sense when the hunter not only knows that the wind can drive the fire towards the approaching fire, but if the wind blowing in the right side is present. Therefore, it is very important to know «how to do it?», how to perform a useful function, using for this technical objects and available substance-field resources that also become part of the TS during its operation. To complete a full minimum capable to work TS, it is necessary to take into account the following informational and material components: \begin{itemize}[noitemsep] \item[1.] The technological process of performing the useful function. \item[2.] Material technical and natural objects and systems of different levels of hierarchy. \item[3.] One or more operators who own a set of control techniques of the material objects and systems. \item[4.] Substances and fields necessary for the operation of the material objects and systems, and the products of their processing. \item[5.] Substances and fields necessary for the functioning of the operator, and the products of their processing. \item[6.] The processed object (in some cases). 
\end{itemize} Full composition of a Technical System: \begin{center} \begin{tikzpicture}[rounded corners=2pt] \draw[dashed,rounded corners=6pt,fill=gray!10] (7,1) -- (7,-1) -- (13,-1) -- (13,1) ; \draw[dashed,rounded corners=20pt,fill=gray!20] (2,5.25) -- (2,1) -- (17,1) -- (17,5.25) ; \draw[dashed,rounded corners=20pt,fill=yellow!30] (2,5.25) -- (2,7.5) -- (17,7.5) -- (17,5.25) ; \node[draw=green] at (5.5,6.5) [rectangle] (A0) {\bbox{6}{Information about the realisation of the technical process}}; \node[draw=green] at (13.5,6.5) [rectangle] (A1) {\bbox{6}{Knowledge how to work with the system of technical objects}}; \node[draw=red] at (5,4) [rectangle] (A2) {\bbox{5}{Systems, objects, substances and fields for the operator's actions}}; \node[draw=red] at (5,2) [rectangle] (A3) {\bbox{5}{Systems, objects, substances and fields for the actions of the system of technical objects}}; \node[draw=blue] at (14,4) [rectangle] (A4) {\bbox{3}{Processing products}}; \node[draw=blue] at (14,2) [rectangle] (A5) {\bbox{3}{Processing products}}; \node[draw=blue, line width=2pt] at (10,4) [rectangle] (A6) {\bbox{2}{Operator}}; \node[draw=blue, line width=2pt] at (10,2) [rectangle] (A7) {\bbox{2}{System of technical objects}}; \node[draw,fill=blue!30] at (10,0) [rectangle] (A8) {\bbox{4}{Processed object}}; \draw[color=red] (15,5.6) node {\small Level of information}; \draw[color=red] (15,4.9) node {\small Object level}; \draw[->] (A0) -- (A6) ; \draw[->] (A1) -- (A6) ; \draw[->] (A2) -- (A6) ; \draw[->] (A6) -- (A4) ; \draw[->] (A3) -- (A7) ; \draw[->] (A7) -- (A5) ; \draw[vecArrow] (A6) -- (A7) ; \draw[vecArrow] (A7) -- (A8) ; \end{tikzpicture} \end{center} It is in this composition that the TS gets the opportunity to work everywhere, in any place and full autonomy. Even in zero gravity and airless space. This approach -- complete a TS with everything necessary to carry out its useful features -- does not override the traditional one, but is quite convenient. Collect everything you need to perform the function into one system and transform it, mentally separating it from the supersystems. It is easier to do any job if you prepare in advance all the necessary materials, tools and drawings, arrange it in the most convenient way not to rummage later on around the "workshop" (supersystem), remembering what else is required to provide ensuring the capability of our TS to work. That is, the Technical System is the supersystem for the System of technical (material) objects. This understanding of the TS has something in common with its description given by N. Matvienko [8]: \textbf{«Every Technical System is a set of material, energetic and information elements (in other words, real parts and details, energy resources for their functioning and a set of prescriptions, instructions, commands, signals that determine the sequence and type of interactions of material elements with surrounding systems and with each other)»}. This approach puts the human operator at the center, in the basis of the Technical System. At the same time, a "Technical System", organized by a human, may include the use of object like technical or natural elements -- for example, acupuncture or transportation of goods, as well as avoiding them altogether -- the speech of a lawyer in the court or a dance. This sometimes changes little. As example of this statement may serve a lawyer speaking to the audience with or without a microphone. But, if you really look at it, then a human is a multifunctional Technical System, too. 
Human nature is twofold -- he has the ability to think, to model his actions, to make decisions. And act using his body to do some work. It is here where the informational and material components of a human unite into a single entity. The human operator includes all the main parts of a TS and, subject to information and material support, can perform some functions, in accordance with the possibilities of his body. When these possibilities are exhausted, one can add to the body material objects, combine them into systems and expand the capabilities of the human. The normal process of enfolding a Technical System begins. A rock, stick, shovel, excavator ... Human is getting stronger, he can fulfill a more and more increasing amount of work. And what about folding? After all, it seems that it is already impossible to fold a human. Yes, when talking about folding objects. But here the folding is on the information level. For example, it's time to water the garden. One can take a water can, adapt a water tube, set up a whole irrigation machine. Or you can just look at the sky and, if it is raining soon, you don't have to do anything. That is, folding occurs at the level of functions, technological operations. Finally, at the level of systems and process design. TRIZ itself is a logical continuation of this direction. After all, the concepts of «Ideality», «Ideal Final Result» are basic concepts of this methodology. This was noticed a long time ago, and a rare fairy tale avoids a part where something is going on by itself, a person achieved what he wanted without any expenses. The power of thought, so to say, breaks mountains. Move in time and space. A «technical task» on human development in this direction are prescribed by science fiction writers and storytellers. And there are reasons to think that this direction will be mastered. Levitation, moving objects with glances, communication over long distances without any technical means and much more -- all this can be accessible to humans. Yes, this is interesting, but what does all of the above give for transforming, improving Technical Systems in real practice? \textbf{A dramatic increase in the amount of resources that can be acquired for change when transforming the system.} \emph{In the traditional approach the following resources can be used:} \begin{itemize}[noitemsep] \item[1.] The system itself. \item[2.] Its subsystems. \item[3.] Connections between subsystems. \item[4.] Links between each subsystem and system. \end{itemize} \emph{With the proposed approach, the number of possible resources for use increases dramatically. Here are just a few of them:} \begin{itemize}[noitemsep] \item[1.] The Technical System itself. \item[2.] The Technological process. \item[3.] Technological operations. \item[4.] The System of technical objects. \item[5.] Subsystems of the system of technical objects. \item[6.] The operator as a thinking system. \item[7.] The body of the operator, as a material biological system. \item[8.] Sense organs of the operator. \item[9.] The system of skills of the operator. \item[10.] Individual skills of the operator. \item[11.] Systems, objects, substances and fields consumed by a system of technical objects. \item[12.] Systems, objects, substances and fields consumed by the operator. \item[13.] Relations between the Technical System and the technological process. \item[14.] Relationships between technological operations and the technological process. \item[15.] Relationships between technological operations. \item[16.] 
Relations between the Technical System and technological operations. \item[17.] Interaction of substances and fields consumed by the Technical System with a system of technical objects. \item[18.] Interaction of substances and fields consumed by the Technical System with the operator. \item[19.] Connections between subsystems of the system of technical objects. \item[20.] Connections between each subsystem of the System of Material Objects and the system of technical objects. \item[21.] Relationships between the subsystems of the system of technical objects and the technological process. \end{itemize} ... And many other combinations of elements of the Technical System ... It's time to give some examples. %------------------- \paragraph{1. Classic airplane.} A classic airplane of the beginning of the twentieth century consists of two wings that were attached to the fuselage with the help of numerous struts and cable guides. To make such a plane well flew (this was especially important for air fighters), the stretched cables must be properly tensioned. Since the cables under tension are further stretching they often had to be adjusted using a simple screw mechanism. They applied a special ruler, and the cable was pulled with a dynamometer. About the degree of tensions they judged by the deviation of the cable from a straight line. This process was very laborious and slow. How to be? How to speed up the process of adjusting the stretching? Basically, a new system had to be invented to adjust the stretch marks. If those who solved this problem would only start from the System of Material Objects, used to perform this function, it would be extremely difficult to solve it. If remember and take into account that there is an operator in the system, then the number of possible conversions increases significantly. So, you can solve the problem using the organs of sense of the operator. Indeed, why not use hearing, or rather people with perfect pitch? Piano tuners were invited to adjust the stretch marks, the adjustment process was accelerated many times. Interestingly, since there were not enough piano tuners, the next solution was found, which demonstrates the repeatedly described TRIZ tendency «displacement of human from the TS». Stretch adjustment was handed over to the mechanics again, but instead of a bulky ruler and dynamometer it was suggested to use a suitably configured tuning fork. \paragraph{2. Oil lamp.} It's hard to imagine what a titanic job was done by inventors who tried to make an oil lamp shine well. All the problem was poor oil flow to the wick tip. To improve the supply numerous spring based devices were created to build up pressure in the oil reservoir. Pumps for forced oil supply were also used. That is, work went within the framework of the «system of technical objects» -- they tried to improve the machine. And when examined the full composition of the TS, it became clear that the issue was not in the lamp device, but in the combustible material. When instead of oil that was poorly absorbed by the wick oil liquid kerosene was used, all problems disappeared. \paragraph{3. Computer.} Suppose you want to use your computer in the dark. If we transform the System of Material Objects, then ideas about glowing keys, light bulbs and more come to the thought. If you think about the Technical System, then the answer is obvious -- the operator must be able to type in the dark, remember the location of the keys by heart. What can be said in conclusion? 
Now in TRIZ and other innovative methods the concept of a «Technical System» is completely confused mixing up constantly a system that \textbf{performs} some function, and a «System of technical (material) objects», that is \textbf{designed} to perform some function. Interfering as little as possible into the dispute between "sharp-edged" and "blunt-edged" (see the epigraph), I tried to understand this matter. Without calling the reader to agree with me, I will be glad if this attempt of analysis turns out to be useful to him to some extent. I am very grateful to colleagues V. Lenyashin, G. Severinets, E. Novitskaya, N. Khomenko and to the merciless critic of the first version of this article by V. Sibiryakov for his help in preparing this material. \section*{Literatur} \begin{itemize} \item[1.] B.R. Gaines. General System research: Quo vadis? General System. Yearboor, 24, 1979. \item[2.] A.A. Bogdanov. \foreignlanguage{russian}{Всеобщая организационная наука. Тектология} (General Management Science. Tectology). Book 1. Moscow 1989. \item[3.] G.S. Altshuller. \foreignlanguage{russian}{Творчество как точная наука} (Creativity as an exact science). \url{http://www.trizminsk.org/r/4117.htm#05}. \item[4.] A.F. Kamenyev. \foreignlanguage{russian}{Технические Системы. Закономерности развития} (Technical Systems. Development Patterns). Leningrad, Mashinostroenie 1985. \item[5.] G. Altshuller, B. Zlotin, A. Zusman, V. Filatov. \foreignlanguage{russian}{Поиск новых идей: от озарения к технологии} (The search for new ideas: from insight to technology). Chisinau, Carta Moldaveniasca, 1989. S. 365. \item[6.] V. Korolev. \foreignlanguage{russian}{О понятии «система»} (About the «system» notion). TRIZ Encyclopaedia. \url{http://triz.port5.com/data/w24.html}. \item[7.] V. Korolev. \foreignlanguage{russian}{О понятии «система»} (About the «system» notion) (2). TRIZ Encyclopaedia. \url{http://triz.port5.com/data/w108.html}. \item[8.] N.N. Matvienko. \foreignlanguage{russian}{Термины ТРИЗ} (TRIZ terms, a collection of problems). Wladiwostok, 1991. \item[9.] Y.P. Salamatov. \foreignlanguage{russian}{Система законов развития техники (Основы теории развития Технических систем)} -- The system of laws of technical development (Basics of a theory of technical system development). Institute for Innovative Design. Krasnojarsk, 1996. \url{http://www.trizminsk.org/e/21101000.htm}. \item[10.] V.A. Sviridov. \foreignlanguage{russian}{Человеческий фактор} (The human factor). \url{http://www.rusavia.spb.ru/digest/sv/sv.html}. \item[11.] G.I. Ivanov. \foreignlanguage{russian}{Формулы творчества или как научиться изобретать} (Formulas for creativity or how to learn to invent). Moscow. Prosveshtchenie. 1994 \item[12.] F. Cooper. Prairie. \end{itemize} \end{document}
% 9.5.07
% This is a sample documentation for Compass in the tex format.
% We restrict the use of tex to the following subset of commands:
%
% \section, \subsection, \subsubsection, \paragraph
% \begin{enumerate} (no-nesting), \begin{quote}, \item
% {\tt ... }, {\bf ...}, {\it ... }
% \htmladdnormallink{}{}
% \begin{verbatim}...\end{verbatim} is reserved for code segments
% ...''
%

\section{No Variadic Functions}
\label{NoVariadicFunctions::overview}

``CERT Secure Coding DCL33-C.'' states
\begin{quote}
A variadic function – a function declared with a parameter list ending with ellipsis (...) – can accept a varying number of arguments of differing types. Variadic functions are flexible, but they are also hazardous. The compiler can't verify that a given call to a variadic function passes an appropriate number of arguments or that those arguments have appropriate types. Consequently, a runtime call to a variadic function that passes inappropriate arguments yields undefined behavior. Such undefined behavior could be exploited to run arbitrary code.
\end{quote}

\subsection{Parameter Requirements}

This checker takes no parameters and inputs a source file.

\subsection{Implementation}

This pattern is checked using a simple AST traversal that visits all function and member function references, checking the function declaration for arguments of variadic type. Defined functions with variadic arguments are flagged as violations of this rule.

\subsection{Non-Compliant Code Example}

% write your non-compliant code subsection
\begin{verbatim}
#include <cstdarg>

char *concatenate(char const *s, ...)
{
  return 0;
}

int main()
{
  char *separator = "\t";
  char *t = concatenate("hello", separator, "world", 0);

  return 0;
}
\end{verbatim}

\subsection{Compliant Solution}

The compliant solution uses a chain of string binary operations instead of a variadic function.

\begin{verbatim}
#include <string>

std::string separator = /* some reasonable value */;
std::string s = "hello" + separator + "world";
\end{verbatim}

\subsection{Mitigation Strategies}
\subsubsection{Static Analysis}

Compliance with this rule can be checked using structural static analysis checkers using the following algorithm:

\begin{enumerate}
\item Perform a simple AST traversal on all function and member function references.
\item For each function reference, check the function declaration and the existence of a function definition.
\item If the function definition does not exist, then stop the check.
\item Else check the function declaration arguments for variadic types.
\item Report any violations.
\end{enumerate}

\subsection{References}

% Write some references
% ex. \htmladdnormallink{ISO/IEC 9899-1999:TC2}{https://www.securecoding.cert.org/confluence/display/seccode/AA.+C+References}

Forward, Section 6.9.1, ``Function definitions''

\htmladdnormallink{DCL33-C. Do not define variadic functions}{https://www.securecoding.cert.org/confluence/display/cplusplus/DCL33-C.+Do+not+define+variadic+functions}
{ "alphanum_fraction": 0.7747234328, "avg_line_length": 37.7594936709, "ext": "tex", "hexsha": "d7dadb67c2f8d2410c10c37184aee2935069eaa0", "lang": "TeX", "max_forks_count": 146, "max_forks_repo_forks_event_max_datetime": "2022-03-04T07:32:53.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-27T02:48:34.000Z", "max_forks_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sujankh/rose-matlab", "max_forks_repo_path": "projects/compass/extensions/checkers/noVariadicFunctions/noVariadicFunctionsDocs.tex", "max_issues_count": 174, "max_issues_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:51:05.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-28T18:41:32.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sujankh/rose-matlab", "max_issues_repo_path": "projects/compass/extensions/checkers/noVariadicFunctions/noVariadicFunctionsDocs.tex", "max_line_length": 553, "max_stars_count": 488, "max_stars_repo_head_hexsha": "7597292cf14da292bdb9a4ef573001b6c5b9b6c0", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "maurizioabba/rose", "max_stars_repo_path": "projects/compass/extensions/checkers/noVariadicFunctions/noVariadicFunctionsDocs.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T07:15:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-09T08:54:48.000Z", "num_tokens": 678, "size": 2983 }
\documentclass{article}
\usepackage[utf8]{inputenc}

\title{Problem Set 2}
\author{Fuqing Yang}
\date{January 2020}

\usepackage{natbib}
\usepackage{graphicx}

\begin{document}

\maketitle

\section{Main Tools for Data Scientists}

\begin{itemize}
    \item Measurement
    \item Statistical Programming Languages
    \item Web Scraping
    \item Handling Large Data Sets
    \item Visualization
    \item Modeling
\end{itemize}

\end{document}
{ "alphanum_fraction": 0.7432432432, "avg_line_length": 17.0769230769, "ext": "tex", "hexsha": "76fddfafafdf1b1ba57da2fb5699244bfc79915c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "80016413be1f6fcdaa08c679525245b92795f77a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Fuqing-Yang/DScourseS20", "max_forks_repo_path": "ProblemSets/PS2/PS2_Yang.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "80016413be1f6fcdaa08c679525245b92795f77a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Fuqing-Yang/DScourseS20", "max_issues_repo_path": "ProblemSets/PS2/PS2_Yang.tex", "max_line_length": 43, "max_stars_count": null, "max_stars_repo_head_hexsha": "80016413be1f6fcdaa08c679525245b92795f77a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Fuqing-Yang/DScourseS20", "max_stars_repo_path": "ProblemSets/PS2/PS2_Yang.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 126, "size": 444 }
\section{\module{BaseHTTPServer} --- Basic HTTP server}

\declaremodule{standard}{BaseHTTPServer}
\modulesynopsis{Basic HTTP server (base class for
\class{SimpleHTTPServer} and \class{CGIHTTPServer}).}

\indexii{WWW}{server}
\indexii{HTTP}{protocol}
\index{URL}
\index{httpd}

This module defines two classes for implementing HTTP servers (Web
servers). Usually, this module isn't used directly, but is used as a
basis for building functioning Web servers. See the
\refmodule{SimpleHTTPServer}\refstmodindex{SimpleHTTPServer} and
\refmodule{CGIHTTPServer}\refstmodindex{CGIHTTPServer} modules.

The first class, \class{HTTPServer}, is a
\class{SocketServer.TCPServer} subclass. It creates and listens at the
HTTP socket, dispatching the requests to a handler. Code to create and
run the server looks like this:

\begin{verbatim}
def run(server_class=BaseHTTPServer.HTTPServer,
        handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
\end{verbatim}

\begin{classdesc}{HTTPServer}{server_address, RequestHandlerClass}
This class builds on the \class{TCPServer} class by storing the server
address as instance variables named \member{server_name} and
\member{server_port}. The server is accessible by the handler,
typically through the handler's \member{server} instance variable.
\end{classdesc}

\begin{classdesc}{BaseHTTPRequestHandler}{request, client_address, server}
This class is used to handle the HTTP requests that arrive at the
server. By itself, it cannot respond to any actual HTTP requests; it
must be subclassed to handle each request method (e.g. GET or POST).
\class{BaseHTTPRequestHandler} provides a number of class and instance
variables, and methods for use by subclasses.

The handler will parse the request and the headers, then call a method
specific to the request type. The method name is constructed from the
request. For example, for the request method \samp{SPAM}, the
\method{do_SPAM()} method will be called with no arguments. All of the
relevant information is stored in instance variables of the handler.
Subclasses should not need to override or extend the
\method{__init__()} method.
\end{classdesc}

\class{BaseHTTPRequestHandler} has the following instance variables:

\begin{memberdesc}{client_address}
Contains a tuple of the form \code{(\var{host}, \var{port})} referring
to the client's address.
\end{memberdesc}

\begin{memberdesc}{command}
Contains the command (request type). For example, \code{'GET'}.
\end{memberdesc}

\begin{memberdesc}{path}
Contains the request path.
\end{memberdesc}

\begin{memberdesc}{request_version}
Contains the version string from the request. For example,
\code{'HTTP/1.0'}.
\end{memberdesc}

\begin{memberdesc}{headers}
Holds an instance of the class specified by the \member{MessageClass}
class variable. This instance parses and manages the headers in the
HTTP request.
\end{memberdesc}

\begin{memberdesc}{rfile}
Contains an input stream, positioned at the start of the optional
input data.
\end{memberdesc}

\begin{memberdesc}{wfile}
Contains the output stream for writing a response back to the client.
Proper adherence to the HTTP protocol must be used when writing to
this stream.
\end{memberdesc}

\class{BaseHTTPRequestHandler} has the following class variables:

\begin{memberdesc}{server_version}
Specifies the server software version. You may want to override this.
The format is multiple whitespace-separated strings, where each string
is of the form name[/version].
For example, \code{'BaseHTTP/0.2'}.
\end{memberdesc}

\begin{memberdesc}{sys_version}
Contains the Python system version, in a form usable by the
\member{version_string} method and the \member{server_version} class
variable. For example, \code{'Python/1.4'}.
\end{memberdesc}

\begin{memberdesc}{error_message_format}
Specifies a format string for building an error response to the
client. It uses parenthesized, keyed format specifiers, so the format
operand must be a dictionary. The \var{code} key should be an integer,
specifying the numeric HTTP error code value. \var{message} should be
a string containing a (detailed) error message of what occurred, and
\var{explain} should be an explanation of the error code number.
Default \var{message} and \var{explain} values can be found in the
\var{responses} class variable.
\end{memberdesc}

\begin{memberdesc}{protocol_version}
This specifies the HTTP protocol version used in responses. If set to
\code{'HTTP/1.1'}, the server will permit HTTP persistent connections;
however, your server \emph{must} then include an accurate
\code{Content-Length} header (using \method{send_header()}) in all of
its responses to clients. For backwards compatibility, the setting
defaults to \code{'HTTP/1.0'}.
\end{memberdesc}

\begin{memberdesc}{MessageClass}
Specifies a \class{rfc822.Message}-like class to parse HTTP headers.
Typically, this is not overridden, and it defaults to
\class{mimetools.Message}.
\withsubitem{(in module mimetools)}{\ttindex{Message}}
\end{memberdesc}

\begin{memberdesc}{responses}
This variable contains a mapping of error code integers to two-element
tuples containing a short and long message. For example,
\code{\{\var{code}: (\var{shortmessage}, \var{longmessage})\}}. The
\var{shortmessage} is usually used as the \var{message} key in an
error response, and \var{longmessage} as the \var{explain} key (see
the \member{error_message_format} class variable).
\end{memberdesc}

A \class{BaseHTTPRequestHandler} instance has the following methods:

\begin{methoddesc}{handle}{}
Calls \method{handle_one_request()} once (or, if persistent
connections are enabled, multiple times) to handle incoming HTTP
requests. You should never need to override it; instead, implement
appropriate \method{do_*()} methods.
\end{methoddesc}

\begin{methoddesc}{handle_one_request}{}
This method will parse and dispatch the request to the appropriate
\method{do_*()} method. You should never need to override it.
\end{methoddesc}

\begin{methoddesc}{send_error}{code\optional{, message}}
Sends and logs a complete error reply to the client. The numeric
\var{code} specifies the HTTP error code, with \var{message} as
optional, more specific text. A complete set of headers is sent,
followed by text composed using the \member{error_message_format}
class variable.
\end{methoddesc}

\begin{methoddesc}{send_response}{code\optional{, message}}
Sends a response header and logs the accepted request. The HTTP
response line is sent, followed by \emph{Server} and \emph{Date}
headers. The values for these two headers are picked up from the
\method{version_string()} and \method{date_time_string()} methods,
respectively.
\end{methoddesc}

\begin{methoddesc}{send_header}{keyword, value}
Writes a specific HTTP header to the output stream. \var{keyword}
should specify the header keyword, with \var{value} specifying its
value.
\end{methoddesc}

\begin{methoddesc}{end_headers}{}
Sends a blank line, indicating the end of the HTTP headers in the
response.
\end{methoddesc} \begin{methoddesc}{log_request}{\optional{code\optional{, size}}} Logs an accepted (successful) request. \var{code} should specify the numeric HTTP code associated with the response. If a size of the response is available, then it should be passed as the \var{size} parameter. \end{methoddesc} \begin{methoddesc}{log_error}{...} Logs an error when a request cannot be fulfilled. By default, it passes the message to \method{log_message()}, so it takes the same arguments (\var{format} and additional values). \end{methoddesc} \begin{methoddesc}{log_message}{format, ...} Logs an arbitrary message to \code{sys.stderr}. This is typically overridden to create custom error logging mechanisms. The \var{format} argument is a standard printf-style format string, where the additional arguments to \method{log_message()} are applied as inputs to the formatting. The client address and current date and time are prefixed to every message logged. \end{methoddesc} \begin{methoddesc}{version_string}{} Returns the server software's version string. This is a combination of the \member{server_version} and \member{sys_version} class variables. \end{methoddesc} \begin{methoddesc}{date_time_string}{\optional{timestamp}} Returns the date and time given by \var{timestamp} (which must be in the format returned by \function{time.time()}), formatted for a message header. If \var{timestamp} is omitted, it uses the current date and time. The result looks like \code{'Sun, 06 Nov 1994 08:49:37 GMT'}. \versionadded[The \var{timestamp} parameter]{2.5} \end{methoddesc} \begin{methoddesc}{log_date_time_string}{} Returns the current date and time, formatted for logging. \end{methoddesc} \begin{methoddesc}{address_string}{} Returns the client address, formatted for logging. A name lookup is performed on the client's IP address. \end{methoddesc} \begin{seealso} \seemodule{CGIHTTPServer}{Extended request handler that supports CGI scripts.} \seemodule{SimpleHTTPServer}{Basic request handler that limits response to files actually under the document root.} \end{seealso}
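As a minimal illustration of the request handler interface described
above, a subclass that implements \method{do_GET()} might look like the
following sketch (the handler class name, port number, and response
body are illustrative assumptions only):

\begin{verbatim}
import BaseHTTPServer

class ExampleHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    # Answer GET requests by echoing the requested path as plain text.
    def do_GET(self):
        body = 'You requested %s\n' % self.path
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    httpd = BaseHTTPServer.HTTPServer(('', 8000), ExampleHandler)
    httpd.serve_forever()
\end{verbatim}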
{ "alphanum_fraction": 0.7780537855, "avg_line_length": 37.6382113821, "ext": "tex", "hexsha": "64c069f075a101673595531d12a5aa80caef1abd", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-07-18T21:33:17.000Z", "max_forks_repo_forks_event_min_datetime": "2017-01-30T21:52:13.000Z", "max_forks_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "jasonadu/Python-2.5", "max_forks_repo_path": "Doc/lib/libbasehttp.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "jasonadu/Python-2.5", "max_issues_repo_path": "Doc/lib/libbasehttp.tex", "max_line_length": 75, "max_stars_count": 1, "max_stars_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "jasonadu/Python-2.5", "max_stars_repo_path": "Doc/lib/libbasehttp.tex", "max_stars_repo_stars_event_max_datetime": "2015-10-23T02:57:29.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-23T02:57:29.000Z", "num_tokens": 2281, "size": 9259 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ 12pt, ]{article} \usepackage{lmodern} \usepackage{setspace} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Standardized NEON organismal data for biodiversity research}, pdfauthor={Daijiang Li1,2†‡, Sydne Record3†‡, Eric Sokol4,5†‡, Matthew E. Bitters6, Melissa Y. Chen6, Anny Y. Chung7, Matthew R. Helmus8, Ruvi Jaimes9, Lara Jansen10, Marta A. Jarzyna11,12, Michael G. Just13, Jalene M. LaMontagne14, Brett Melbourne6, Wynne Moss6, Kari Norman15, Stephanie Parker4, Natalie Robinson4, Bijan Seyednasrollah16, Colin Smith17, Sarah Spaulding5, Thilina Surasinghe18, Sarah Thomsen19, Phoebe Zarnetske20,21}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[margin=1in]{geometry} \usepackage{longtable,booktabs} % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering \usepackage{geometry} \geometry{verbose,letterpaper,margin=2.45cm} % \usepackage[breaklinks=true,pdfstartview=FitH,citecolor=blue]{hyperref} \hypersetup{colorlinks,% citecolor=black,% filecolor=red,% linkcolor=blue,% urlcolor=red,% pdfstartview=FitH} % \usepackage[T1]{fontenc} % \usepackage[utf8]{inputenc} % \usepackage{textgreek} % \usepackage{babel} \usepackage{microtype} \usepackage{amsmath} \usepackage[osf]{libertine} \usepackage{libertinust1math} \usepackage{inconsolata} 
\usepackage{booktabs} % \usepackage{setspace} % \doublespacing % \setstretch{1.8999999999999999} \usepackage{lineno} \linenumbers \usepackage{authblk} \renewcommand\Authfont{\fontsize{10.5}{11}\selectfont} \usepackage{caption} % \DeclareCaptionLabelSeparator{bar}{\textbf{ | }} % \captionsetup{ % labelsep=bar % } % \renewcommand{\rmdefault}{cmr} % flush left while keep identation \makeatletter \newcommand\iraggedright{% \let\\\@centercr\@rightskip\@flushglue \rightskip\@rightskip \leftskip\z@skip} \makeatother \raggedright % make pdf as default figure format \DeclareGraphicsExtensions{.pdf,.png, % .jpg,.mps,.jpeg,.jbig2,.jb2,.JPG,.JPEG,.JBIG2,.JB2} \usepackage{booktabs} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage{wrapfig} \usepackage{float} \usepackage{colortbl} \usepackage{pdflscape} \usepackage{tabu} \usepackage{threeparttable} \usepackage{threeparttablex} \usepackage[normalem]{ulem} \usepackage{makecell} \usepackage{xcolor} \newlength{\cslhangindent} \setlength{\cslhangindent}{1.5em} \newenvironment{cslreferences}% {\setlength{\parindent}{0pt}% \everypar{\setlength{\hangindent}{\cslhangindent}}\ignorespaces}% {\par} \title{Standardized NEON organismal data for biodiversity research} \author{Daijiang Li\textsuperscript{1,2†‡}, Sydne Record\textsuperscript{3†‡}, Eric Sokol\textsuperscript{4,5†‡}, Matthew E. Bitters\textsuperscript{6}, Melissa Y. Chen\textsuperscript{6}, Anny Y. Chung\textsuperscript{7}, Matthew R. Helmus\textsuperscript{8}, Ruvi Jaimes\textsuperscript{9}, Lara Jansen\textsuperscript{10}, Marta A. Jarzyna\textsuperscript{11,12}, Michael G. Just\textsuperscript{13}, Jalene M. LaMontagne\textsuperscript{14}, Brett Melbourne\textsuperscript{6}, Wynne Moss\textsuperscript{6}, Kari Norman\textsuperscript{15}, Stephanie Parker\textsuperscript{4}, Natalie Robinson\textsuperscript{4}, Bijan Seyednasrollah\textsuperscript{16}, Colin Smith\textsuperscript{17}, Sarah Spaulding\textsuperscript{5}, Thilina Surasinghe\textsuperscript{18}, Sarah Thomsen\textsuperscript{19}, Phoebe Zarnetske\textsuperscript{20,21}} \date{08 April, 2021} \begin{document} \maketitle % align only at left, not at right. 
\iraggedright \setstretch{1.5} \footnotesize \textsuperscript{1} Department of Biological Sciences, Louisiana State University, Baton Rouge, LA, United States\\ \textsuperscript{2} Center for Computation \& Technology, Louisiana State University, Baton Rouge, LA, United States\\ \textsuperscript{3} Department of Biology, Bryn Mawr College, Bryn Mawr, PA, United States\\ \textsuperscript{4} Battelle, National Ecological Observatory Network (NEON), Boulder, CO, United States\\ \textsuperscript{5} Institute of Arctic and Alpine Research (INSTAAR), University of Colorado Boulder, Boulder, CO, United States\\ \textsuperscript{6} Department of Ecology and Evolutionary Biology, University of Colorado Boulder, Boulder, CO, United States\\ \textsuperscript{7} Departments of Plant Biology and Plant Pathology, University of Georgia, Athens, GA, United States\\ \textsuperscript{8} Integrative Ecology Lab, Center for Biodiversity, Department of Biology, Temple University, Philadelphia, PA, United States\\ \textsuperscript{9} St.~Edward's University, Austin, Texas\\ \textsuperscript{10} Department of Environmental Science and Management, Portland State University, Portland, OR, United States\\ \textsuperscript{11} Department of Evolution, Ecology and Organismal Biology, The Ohio State University, Columbus, OH, United States\\ \textsuperscript{12} Translational Data Analytics Institute, The Ohio State University, Columbus, OH, United States\\ \textsuperscript{13} Ecological Processes Branch, U.S. Army ERDC CERL, Champaign, IL, United States\\ \textsuperscript{14} Department of Biological Sciences, DePaul University, Chicago, IL, United States\\ \textsuperscript{15} Department of Environmental Science, Policy, and Management, University of California Berkeley, Berkeley, CA, United States\\ \textsuperscript{16} School of Informatics, Computing and Cyber Systems, Northern Arizona University, Flagstaff, AZ, United States\\ \textsuperscript{17} Environmental Data Initiative, University of Wisconsin-Madison, Madison, WI\\ \textsuperscript{18} Department of Biological Sciences, Bridgewater State University, Bridgewater, MA, United States\\ \textsuperscript{19} Department of Integrative Biology, Oregon State University, Corvallis, OR, United States\\ \textsuperscript{20} Department of Integrative Biology, Michigan State University, East Lansing, MI, United States\\ \textsuperscript{21} Ecology, Evolution, and Behavior Program, Michigan State University, East Lansing, MI, United States\\ \textsuperscript{†} Equal contributions\\ \textsuperscript{‡} Corresponding authors: \href{mailto:[email protected]}{\nolinkurl{[email protected]}}; \href{mailto:[email protected]}{\nolinkurl{[email protected]}}; \href{mailto:[email protected]}{\nolinkurl{[email protected]}} \normalsize \hypertarget{open-research-statement}{% \section{Open Research Statement}\label{open-research-statement}} No data were collected for this study. All original data were collected by NEON and are publicly available at NEON's data portal. We standardized such data and provided them as a data package, which is available at Github (\url{https://github.com/daijiang/neonDivData}). Data were also permanently archived at the EDI data repository (\url{https://portal-s.edirepository.org/nis/mapbrowse?scope=edi\&identifier=190\&revision=2}). \textbf{Abstract}: Understanding patterns and drivers of species distributions and abundances, and thus biodiversity, is a core goal of ecology. 
Despite advances in recent decades, research into these patterns and processes is currently limited by a lack of standardized, high-quality, empirical data that spans large spatial scales and long time periods. The National Ecological Observatory Network (NEON) fills this gap by providing freely available observational data that are: generated during robust and consistent organismal sampling of several sentinel taxonomic groups within 81 sites distributed across the United States; and will be collected for at least 30 years. The breadth and scope of these data provide a unique resource for advancing biodiversity research. To maximize the potential of this opportunity, however, it is critical that NEON data be maximally accessible and easily integrated into investigators' workflows and analyses. To facilitate their use for biodiversity research and synthesis, we created a workflow to process and format NEON organismal data into the ecocomDP (ecological community data design pattern) format, made available through the \texttt{ecocomDP} R package; we then provided the standardized data as an R data package (\texttt{neonDivData}). We briefly summarize sampling designs and data wrangling decisions for the major taxonomic groups included in this effort. Our workflows are open-source so the biodiversity community may: add additional taxonomic groups; modify the workflow to produce datasets appropriate for their own analytical needs; and regularly update the data packages as more observations become available. Finally, we provide two simple examples of how the standardized data may be used for biodiversity research. By providing a standardized data package, we hope to enhance the utility of NEON organismal data in advancing biodiversity research.

\textbf{Key words}: NEON, Biodiversity, Organismal Data, Data Product, R, Data package, EDI

\hypertarget{introduction-or-why-standardized-neon-organismal-data}{%
\section{Introduction (or why standardized NEON organismal data)}\label{introduction-or-why-standardized-neon-organismal-data}}

A central goal of ecology is to understand the patterns and processes of biodiversity, and this is particularly important in an era of rapid global environmental change (Midgley and Thuiller 2005, Blowes et al. 2019). Such understanding is only possible through studies that address questions like: How is biodiversity distributed across large spatial scales, ranging from ecoregions to continents? What mechanisms drive spatial patterns of biodiversity? Are spatial patterns of biodiversity similar among different taxonomic groups, and if not, why do we see variation? How does community composition vary across spatial and environmental gradients? What are the local and landscape scale drivers of community structure? How and why do biodiversity patterns change over time? Answers to such questions will enable better management and conservation of biodiversity and ecosystem services.

Biodiversity research has a long history (Worm and Tittensor 2018), beginning with major scientific expeditions (e.g., Alexander von Humboldt, Charles Darwin) aiming to document global species lists after the establishment of Linnaeus's Systema Naturae (Linnaeus 1758). Beginning in the 1950s (Curtis 1959, Hutchinson 1959), researchers moved beyond documentation to focus on quantifying patterns of species diversity and describing mechanisms underlying their heterogeneity.
Since the beginning of this line of research, major theoretical breakthroughs (MacArthur and Wilson 1967, Hubbell 2001, Brown et al. 2004, Harte 2011) have advanced our understanding of potential mechanisms causing and maintaining biodiversity. Modern empirical studies, however, have been largely constrained to local or regional scales and focused on one or a few taxonomic groups, because of the considerable effort required to collect observational data.

There are now unprecedented numbers of observations from independent small and short-term ecological studies. These data support research into generalities through syntheses and meta-analyses (Vellend et al. 2013, Blowes et al. 2019, Li et al. 2020), but this work is challenged by the difficulty of integrating data from different studies with varying limitations. Such limitations include: differing collection methods (methodological uncertainties); varying levels of statistical robustness; inconsistent handling of missing data; spatial bias; publication bias; and design flaws (Martin et al. 2012, Nakagawa and Santos 2012, Koricheva and Gurevitch 2014, Welti et al. 2021). Additionally, it has historically been challenging for researchers to obtain and collate data from a diversity of sources for use in syntheses and/or meta-analyses (Gurevitch and Hedges 1999).

Barriers to meta-analyses have been reduced in recent years to bring biodiversity research into the big data era (Hampton et al. 2013, Farley et al. 2018) by large efforts to digitize museum and herbarium specimens (e.g., iDigBio), successful community science programs (e.g., iNaturalist, eBird), technological advances (e.g., remote sensing, automated acoustic recorders), and long-running coordinated research networks. Yet, each of these remedies comes with its own limitations. For instance, museum/herbarium specimens and community science records are increasingly available, but are still incidental and unstructured in terms of the sampling design, and exhibit marked geographic and taxonomic biases (Martin et al. 2012, Beck et al. 2014, Geldmann et al. 2016). Remote sensing approaches may cover large spatial scales, but may also be of low spatial resolution and unable to reliably penetrate vegetation canopy (Palumbo et al. 2017, G Pricope et al. 2019). The standardized observational sampling of woody trees by the United States Forest Service's Forest Inventory and Analysis and of birds by the United States Geological Survey's Breeding Bird Survey have been ongoing across the United States since 2001 and 1966, respectively (Bechtold and Patterson 2005, Sauer et al. 2017), but cover few taxonomic groups. The Long Term Ecological Research Network (LTER) and Critical Zone Observatory (CZO) both are hypothesis-driven research efforts built on decades of previous work (Jones et al. 2021). While both provide considerable observational and experimental datasets for diverse ecosystems and taxa, their sampling and dataset designs are tailored to their specific research questions, and a priori standardization is not possible. Thus, despite recent advances, biodiversity research is still impeded by a lack of standardized, high quality, and open-access data spanning large spatial scales and long time periods.

The recently established National Ecological Observatory Network (NEON) provides continental-scale observational and instrumentation data for a wide variety of taxonomic groups and measurement streams.
Data are collected using standardized methods, across 81 field sites in both terrestrial and freshwater ecosystems, and will be freely available for at least 30 years. These consistently collected, long-term, and spatially robust measurements are directly comparable throughout the Observatory, and provide a unique opportunity for enabling a better understanding of ecosystem change and biodiversity patterns and processes across space and through time (Keller et al. 2008).

NEON data are designed to be maximally useful to ecologists by aligning with FAIR principles (findable, accessible, interoperable, and reusable, Wilkinson et al. 2016). Despite meeting these requirements, however, there are still challenges to integrating NEON organismal data for reproducible biodiversity research. For example: field names may vary across NEON data products, even for similar measurements; some measurements include sampling unit information, whereas units must be calculated for others; and data are in a raw form that often includes metadata unnecessary for biodiversity analyses. These issues and inconsistencies may be overcome through data cleaning and formatting, but understanding how best to perform this task requires a significant investment in the comprehensive NEON documentation for each data product involved in an analysis. Thoroughly reading large amounts of NEON documentation is time-consuming, and the path to a standard data format, which is critical for reproducibility, may vary greatly between NEON organismal data products and users, even for similar analyses. Ultimately, this may result in subtle differences from study to study that hinder meta-analyses using NEON data. A simplified and standardized format for NEON organismal data would facilitate wider usage of these datasets for biodiversity research. Furthermore, if these data were formatted to interface well with datasets from other coordinated research networks, more comprehensive syntheses could be accomplished, advancing macrosystem biology (Record et al. 2020).

One attractive standardized formatting style for NEON organismal data is that of ecocomDP (ecological community data design pattern, O'Brien et al.~In review). EcocomDP is the brainchild of members of the LTER network, the Environmental Data Initiative (EDI), and NEON staff, and provides a model by which data from a variety of sources may be easily transformed into consistently formatted, analysis-ready community-level organismal data packages. This is done using reproducible code that maintains dataset ``levels'': L0 is incoming data, L1 represents an ecocomDP data format and includes tables representing observations, sampling locations, and taxonomic information (at a minimum), and L2 is an output format. Thus far, \textgreater70 LTER organismal datasets have been harmonized to the L1 ecocomDP format through the R package \href{https://github.com/EDIorg/ecocomDP}{\texttt{ecocomDP}}, and more datasets are in the queue for processing into the ecocomDP format by EDI (O'Brien et al.~In review).

We standardized NEON organismal data into the ecocomDP format, and all R code to process NEON data products can be obtained through the R package \texttt{ecocomDP}. For the major taxonomic groups included in this initial effort, NEON sampling designs and major data wrangling decisions are summarized in the Materials and Methods section. We archived the standardized data in the \href{https://portal-s.edirepository.org/nis/mapbrowse?scope=edi\&identifier=190\&revision=2}{EDI Data Repository}.
To facilitate the usage of the standardized datasets, we also developed an R data package, \href{https://github.com/daijiang/neonDivData}{\texttt{neonDivData}}. Throughout this paper, we refer to the input data streams provided by NEON as data products, and to the cleaned and standardized collection of data files provided here as objects within the R data package \texttt{neonDivData}. Standardized datasets will be maintained and updated as new data become available from the NEON portal. We hope this effort will substantially reduce data processing times for NEON data users and greatly facilitate the use of NEON organismal data to advance our understanding of Earth's biodiversity.

\hypertarget{materials-and-methods-or-how-to-standardize-neon-organismal-data}{%
\section{Materials and Methods (or how to standardize NEON organismal data)}\label{materials-and-methods-or-how-to-standardize-neon-organismal-data}}

There are many details to consider when starting to use NEON organismal data products. Below we outline key points relevant to community-level biodiversity analyses with regard to the NEON sampling design and the decisions that were made as the data products presented in this paper were converted into the ecocomDP data model. While the methodological sections below are specific to particular taxonomic groups, there are some general points that apply to all NEON organismal data products.

First, species occurrence and abundance measures as reported in NEON biodiversity data products are not standardized to sampling effort. Because there are often multiple approaches to cleaning (e.g., dealing with multiple levels of taxonomic resolution, interpretations of absences, etc.) and standardizing biodiversity survey data, NEON publishes raw observations along with sampling effort data to preserve as much information as possible so that data users can clean and standardize data as they see fit. The workflows described here for twelve taxonomic groups represented in eleven NEON data products produce standardized counts based on sampling effort, such as count of individuals per area sampled or count standardized to the duration of trap deployment, as described in Table \ref{tab:dataMapping}. The data wrangling workflows described below can be used to access, download, and clean data from the NEON Data Portal by using the R \texttt{ecocomDP} package. To view a catalog of available NEON data products in the ecocomDP format, use \texttt{ecocomDP::search\_data("NEON")}. To import data from a given NEON data product into your R environment, use \texttt{ecocomDP::read\_data()}, and set the \texttt{id} argument to the selected NEON to ecocomDP mapping workflow (the ``L0 to L1 ecocomDP workflow ID'' in Table \ref{tab:dataMapping}). This will return a list of ecocomDP formatted tables and accompanying metadata. To create a flat data table (similar to the R objects in the data package \texttt{neonDivData} described in Table \ref{tab:dataSummary}), use the \texttt{ecocomDP::flatten\_ecocomDP()} function; a minimal example of these calls is given below.

Second, it should be noted that NEON data collection efforts will continue well after this paper is published, and data collection methods and/or processing may change over time. Such changes (e.g., change in the number of traps used for ground beetle collection) or interruptions (e.g., due to COVID-19) to data collection are documented in the Issues log for each data product on the NEON Data Portal as well as the Readme text file that is included with NEON data downloads.
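A minimal sketch of the data access workflow described above, using only the \texttt{ecocomDP} functions named in this section (the ground beetle workflow ID is taken from Table \ref{tab:dataMapping}; additional arguments, such as site or date filters, may be needed in practice):

\begin{verbatim}
library(ecocomDP)

# List the NEON data products available in the ecocomDP format
search_data("NEON")

# Import one NEON-to-ecocomDP mapping workflow (ground beetles) as a
# list of ecocomDP tables plus accompanying metadata
beetles <- read_data(id = "neon.ecocomdp.10022.001.001")

# Flatten the ecocomDP tables into a single analysis-ready table
beetles_flat <- flatten_ecocomDP(beetles)
\end{verbatim}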
\begin{figure} {\centering \includegraphics[width=0.95\linewidth]{/Users/dli30/Github/neonDivData/manuscript/figures/fig_1} } \caption{Generalized sampling schematics for Terrestrial Observation System (A) and Aquatic Observation System (B-D) plots. For Terrestrial Observation System (TOS) plots, Distributed, Tower, and Gradient plots, and locations of various sampling regimes, are presented via symbols. For Aquatic Observation System (AOS) plots, Wadeable streams, Non-wadeable streams, and Lake plots are shown in detail, with locations of sensors and different sampling regimes presented using symbols. Panel A was originally published in Thorpe et al. (2016).}\label{fig:Fig1Design} \end{figure} \begin{table} \caption{\label{tab:dataMapping}Mapping NEON data products to ecocomDP formatted data packages with abundances \emph{standardized} to observation effort. IDs in the \texttt{L0\ to\ L1\ ecocomDP\ workflow\ ID} columns were used in the R package \texttt{ecocomDP} to standardize organismal data. Notes: *Bird counts are reported per taxon per ``cluster'' observed in each point count in the NEON data product and have not been further standardized to sampling effort because standard methods for modeling bird abundances are beyond the scope of this paper; ** plants percent cover value \texttt{NA} represents presence/absence data only; *** incidence rate per number of tests conducted is reported for tick pathogens.} \centering \resizebox{\linewidth}{!}{ \begin{tabular}[t]{>{\raggedright\arraybackslash}p{7.5em}>{\raggedright\arraybackslash}p{6em}>{\raggedright\arraybackslash}p{12em}>{\raggedright\arraybackslash}p{12em}>{\raggedright\arraybackslash}p{9em}>{\raggedright\arraybackslash}p{9em}} \toprule Taxon group & L0 dataset (NEON data product ID) & Version of NEON data used in this study & L0 to L1 ecocomDP workflow ID & Primary variable reported in ecocomDP observation table & Units\\ \midrule \cellcolor{gray!6}{Algae} & \cellcolor{gray!6}{DP1.20166.001} & \href{https://doi.org/10.48443/3cvp-hw55}{\cellcolor{gray!6}{https://doi.org/10.48443/3cvp-hw55 and provisional data}} & \cellcolor{gray!6}{neon.ecocomdp.20166.001.001} & \cellcolor{gray!6}{cell density OR cells OR valves} & \cellcolor{gray!6}{cells/cm2 OR cells/mL}\\ Beetles & DP1.10022.001 & \href{https://doi.org/10.48443/tx5f-dy17}{https://doi.org/10.48443/tx5f-dy17 and provisional data} & neon.ecocomdp.10022.001.001 & abundance & count per trap day\\ \cellcolor{gray!6}{Birds*} & \cellcolor{gray!6}{DP1.10003.001} & \href{https://doi.org/10.48443/s730-dy13}{\cellcolor{gray!6}{https://doi.org/10.48443/s730-dy13 and provisional data}} & \cellcolor{gray!6}{neon.ecocomdp.10003.001.001} & \cellcolor{gray!6}{cluster size} & \cellcolor{gray!6}{count of individuals}\\ Fish & DP1.20107.001 & \href{https://doi.org/10.48443/17cz-g567}{https://doi.org/10.48443/17cz-g567 and provisional data} & neon.ecocomdp.20107.001.001 & abundance & catch per unit effort\\ \cellcolor{gray!6}{Herptiles} & \cellcolor{gray!6}{DP1.10022.001} & \href{https://doi.org/10.48443/tx5f-dy17}{\cellcolor{gray!6}{https://doi.org/10.48443/tx5f-dy17 and provisional data}} & \cellcolor{gray!6}{neon.ecocomdp.10022.001.002} & \cellcolor{gray!6}{abundance} & \cellcolor{gray!6}{count per trap day}\\ Macroinvertebrates & DP1.20120.001 & \href{https://doi.org/10.48443/855x-0n27}{https://doi.org/10.48443/855x-0n27 and provisional data} & neon.ecocomdp.20120.001.001 & density & count per square meter\\ \cellcolor{gray!6}{Mosquitoes} & \cellcolor{gray!6}{DP1.10043.001} & 
\href{https://doi.org/10.48443/9smm-v091}{\cellcolor{gray!6}{https://doi.org/10.48443/9smm-v091 and provisional data}} & \cellcolor{gray!6}{neon.ecocomdp.10043.001.001} & \cellcolor{gray!6}{abundance} & \cellcolor{gray!6}{count per trap hour}\\ Plants** & DP1.10058.001 & \href{https://doi.org/10.48443/abge-r811}{https://doi.org/10.48443/abge-r811 and provisional data} & neon.ecocomdp.10058.001.001 & percent cover & percent of plot area covered by taxon\\ \cellcolor{gray!6}{Small mammals} & \cellcolor{gray!6}{DP1.10072.001} & \href{https://doi.org/10.48443/j1g9-2j27}{\cellcolor{gray!6}{https://doi.org/10.48443/j1g9-2j27 and provisional data}} & \cellcolor{gray!6}{neon.ecocomdp.10072.001.001} & \cellcolor{gray!6}{count} & \cellcolor{gray!6}{unique individuals per 100 trap nights per plot per month}\\ Tick pathogens*** & DP1.10092.001 & \href{https://doi.org/10.48443/5fab-xv19}{https://doi.org/10.48443/5fab-xv19 and provisional data} & neon.ecocomdp.10092.001.001 & positivity rate & positive tests per pathogen per sampling event\\ \cellcolor{gray!6}{Ticks} & \cellcolor{gray!6}{DP1.10093.001} & \href{https://doi.org/10.48443/dx40-wr20}{\cellcolor{gray!6}{https://doi.org/10.48443/dx40-wr20 and provisional data}} & \cellcolor{gray!6}{neon.ecocomdp.10093.001.001} & \cellcolor{gray!6}{abundance} & \cellcolor{gray!6}{count per square meter}\\ Zooplankton & DP1.20219.001 & \href{https://doi.org/10.48443/qzr1-jr79}{https://doi.org/10.48443/qzr1-jr79 and provisional data} & neon.ecocomdp.20219.001.001 & density & count per liter\\ \bottomrule \end{tabular}} \end{table} \hypertarget{terrestrial-organisms}{% \subsection{Terrestrial Organisms}\label{terrestrial-organisms}} \hypertarget{breeding-land-birds}{% \subsubsection{Breeding Land Birds}\label{breeding-land-birds}} \textbf{NEON Sampling Design} NEON designates breeding landbirds as ``smaller birds (usually exclusive of raptors and upland game birds) not usually associated with aquatic habitats'' (Ralph 1993, Thibault 2018). Most species observed are diurnal and include both resident and migrant species. Landbirds are surveyed via point counts in each of the 47 terrestrial sites (Thibault 2018). At most NEON sites, breeding landbird points are located in five to ten 3 \(\times\) 3 grids (Fig. \ref{fig:Fig1Design}), which are themselves located in representative (dominant) vegetation. Whenever possible, grid centers are co-located with distributed base plot centers. When sites are too small to support a minimum of five grids, separated by at least 250 m from edge to edge, point counts are completed at single points instead of grids. In these cases, points are located at the southwest corners of distributed base plots within the site. Five to 25 points may be surveyed depending on the size and spatial layout of the site, with exact point locations dictated by a stratified-random spatial design that maintains a 250 m minimum separation between points. Surveys occur during one or two sampling bouts per season, at large and small sites respectively. Observers go to the specified points early in the morning and track birds observed during each minute of a 6-minute period, following a 2-minute acclimation period, at each point (Thibault 2018). Each point count contains species, sex, and distance to each bird (measured with a laser rangefinder except in the case of flyovers) seen or heard. Information relevant for subsequent modeling of detectability is also collected during the point counts (e.g., weather, detection method). 
The point count surveys for NEON were modified from the Integrated Monitoring in Bird Conservation Regions (IMBCR) field protocol for spatially-balanced sampling of landbird populations (Pavlacky Jr et al. 2017). \textbf{Data Wrangling Decisions} The bird point count NEON data product (`DP1.10003.001') consists of a list of two associated data frames: \texttt{brd\_countdata} and \texttt{brd\_perpoint}. The former data frame contains information such as locations, species identities, and their counts. The latter data frame contains additional location information such as latitude and longitude coordinates and environmental conditions during the time of the observations. The separate data frames are linked by `eventID', which refers to the location, date and time of the observation. To prepare the bird point count data for the L1 ecocomDP model, we first merged both data frames into one and then removed columns that are likely not needed for most community-level biodiversity analyses (e.g., observer names, etc.). The field \texttt{taxon\_id} in the R object \texttt{data\_bird} with the \texttt{neonDivData} data package consists of the standard AOU 4-letter species code, although \texttt{taxon\_rank} refers to eight potential levels of identification (class, family, genus, species, speciesGroup, subfamily, and subspecies). Users can decide which level is appropriate, for example one might choose to exclude all unidentified birds (taxon\_id = UNBI), where no further details are available below the class level (Aves sp.). The NEON sampling protocol has evolved over time, so users are advised to check whether the `samplingProtocolVersion' associated with bird point count data (`DP1.10003.001') fits their data requirements and subset as necessary. Older versions of protocols can be found at the \href{https://data.neonscience.org/documents/-/document_library_display/JEygRkSpUBoq/view/1883155?_110_INSTANCE_JEygRkSpUBoq_topLink=home\&_110_INSTANCE_JEygRkSpUBoq_delta1=20\&_110_INSTANCE_JEygRkSpUBoq_keywords=\&_110_INSTANCE_JEygRkSpUBoq_advancedSearch=false\&_110_INSTANCE_JEygRkSpUBoq_andOperator=true\&p_r_p_564233524_resetCur=false\&_110_INSTANCE_JEygRkSpUBoq_delta2=20\&_110_INSTANCE_JEygRkSpUBoq_cur2=1}{NEON document library}. \hypertarget{ground-beetles-and-herp-bycatch}{% \subsubsection{Ground Beetles and Herp Bycatch}\label{ground-beetles-and-herp-bycatch}} \textbf{NEON Sampling Design} Ground beetle sampling is conducted via pitfall trapping, across 10 distributed plots at each NEON site. The original sampling design included the placement of a pitfall trap at each of the cardinal directions along the distributed plot boundary, for a total of four traps per plot and 40 traps per site. In 2018, sampling was reduced via the elimination of the North pitfall trap in each plot, resulting in 30 traps per site (LeVan et al. 2019b). Beetle pitfall trapping begins when the temperature has been \textgreater4°C for 10 days in the spring and ends when temperatures dip below this threshold in the fall. Sampling occurs biweekly throughout the sampling season with no single trap being sampled more frequently than every 12 days (LeVan 2020a). After collection, the samples are separated into carabid species and bycatch. Invertebrate bycatch is pooled to the plot level and archived. Vertebrate bycatch is sorted and identified by NEON technicians, then archived at the trap level. 
Carabid samples are sorted and identified by NEON technicians, after which a subset of carabid individuals are sent to be pinned and re-identified by an expert taxonomist. More details can be found in Hoekman et al. (2017) and LeVan et al. (2019b).

Pitfall traps and sampling methods are designed by NEON to reduce vertebrate bycatch (LeVan et al. 2019b). The pitfall cup is medium in size with a low clearance cover installed over the trap entrance to minimize large vertebrate bycatch. When a live vertebrate with the ability to move of its own volition is found in a trap, the animal is released. Live but moribund vertebrates are euthanized and collected along with deceased vertebrates. When ≥15 individuals of a vertebrate species are collected, cumulatively, within a single plot, NEON may initiate localized mitigation measures such as temporarily deactivating traps and removing all traps from the site for the remainder of the season. Thus, while herpetofaunal (herp) bycatch is present in many pitfall samples, it is unclear how well these pitfall traps capture herp community structure and diversity, due to these active efforts to reduce vertebrate bycatch. Users of NEON herp bycatch data should be aware of these limitations.

\textbf{Data Wrangling Decisions}

The beetle and herp bycatch data product identifier is `DP1.10022.001'. Carabid samples are recorded and identified in a multi-step workflow wherein a subset of samples are passed on in each successive step. Individuals are first identified by the sorting technician, after which a subset is sent on to be pinned. Some especially difficult individuals are not identified by technicians during sorting, instead being labelled ``other carabid''. The identifications for those individuals are recorded with the pinning data. Any individuals for which identification is still uncertain are then verified by an expert taxonomist. There are a few cases where an especially difficult identification was sent to multiple expert taxonomists and they did not agree on a final taxon; these individuals were excluded from the data set at the recommendation of NEON staff. Preference is given to expert identification whenever available. However, these differences in taxonomic expertise do not seem to cause systematic biases in estimating species richness across sites, but non-expert taxonomists are more likely to misidentify non-native carabid species (Egli et al. 2020).

Beetle abundances are recorded for the sorted samples by NEON technicians. To account for individual samples that were later reidentified, the final abundance for a species is the original sorting sample abundance minus the number of individuals that were given a new ID. Prior to 2018, \texttt{trappingDays} values were not included for many sites. Missing entries were calculated as the range from \texttt{setDate} through \texttt{collectDate} for each trap. We also accounted for a few plots for which \texttt{setDate} was not updated based on a previous collection event in the \texttt{trappingDays} calculations. To facilitate easy manipulation of data within and across bouts, a new \texttt{boutID} field was created to identify all trap collection events at a site in a bout. The original \texttt{EventID} field is intended to identify a bout, but has a number of issues that necessitate the creation of a new ID. First, \texttt{EventID} does not correspond to a single collection date but rather all collections in a week.
This is appropriate for the small number of instances when collections for a bout happen over multiple consecutive days (\textasciitilde5\% of bouts), but prevents analysis of bout patterns at the temporal scale of a weekday. The data here were updated so all entries for a bout correspond to the date (i.e., \texttt{collectDate}) on which the majority of traps are collected to maintain the weekday-level resolution with as high of fidelity as possible, while allowing for easy aggregation within bouts and \texttt{collectDate}'s. Second, there were a few instances in which plots within a site were set and collected on the same day, but have different \texttt{EventID}'s. These instances were all considered a single bout by our new \texttt{boutID}, which is a unique combination of \texttt{setDate}, \texttt{collectDate}, and \texttt{siteID}. Herpetofaunal bycatch (amphibian and reptile) in pitfall traps were identified to species or the lowest taxonomic level possible within 24 h of recovery from the field. To process the herp bycatch NEON data we cleaned \texttt{trappingDays} and the other variables and added \texttt{boutID} as described above for beetles. The variable \texttt{sampleType} in the \texttt{bet\_sorting} table provides the type of animal caught in a pitfall trap as one of five types: `carabid', `vert bycatch herp', `other carabid', `invert bycatch' and `vert bycatch mam'. We filtered the beetle data described above to only include the `carabid' and `other carabid' types. For herps, we only kept the \texttt{sampleType} of `vert bycatch herp'. Abundance data of beetles and herps bycatch were standardized to be the number of individuals captured per trap day. \hypertarget{mosquitos}{% \subsubsection{Mosquitos}\label{mosquitos}} \textbf{NEON Sampling Design} Mosquito specimens are collected at 47 terrestrial sites across all NEON domains and the data are reported in NEON data product DP1.10043.001. Traps are distributed throughout each site according to a stratified-random spatial design used for all Terrestrial Observation System sampling, maintaining stratification across dominant (\textgreater5\% of total cover) vegetation types (LeVan 2020b). The number of mosquito traps placed in each vegetation type is proportional to its percent cover, until 10 total mosquito traps have been placed in the site. Mosquito traps are typically located within 30 m of a road to facilitate expedient sampling, and are placed at least 300 m apart to maintain independence. Mosquito monitoring is divided into off-season and field season sampling (LeVan et al. 2019a). Off-season sampling begins after three consecutive zero-catch field sampling bouts have occurred, and represents a reduced sampling regime that is designed for the rapid detection of when the next field season should begin and to provide mosquito phenology data. Off-season sampling is conducted at three dedicated mosquito traps spread throughout each core site, while temperatures are \textgreater10 °C. Once per week, technicians deploy traps at dusk and then collect them at dawn the following day. Field season sampling begins when the first mosquito is detected during off season sampling (LeVan et al. 2019a). Technicians deploy traps at all 10 dedicated mosquito trap locations per site. Traps remain out for a 24-hour period, or sampling bout, and bouts occur every two or four weeks at core and relocatable terrestrial sites, respectively. 
During the sampling bout, traps are serviced twice and yield one night-active sample, collected at dawn or about eight hours after the trap was set, and one day-active sample, collected at dusk or \textasciitilde16 hours after the trap was set. Thus, a 24-hour sampling bout yields 20 samples from 10 traps. NEON collects mosquito specimens using Center for Disease Control (CDC) CO\textsubscript{2} light traps (LeVan et al. 2019a). These traps have been used by other public health and mosquito-control agencies for a half-century, so that NEON mosquito data align across NEON field sites and with existing long-term data sets. A CDC CO\textsubscript{2} light trap consists of a cylindrical insulated cooler that contains dry ice, a plastic rain cover attached to a battery powered light/fan assembly, and a mesh collection cup. During deployment, the dry ice sublimates and releases CO\textsubscript{2}. Mosquitoes attracted to the CO\textsubscript{2} bait are sucked into the mesh collection cup by the battery-powered fan, where they remain alive until trap collection. Following field collection, NEON's field ecologists process, package, and ship the samples to an external lab where mosquitoes are identified to species and sex (when possible). A subset of identified mosquitoes are tested for infection by pathogens to quantify the presence/absence and prevalence of various arboviruses. Some mosquitoes are set aside for DNA barcode analysis as well as long-term archiving. Particularly rare or difficult to identify mosquito specimens are prioritized for DNA barcoding. More details can be found in LeVan et al. (2019a). \textbf{Data Wrangling Decisions} The mosquito data product (\texttt{DP1.10043.001}) consists of four data frames: trapping data (\texttt{mos\_trapping}), sorting data (\texttt{mos\_sorting}), archiving data (\texttt{mos\_archivepooling}), and expert taxonomist processed data (\texttt{mos\_expertTaxonomistIDProcessed}). We first removed rows (records) with missing information about location, collection date, and sample or subsample ID for all data frames. We then merged all four data frames into one, wherein we only kept records for target taxa (i.e., targetTaxaPresent = ``Y'') with no known compromised sampling condition (i.e., sampleCondition = ``No known compromise''). We further removed a small number of records with species identified only to the family level; all remaining records were identified at least to the genus level. We estimated the total individual count per trap-hour for each species within a trap as \texttt{(individualCount/subsampleWeight)\ *\ totalWeight\ /\ trapHours}. We then removed columns that were not likely to be used for calculating biodiversity values. \hypertarget{small-mammals}{% \subsubsection{Small Mammals}\label{small-mammals}} \textbf{NEON Sampling Design} NEON defines small mammals based on taxonomic, behavioral, dietary, and size constraints, and includes any rodent that is (1) nonvolant; (2) nocturnally active; (3) forages predominantly aboveground; and (4) has a mass \textgreater5 grams, but \textless\textasciitilde{} 500-600 grams (Thibault et al. 2019). In North America, this includes cricetids, heteromyids, small sciurids, and introduced murids, but excludes shrews, large squirrels, rabbits, or weasels, although individuals of these species may be incidentally captured. Small mammals are collected at NEON sites using Sherman traps, identified to species in the field, marked with a unique tag, and released (Thibault et al. 2019). 
Multiple 90 m \(\times\) 90 m trapping grids are set up in each terrestrial field site within the dominant vegetation type. Each 90 m \(\times\) 90 m trapping grid contains 100 traps placed in a pattern with 10 rows and 10 columns set 10 m apart. Three of these 90 m \(\times\) 90 m grids per site are designated pathogen (as opposed to diversity) grids and additional blood sampling is conducted here. Small mammal sampling occurs in bouts, with a bout comprised of three consecutive (or nearly consecutive) nights of trapping at each pathogen grid and one night of trapping at each diversity grid. The timing of sampling occurs within 10 days before or after the new moon. The number of bouts per year is determined by site type: core sites are typically trapped for six bouts per year (except for areas with shorter seasons due to cold weather), while relocatable sites are trapped for four bouts per year. More information can be found in Thibault et al. (2019). \textbf{Data Wrangling Decisions} In the small mammal NEON data product (\texttt{DP1.10072.001}), records are stratified by NEON site, year, month, and day and represent data from both the diversity and pathogen sampling grids. Capture records were removed if they were not identified to genus or species (e.g., if the species name was denoted as `either/or' or as family name), or if their trap status is not ``5 - capture'' or ``4 - more than 1 capture in one trap''. Abundance data for each plot and month combination were standardized to be the number of individuals captured per 100 trap nights. \hypertarget{terrestrial-plants}{% \subsubsection{Terrestrial Plants}\label{terrestrial-plants}} \textbf{NEON Sampling Design} NEON plant diversity sampling is completed once or twice per year (one or two `bouts') in multiscale, 400 m\textsuperscript{2} (20 m \(\times\) 20 m) plots (Barnett 2019). Each multiscale plot is subdivided into four 100 m\textsuperscript{2} (10 m \(\times\) 10 m) subplots that each encompass one or two sets of 10 m\textsuperscript{2} (3.16 m \(\times\) 3.16 m) subplots within which a 1 m\textsuperscript{2} (1 m \(\times\) 1 m) subplot is nested. The percent cover of each plant species is estimated visually in the 1 m\textsuperscript{2} subplots, while only species presences are documented in the 10 m\textsuperscript{2} and 100 m\textsuperscript{2} subplots. To estimate plant percent cover by species, technicians record this value for all species in a 1 m\textsuperscript{2} subplot (Barnett 2019). Next, the remaining 9 m\textsuperscript{2} area of the associated 10 m\textsuperscript{2} subplot is searched for the presence of species. The process is repeated if there is a second 1 and 10 m\textsuperscript{2} nested pair in the specific 100 m\textsuperscript{2} subplot. Next, the remaining 80 m\textsuperscript{2} area is searched for the presence of species; data can be aggregated for a complete list of species present at the 100 m\textsuperscript{2} subplot scale. Data for all four 100 m\textsuperscript{2} subplots represent indices of species at the 400 m\textsuperscript{2} plot scale. In most cases, species encountered in a nested, finer scale, subplot are not rerecorded in any corresponding larger subplot - in order to avoid duplication. Plant species are occasionally recorded more than once, however, when data are aggregated across all nested subplots within each 400 m\textsuperscript{2} plot, and these require removal from the dataset. More details about the sampling design can be found in Barnett et al. (2019). 
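As an illustration of this aggregation step, the following minimal sketch (not NEON's or this paper's code) pools presence records across the nested subplots of a plot and drops duplicated species; the input data frame \texttt{plant\_raw} and the column names \texttt{plotID}, \texttt{boutNumber}, and \texttt{taxonID} are assumptions made for the example.

\begin{verbatim}
library(dplyr)

# `plant_raw`: one row per species record from the 1, 10, and 100 m^2 subplots
# of each plot; object and column names are assumed for illustration.
plot_level <- plant_raw %>%
  distinct(plotID, boutNumber, taxonID) %>%      # keep each species once per plot and bout
  count(plotID, boutNumber, name = "n_species")  # plot-level species richness per bout
\end{verbatim}
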
NEON manages plant taxonomic entries with a master taxonomy list that is based on the community standard, where possible. Using this list, synonyms for a given species are converted to the currently accepted name. The master taxonomy for plants is the USDA PLANTS Database (USDA, NRCS. 2014. \url{https://plants.usda.gov}), and the portions of this database included in the NEON plant master taxonomy list are those pertaining to native and naturalized plants present within the NEON sampling area. A sublist for each NEON domain includes those species with ranges that overlap the domain as well as nativity designations - introduced or native - in that part of the range. If a species is reported at a location outside of its known range, and the record proves reliable, the master taxonomy list is updated to reflect the distribution change. For more details on plant taxonomic handling, see Barnett (2019). For more on the NEON plant master taxonomy list see NEON.DOC.014042 (\url{https://data.neonscience.org/api/v0/documents/NEON.DOC.014042vK}).

\textbf{Data Wrangling Decisions}

In the plant presence and percent cover NEON data product (\texttt{DP1.10058.001}), sampling at the 1 m \(\times\) 1 m scale also includes observations of abiotic and non-target ground cover (e.g., soil, water, downed wood), so we removed records with \texttt{divDataType} as ``otherVariables.'' We also removed records whose \texttt{targetTaxaPresent} is \texttt{N} (i.e., a non-target species). Additionally, for all spatial resolutions (i.e., 1 m\textsuperscript{2}, 10 m\textsuperscript{2}, and 100 m\textsuperscript{2} data), any record lacking information critical for combining data within a plot and for a given sampling bout (i.e., \texttt{plotID}, \texttt{subplotID}, \texttt{boutNumber}, \texttt{endDate}, or \texttt{taxonID}) was dropped from the dataset. Furthermore, records without a definitive genus- or species-level \texttt{taxonID} (i.e., those representing unidentified morphospecies) were not included. To combine data from different spatial resolutions into one data frame, we created a pivot column entitled \texttt{sample\_area\_m2} (with possible values of 1, 10, and 100). Because of the nested sampling design of the plant data, to capture all records within a subplot at the 100 m\textsuperscript{2} scale, we incorporated all data from both the 1 m\textsuperscript{2} and 10 m\textsuperscript{2} scales for that subplot. Similarly, to obtain all records within a plot at the 400 m\textsuperscript{2} scale, we included all data from the four 100 m\textsuperscript{2} subplots (and their nested subplots) of that plot. Species abundance information was only recorded as areal coverage within the 1 m \(\times\) 1 m subplots; however, users may use the frequency of a species across subplots within a plot, or across plots within a site, as a proxy of its abundance if needed.

\hypertarget{ticks-and-tick-pathogens}{%
\subsubsection{Ticks and Tick Pathogens}\label{ticks-and-tick-pathogens}}

\textbf{NEON Sampling Design}

Tick sampling occurs in six distributed plots at each site, which are randomly chosen in proportion to NLCD land cover class (LeVan et al. 2019c). Ticks are sampled by walking the perimeter of a 40 m \(\times\) 40 m plot using a 1 m \(\times\) 1 m drag cloth. Ideally, 160 m are sampled (the shortest straight-line distance between corners), but the cloth can be dragged around obstacles if a straight line is not possible. The acceptable total sampling distance is between 80 and 180 m per plot. The cloth can also be flagged over vegetation when the cloth cannot be dragged across it.
Ticks are collected from the cloth and technicians' clothing at appropriate intervals, depending on vegetation density, and at every corner of the plot. Specimens are immediately transferred to a vial containing 95\% ethanol. Onset and offset of tick sampling coincide with phenological milestones at each site, beginning within two weeks of the onset of green-up and ending within two weeks of vegetation senescence (LeVan et al. 2019c). Sampling bouts are only initiated if the high temperature on the two consecutive days prior to planned sampling was \textgreater0°C. Early season sampling is conducted on a low-intensity schedule, with one sampling bout every six weeks. When more than five ticks of any life stage have been collected within the last calendar year at a site, sampling switches to a high-intensity schedule at the site - with one bout every three weeks. A site remains on the high-intensity schedule until fewer than five ticks are collected within a calendar year, after which sampling reverts to the low-intensity schedule. Ticks are sent to an external facility for identification to species, life stage, and sex (LeVan et al. 2019c). A subset of nymphal ticks are additionally sent to a pathogen testing facility. \emph{Ixodes} species are tested for \emph{Anaplasma phagocytophilum}, \emph{Babesia microti}, \emph{Borrelia burgdorferi} sensu lato, \emph{Borrelia miyamotoi}, \emph{Borrelia mayonii}, other \emph{Borrelia} species (\emph{Borrelia} sp.), and an \emph{Ehrlichia muris}-like agent (Pritt et al. 2017). Non-\emph{Ixodes} species are tested for \emph{Anaplasma phagocytophilum}, \emph{Borrelia lonestari} (and other undefined \emph{Borrelia} species), \emph{Ehrlichia chaffeensis}, \emph{Ehrlichia ewingii}, \emph{Francisella tularensis}, and \emph{Rickettsia rickettsii}. Additional information about tick pathogen testing can be found in the Tick Pathogen Testing SOP (\url{https://data.neonscience.org/api/v0/documents/UMASS_LMZ_tickPathogens_SOP_20160829}) for the NEON Tick-borne Pathogen Status data product.

\textbf{Data Wrangling Decisions}

The tick NEON data product (\texttt{DP1.10093.001}) consists of two dataframes: \texttt{tck\_taxonomyProcessed}, hereafter referred to as `taxonomy data', and \texttt{tck\_fielddata}, hereafter referred to as `field data'. Users should be aware of some issues related to taxonomic ID. Counts assigned to higher taxonomic levels (e.g., at the order level \emph{Ixodida}; IXOSP2) are not the sum of lower levels; rather, they represent the counts of individuals that could not reliably be assigned to a lower taxonomic unit. Samples that were not identified in the lab were assigned to the highest taxonomic level (order \emph{Ixodida}; IXOSP2). However, users could make an informed decision to assign these ticks to the most probable group if a subset of individuals from the same sample were assigned to a lower taxonomy. To clean the tick data, we first removed surveys and samples not meeting quality standards. In the taxonomy data, we removed samples where the sample condition was not listed as ``OK'' (\textless1\% of records). In the field data, we removed records where samples were not collected due to logistical concerns (10\%). We then combined male and female counts in the taxonomy table into one ``adult'' class. The taxonomy table was reformatted so that every row contained a \texttt{sampleID} and the counts for each species and life stage were in separate columns (i.e., ``wide format''). Next, we joined the field data to the taxonomy data, using the sample ID to link the two tables.
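A minimal sketch of this reshaping and join is given below; it is not the exact workflow used here, and the column names (\texttt{sampleID}, \texttt{acceptedTaxonID}, \texttt{sexOrAge}, \texttt{individualCount}) are assumptions used only for illustration.

\begin{verbatim}
library(dplyr)
library(tidyr)

# Pool males and females into an "Adult" class, then pivot to wide format:
# one row per sampleID, one column per taxon/life-stage combination.
tax_wide <- tck_taxonomy %>%
  mutate(lifeStage = ifelse(sexOrAge %in% c("Male", "Female"), "Adult", sexOrAge)) %>%
  group_by(sampleID, acceptedTaxonID, lifeStage) %>%
  summarise(count = sum(individualCount), .groups = "drop") %>%
  pivot_wider(names_from = c(acceptedTaxonID, lifeStage),
              values_from = count, values_fill = 0)

# A left join keeps drags where no ticks were found in the field; their taxon
# count columns are NA after the join and are subsequently set to zero.
tick_joined <- tck_field %>%
  left_join(tax_wide, by = "sampleID")
\end{verbatim}
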
When joining, we retained field records where no ticks were found in the field and thus there were no associated taxonomy data. For drags where ticks were not found, counts were set to zero. All counts were standardized by the area sampled. Prior to 2019, both field surveyors and laboratory taxonomists enumerated each tick life stage; consequently, in the joined dataset there were two sets of counts (``field counts'' and ``lab counts''). However, starting in 2019, counts were performed by taxonomists rather than field surveyors. Field surveys conducted after 2019 no longer have field counts. Users of tick abundance data should be aware that this change in protocol has several implications for data wrangling and for analysis. First, after 2019, tick counts are no longer published at the same time as field survey data. Consequently, some field records from the most recent years have tick presence recorded (targetTaxaPresent = ``Y''), but do not yet have associated counts or taxonomic information, and so the counts are still listed as \texttt{NA}. Users should be aware that counts of zero are therefore published earlier than positive counts. We strongly urge users to filter data to those years where there are no counts pending. The second major issue is that in years where both field counts and lab counts were available, they did not always agree (8\% of records). In cases of disagreement, we generally used lab counts in the final abundance data, because this is the source of all tick count data after 2019 and because life-stage identification was more accurate. However, there were a few exceptions where we used field count data. In some cases, only a subsample of a certain life stage was counted in the lab, which resulted in higher field counts than lab counts. In this case, we assigned the additional unidentified individuals (i.e., the difference between the field and lab counts) to the order level (IXOSP2). If quality notes from NEON described ticks being lost in transit, we also added the additional lost individuals to the order level. There were some cases (\textless1\%) where the field counts were greater than lab counts by more than 20\% and where the explanation was not obvious; we removed these records. We note that the majority of samples (\textasciitilde85\%) had no discrepancies between the lab and field counts; therefore, this process can be ignored by users whose analyses are not sensitive to exact counts. The tick pathogen NEON data product (\texttt{DP1.10092.001}) consists of two dataframes: \texttt{tck\_pathogen}, hereafter referred to as `pathogen data', and \texttt{tck\_pathogenqa}, hereafter referred to as `quality data'. First, we removed any samples that had flagged quality checks from the quality data and removed any samples that did not have a positive DNA quality check from the pathogen data. Although the original online protocol aimed to test 130 ticks per site per year from multiple tick species, the final sampling decision was to extensively sample only IXOSCA, AMBAME, and AMBSP, because IXOPAC and \emph{Dermacentor} nymphs were too rare to generate meaningful pathogen data. \emph{Borrelia burgdorferi} and \emph{Borrelia burgdorferi sensu lato} tests were merged, since the former was an incomplete pathogen name and refers to \emph{B. burgdorferi sensu lato} as opposed to \emph{sensu stricto} (Rudenko et al. 2011).
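A minimal sketch of these cleaning steps is shown below; the flag column \texttt{qaFlagged} and the join key \texttt{sampleID} are hypothetical names used for illustration, and the dplyr calls are a sketch rather than the exact workflow.

\begin{verbatim}
library(dplyr)

tck_pathogen_clean <- tck_pathogen %>%
  # Drop samples flagged in the quality table (hypothetical flag column `qaFlagged`).
  anti_join(filter(tck_pathogenqa, qaFlagged == "Y"), by = "sampleID") %>%
  # Merge the incomplete pathogen label into the full sensu lato name.
  mutate(testPathogenName = recode(testPathogenName,
    "Borrelia burgdorferi" = "Borrelia burgdorferi sensu lato"))
\end{verbatim}
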
Tick pathogen data are presented as a positivity rate, calculated as the number of positive tests divided by the number of tests conducted for a given pathogen on ticks collected during a given sampling event.

\hypertarget{aquatic-organisms}{%
\subsection{Aquatic Organisms}\label{aquatic-organisms}}

\hypertarget{aquatic-macroinvertebrates}{%
\subsubsection{Aquatic macroinvertebrates}\label{aquatic-macroinvertebrates}}

\textbf{NEON Sampling Design}

Aquatic macroinvertebrate sampling occurs three times per year at wadeable stream, river, and lake sites from spring through fall. Timing of sampling is site-specific and based on historical hydrological, meteorological, and phenological data, including dates of known ice cover, growing degree days, and green-up and brown-down (Cawley et al. 2016). Samplers vary by habitat and include Surber, Hess, hand corer, modified kicknet, D-frame sweep, and petite ponar samplers (Parker 2019). Stream sampling occurs throughout the 1 km permitted reach in wadeable areas of the two dominant habitat types. Lake sampling occurs with a petite ponar near buoy, inlet, and outlet sensors, and with D-frame sweeps in wadeable littoral zones. Riverine sample collections in deep waters or near instrument buoys are made with a petite ponar, and in littoral areas are made with a D-frame sweep or large-woody-debris sampler. In the field, samples are preserved in pure ethanol, and later, in the domain support facility, glycerol is added to prevent the samples from becoming brittle. Samples are shipped from the domain facility to a taxonomy lab for sorting and identification to the lowest possible taxon (e.g., genus or species), where individuals of each taxon are counted and measured to the nearest mm size class.

\textbf{Data Wrangling Decisions}

Aquatic macroinvertebrate data contained in the NEON data product \texttt{DP1.20120.001} are subsampled and identified to the lowest practical taxonomic level, typically genus, by expert taxonomists in the \texttt{inv\_taxonomyProcessed} table, measured to the nearest mm size class, and counted. Taxonomic naming has been standardized in the \texttt{inv\_taxonomyProcessed} file, according to NEON's master taxonomy (\url{https://data.neonscience.org/taxonomic-lists}), removing any synonyms. We calculated macroinvertebrate density by dividing \texttt{estimatedTotalCount} (which includes the corrections for subsampling in the taxonomy lab) by \texttt{benthicArea} from the \texttt{inv\_fieldData} table to return count per square meter of stream, lake, or river bottom (Chesney et al. 2021).

\hypertarget{microalgae-periphyton-and-phytoplankton}{%
\subsubsection{Microalgae (Periphyton and Phytoplankton)}\label{microalgae-periphyton-and-phytoplankton}}

\textbf{NEON Sampling Design}

NEON collects periphyton samples from natural surface substrata (e.g., cobble, silt, woody debris) over a 1 km reach in streams and rivers, and in the littoral zone of lakes. Various collection methods and sampler types are used, depending on the substrate (Parker 2020). In lakes and rivers, periphyton are also collected from the most dominant substratum type in three areas within the littoral (i.e., shoreline) zone. Prior to 2019, littoral zone periphyton sampling occurred in five areas. NEON collects three phytoplankton samples per sampling date using Kemmerer or Van Dorn samplers. In rivers, samples are collected near the sensor buoy and at two other deep-water points in the main channel. For lakes, phytoplankton are collected near the central sensor buoy as well as at two littoral sensors.
Where lakes and rivers are stratified, each phytoplankton sample is a composite from one surface sample, one sample from the metalimnion (i.e., middle layer), and one sample from the bottom of the euphotic zone. For non-stratified lakes and non-wadeable streams, each phytoplankton sample is a composite from one surface sample, one sample just above the bottom of the euphotic zone, and one mid-euphotic-zone sample - if the euphotic zone is \textgreater{} 5 m deep. All microalgae sampling occurs three times per year (i.e., spring, summer, and fall bouts) in the same sampling bouts as aquatic macroinvertebrates and zooplankton. In wadeable streams, which have variable habitats (e.g., riffles, runs, pools, step pools), three periphyton samples are collected per bout in the dominant habitat type (five samples were collected prior to 2019) and three per bout in the second most dominant habitat type. No two samples are collected from the same habitat unit (i.e., the same riffle). Samples are processed at the domain support facility and separated into subsamples for taxonomic analysis or for biomass measurements. Aliquots shipped to an external facility for taxonomic determination are preserved in glutaraldehyde or, prior to 2021, Lugol's iodine. Aliquots for biomass measurements are filtered onto glass-fiber filters and processed for ash-free dry mass.

\textbf{Data Wrangling Decisions}

The periphyton, seston, and phytoplankton NEON data product (\texttt{DP1.20166.001}) contains three dataframes for algae, holding taxonomic identifications, biomass, and related field data; these are hereafter referred to as \texttt{alg\_tax\_long}, \texttt{alg\_biomass}, and \texttt{alg\_field\_data}. Algae within samples are identified to the lowest possible taxonomic resolution, usually species, by contracting laboratory taxonomists. Some specimens can only be identified to the genus or even class level, depending on the condition of the specimen. Ten percent of all samples are checked by a second taxonomist, and the outcome is noted in \texttt{qcTaxonomyStatus}. Taxonomic naming has been standardized in the \texttt{alg\_tax\_long} files, according to NEON's master taxonomy, removing nomenclatural synonyms. Counts of cells or colonies are determined for each taxon in each sample and are reported either corrected for sample volume or uncorrected (the latter indicated by algalParameterUnit = `cellsperBottle'). We corrected sample units of \texttt{cellsperBottle} to density (Parker and Vance 2020). First, we summed the preservative volume and the lab's recorded sample volume for each sample (from the \texttt{alg\_biomass} file) and combined that with the \texttt{alg\_tax\_long} file using \texttt{sampleID} as a common identifier. Where samples in the \texttt{alg\_tax\_long} file were missing data in the \texttt{perBottleSampleVolume} field (measured after receiving samples at the external laboratory), we estimated the sample volume using NEON domain lab sample volumes (measured prior to shipping samples to the external laboratory). We then combined this updated file with \texttt{alg\_field\_data} to attach the related field conditions, including the benthic area sampled for each sample. Because \texttt{alg\_field\_data} only has \texttt{parentSampleID}, \texttt{parentSampleID} was used to join it to the \texttt{alg\_biomass} file's \texttt{sampleID}.
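A minimal sketch of these joins is shown below; it is not the exact package code, the volume column names (\texttt{labSampleVolume}, \texttt{preservativeVolume}) are hypothetical, and the table names follow the text above.

\begin{verbatim}
library(dplyr)

# Total sample volume = lab-recorded sample volume + preservative volume
# (column names `labSampleVolume` and `preservativeVolume` are hypothetical).
alg_volumes <- alg_biomass %>%
  mutate(totalSampleVolume = labSampleVolume + preservativeVolume)

alg_joined <- alg_tax_long %>%
  left_join(alg_volumes, by = "sampleID") %>%        # attach volumes to taxon counts
  left_join(alg_field_data, by = "parentSampleID")   # attach field data (e.g., benthic area)
\end{verbatim}
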
We then calculated cells per milliliter for each uncorrected taxon record in each sample by dividing \texttt{algalParameterValue} by the updated sample volume. Benthic sample results are further expressed per unit area (i.e., multiplied by the field sample volume and divided by the benthic area sampled, in square meters). The final abundance units are either cells/mL (phytoplankton and seston samples) or cells/m\textsuperscript{2} (benthic samples). The \texttt{sampleID}s are child records of each \texttt{parentSampleID}; a \texttt{parentSampleID} is collected as long as sampling is not impeded (e.g., by ice cover or dry conditions). In the \texttt{alg\_biomass} file, there should be only a single entry for each combination of \texttt{parentSampleID}, \texttt{sampleID}, and \texttt{analysisType}. Most often, there were two \texttt{sampleID}s per \texttt{parentSampleID}, one for ash-free dry mass (AFDM) and one for taxonomy (the two analysis types). For the creation of the observation table with standardized counts, we used only records from the \texttt{alg\_biomass} file with the \texttt{analysisType} of taxonomy. In \texttt{alg\_tax\_long}, there are multiple entries for each \texttt{sampleID}, one for each combination of \texttt{scientificName} and \texttt{algalParameter}.

\hypertarget{fish}{%
\subsubsection{Fish}\label{fish}}

\textbf{NEON Sampling Design}

Fish sampling is carried out across 19 of the NEON eco-climatic domains, occurring in a total of 23 lotic (stream) and five lentic (lake) sites. In lotic sites, up to 10 non-overlapping reaches, each 70 to 130 m long, are designated within a 1 km section of stream (Jensen et al. 2019a). These include three consistently sampled `fixed' reaches, which encompass all representative habitats found within the 1 km stretch, and seven `random' reaches that are sampled on a rotating schedule. In lentic sites, 10 pie-shaped segments are established, with each segment extending from the riparian zone into the lake center, thereby capturing both nearshore and offshore habitats (Jensen et al. 2019b). Three of the 10 segments are fixed and are surveyed twice a year; the remaining segments are random and are sampled rotationally. The spatial layouts of these sites are designed to capture spatial and temporal heterogeneity in the aquatic habitats. Lotic sampling occurs at three fixed and three random reaches per sampling bout, and there are two bouts per year - one in spring and one in fall. During each bout, the fixed reaches are sampled via a three-pass electrofishing depletion approach (Moulton II et al. 2002, Peck et al. 2006), while the random reaches are sampled with a single-pass depletion approach. Which random reaches are surveyed depends on the year, with three of the random reaches sampled every other year. All sampling occurs during daylight hours, with each sampling bout completed within five days and with a minimum two-week gap between two successive sampling bouts. The initial sampling date is determined using site-specific historical data on ice melting, water temperature (or accumulated degree days), and riparian peak greenness. The lentic sampling design is similar to that discussed above, with fixed segments sampled twice per year and random segments sampled twice per year on a rotational basis (i.e., each random segment is not sampled every year). Lentic sampling is conducted using three gear types, with backpack electrofishing and mini-fyke nets near the shoreline and gill nets in deeper waters.
Backpack electrofishing is done on a 4 m \(\times\) 25 m reach near the shoreline via a three-pass (for fixed segments) or single-pass (for random segments) electrofishing depletion approach (Moulton II et al. 2002, Peck et al. 2006). All three passes in a fixed sampling segment are completed on the same night, with ≤30 minutes between successive passes. Electrofishing begins within 30 minutes of sunset and ceases within 30 minutes of sunrise, with a maximum of five passes per sampling bout. A single gill net is also deployed within all segments being sampled, both fixed and random, for 1-2 hours in either the morning or early afternoon. Finally, a fyke (Baker et al. 1997) or mini-fyke net is deployed at each fixed or random segment, respectively. Fyke nets are positioned before sunset and recovered after sunrise on the following day. Precise start and end times for electrofishing and net deployments are documented by NEON technicians at the time of sampling. In all surveys, captured fish are identified to the lowest practical taxonomic level, and morphometrics (i.e., body mass and body length) are recorded for up to 50 individuals of each taxon before release. Relative abundance for each fish taxon is also recorded, either by direct enumeration (up to the first 50 individuals) or by estimation from bulk counts (\textgreater50 individuals), i.e., by placing fish of a given taxon into a dip net (i.e., net scoop), counting the total number of specimens in the first scoop, and then multiplying the total number of scoops of captured fish by the count from the first scoop.

\textbf{Data Wrangling Decisions}

Fish sampled via both electrofishing and trapping are identified at variable taxonomic resolutions (as fine as the subspecies level) in the field. Most identifications are made to the species or genus level by a single field technician for a given bout per site. Sampled fish are identified, measured, weighed, and then released back to the site of capture. If field technicians are unable to identify a specimen to the species level, it is identified to the finest possible taxonomic resolution or assigned a morphospecies with a coarse-resolution identification. The standard sources consulted for identification and a qualifier for identification validity are also documented in the \texttt{fsh\_perFish} table. The column \texttt{bulkFishCount} of the \texttt{fsh\_bulkCount} table records the relative abundance for each species, or for the next finest taxonomic level possible (specified in the column \texttt{scientificName}). Fish data (taxonomic identification and relative abundance) are recorded for each sampling reach in streams or each segment in lakes in each bout and documented in the \texttt{fsh\_perFish} table (Monahan et al. 2020). The column \texttt{eventID} uniquely identifies the sampling date of the year, the specific site within the domain, a reach/segment identifier, the pass number (i.e., number of electrofishing passes or number of net deployment efforts), and the survey method. The \texttt{eventID} column helps tie all fish data with stream reach/lake segment data or environmental data (e.g., water quality data) and sampling effort data (e.g., electrofishing and net set time). A \texttt{reachID} column provided in the \texttt{fsh\_perPass} table uniquely identifies surveys done per stream reach or lake segment. The \texttt{reachID} is nested within the \texttt{eventID} as well.
We used \texttt{eventID} as a nominal variable to uniquely identify different sampling events and to join the different, stacked fish data files as described below. The fish NEON data product (\texttt{DP1.20107.001}) consists of \texttt{fsh\_perPass}, \texttt{fsh\_fieldData}, \texttt{fsh\_bulkCount}, and \texttt{fsh\_perFish}, plus the complete taxon table for fish, for both stream and lake sites. To join all reach-scale data, we first joined \texttt{fsh\_perPass} with \texttt{fsh\_fieldData} and eliminated all bouts where sampling was not possible. Subsequently, we joined the reach-scale table with \texttt{fsh\_perFish} to add individual fish counts and fish measurements. Then, to add bulk counts, we joined the reach-scale table with the \texttt{fsh\_bulkCount} dataset and added \texttt{taxonRank}, which records the taxonomic resolution, to the bulk-processed table. Afterward, both the individual-level and bulk-processed datasets were appended into a single table. To include samples where no fish were captured, we filtered the \texttt{fsh\_perPass} table retaining records where target taxa (fish) were absent, joined it with \texttt{fsh\_fieldData}, and finally merged it with the table that contained both bulk-processed and individual-level data. For each finer-resolution taxon in the individual-level dataset, we set the relative abundance to one, since each row represented a single individual fish. Whenever possible, we substituted missing data by cross-referencing other data columns, omitted completely redundant data columns, and retained records with genus- and species-level taxonomic resolution. For the appended dataset, we also calculated the relative abundance for each species per sampling reach or segment at a given site. To calculate species-specific catch per unit effort (CPUE), we normalized the relative abundance by either average electrofishing time (i.e., \texttt{efTime}, \texttt{efTime2}) or trap deployment time (i.e., the difference between \texttt{netEndTime} and \texttt{netSetTime}). For trap data, we assumed the size of the traps used, water depths, number of netters, and reach lengths (a significant proportion of bouts had reach lengths missing) to be comparable across different sampling reaches and segments.

\hypertarget{zooplankton}{%
\subsubsection{Zooplankton}\label{zooplankton}}

\textbf{NEON Sampling Design}

Zooplankton samples are collected at seven NEON lake sites across four domains. Samples are collected at the buoy sensor set (the deepest location in the lake) and at the two nearshore sensor sets, using a vertical tow net for locations deeper than 4 m and a Schindler trap for locations shallower than 4 m (Parker and Roehm 2019). This results in three samples collected per sampling day. Samples are preserved with ethanol in the field and shipped from the domain facility to a taxonomy lab for sorting and identification to the lowest possible taxon (e.g., genus or species), where individuals of each taxon are counted and measured to the nearest mm size class.

\textbf{Data Wrangling Decisions}

The NEON zooplankton data product (\texttt{DP1.20219.001}) consists of dataframes for taxonomic identification and related field data (Parker and Scott 2020). Zooplankton in NEON samples are identified at contracting labs to the lowest possible taxonomic resolution, usually genus; however, some specimens can only be identified to the family (or even class) level, depending on the condition of the specimen.
Ten percent of all samples are checked by two taxonomists, and the outcome is noted in the \texttt{qcTaxonomyStatus} column. The taxonomic naming has been standardized in the \texttt{zoo\_taxonomyProcessed} table, according to NEON's master taxonomy, removing any synonyms. Density was calculated using \texttt{adjCountPerBottle} and \texttt{towsTrapsVolume} to convert count data to ``count per liter''.

\hypertarget{results-or-how-to-get-and-use-standardized-neon-organismal-data}{%
\section{Results (or how to get and use standardized NEON organismal data)}\label{results-or-how-to-get-and-use-standardized-neon-organismal-data}}

All cleaned and standardized datasets can be obtained from the R package \texttt{neonDivData} and from the EDI data repository (temporary link, which will be finalized upon acceptance: \url{https://portal-s.edirepository.org/nis/mapbrowse?scope=edi\&identifier=190\&revision=2}). Note that \texttt{neonDivData} includes both stable and provisional data released by NEON, while the data repository in EDI only includes stable datasets. If users want to change some of the decisions and wrangle the data differently, they can find the code in the R package \texttt{ecocomDP} and modify it for their own purposes. The data package \texttt{neonDivData} can be installed from GitHub. Installation instructions can be found on the GitHub webpage (\url{https://github.com/daijiang/neonDivData}). Table \ref{tab:dataSummary} shows a brief summary of all data objects. To get data for a specific taxonomic group, users can simply call the objects listed in the \texttt{R\ object} column in Table \ref{tab:dataSummary}. These data objects include cleaned (and standardized if needed) occurrence data for the taxonomic groups covered and are equivalent to the ``observation'' table of the ecocomDP data format. If environmental information was provided by NEON for a taxonomic group, it is also included in these data objects. Information such as latitude, longitude, and elevation for all taxonomic groups is saved in the \texttt{neon\_location} object of the R package, which is equivalent to the ``sampling\_location'' table of the ecocomDP data format. Information about the species scientific names of all taxonomic groups is saved in the \texttt{neon\_taxa} object, which is equivalent to the ``taxon'' table of the ecocomDP data format.

\begin{table}
\caption{\label{tab:dataSummary}Summary of data products included in this study (as of 8 April 2021).
Users can call the R objects in the \texttt{R\ object} column from the R data package \texttt{neonDivData} to get the standardized data for specific taxonomic groups.}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}[t]{llrrll}
\toprule
Taxon group & R object & N species & N sites & Start date & End date\\
\midrule
\cellcolor{gray!6}{Algae} & \cellcolor{gray!6}{data\_algae} & \cellcolor{gray!6}{1946} & \cellcolor{gray!6}{33} & \cellcolor{gray!6}{2014-07-02} & \cellcolor{gray!6}{2019-07-15}\\
Beetles & data\_beetle & 756 & 47 & 2013-07-03 & 2020-10-13\\
\cellcolor{gray!6}{Birds} & \cellcolor{gray!6}{data\_bird} & \cellcolor{gray!6}{541} & \cellcolor{gray!6}{47} & \cellcolor{gray!6}{2015-05-13} & \cellcolor{gray!6}{2020-07-20}\\
Fish & data\_fish & 147 & 28 & 2016-03-29 & 2020-12-03\\
\cellcolor{gray!6}{Herptiles} & \cellcolor{gray!6}{data\_herp\_bycatch} & \cellcolor{gray!6}{128} & \cellcolor{gray!6}{41} & \cellcolor{gray!6}{2014-04-02} & \cellcolor{gray!6}{2020-09-29}\\
Macroinvertebrates & data\_macroinvertebrate & 1330 & 34 & 2014-07-01 & 2020-08-12\\
\cellcolor{gray!6}{Mosquitoes} & \cellcolor{gray!6}{data\_mosquito} & \cellcolor{gray!6}{128} & \cellcolor{gray!6}{47} & \cellcolor{gray!6}{2014-04-09} & \cellcolor{gray!6}{2020-06-16}\\
Plants & data\_plant & 6197 & 47 & 2013-06-24 & 2020-10-23\\
\cellcolor{gray!6}{Small mammals} & \cellcolor{gray!6}{data\_small\_mammal} & \cellcolor{gray!6}{145} & \cellcolor{gray!6}{46} & \cellcolor{gray!6}{2013-06-19} & \cellcolor{gray!6}{2020-11-20}\\
Tick pathogens & data\_tick\_pathogen & 12 & 15 & 2014-04-17 & 2018-10-03\\
\cellcolor{gray!6}{Ticks} & \cellcolor{gray!6}{data\_tick} & \cellcolor{gray!6}{19} & \cellcolor{gray!6}{46} & \cellcolor{gray!6}{2014-04-02} & \cellcolor{gray!6}{2020-10-06}\\
Zooplankton & data\_zooplankton & 157 & 7 & 2014-07-02 & 2020-07-22\\
\bottomrule
\end{tabular}}
\end{table}

To demonstrate the use of the data package, we used \texttt{data\_plant} to quickly visualize the distribution of species richness of plants across all NEON sites (Fig. \ref{fig:Fig2Map}). To show how easy it is to obtain site-level species richness, we present the code used to generate the data for Fig. \ref{fig:Fig2Map} as supporting information. Figure \ref{fig:Fig2Map} shows the utility of the data package for exploring macroecological patterns at the NEON site level. One of the most well-known and studied macroecological patterns is the latitudinal biodiversity gradient, wherein sites at lower latitudes are more species-rich than sites at higher latitudes; temperature, biotic interactions, and historical biogeography are potential reasons underlying this pattern (Fischer 1960, Hillebrand 2004). Herbaceous plants of NEON generally follow this pattern. The latitudinal pattern for NEON small mammals is similar, and is best explained by increased niche space and declining similarity in body size among species at lower latitudes, rather than a direct effect of temperature (Read et al. 2018).

\begin{figure}
{\centering \includegraphics[width=0.95\linewidth]{/Users/dli30/Github/neonDivData/manuscript/figures/p_plant} }
\caption{Plant species richness mapped across NEON terrestrial sites.
The inset scatterplot shows latitude on the x-axis and species richness on the y-axis, with red points representing sites in Puerto Rico and Hawaii.}\label{fig:Fig2Map} \end{figure} In addition to allowing for quick exploration of macroecological patterns of richness at NEON sites, the data packages presented in this paper enable investigation of effects of taxonomic resolution on diversity indices since taxonomic information is preserved for observations under family level for all groups. The degree of taxonomic resolution varies for NEON taxa depending on the diversity of the group and the level of taxonomic expertise needed to identify an organism to the species level, with more diverse groups presenting a greater challenge. Beetles are one of the most diverse groups of organisms on Earth and wide-ranging geographically, making them ideal bioindicators of environmental change (Rainio and Niemelä 2003). To illustrate how the use of the beetle data package presented in this paper enables NEON data users to easily explore the effects of taxonomic resolution on community-level taxonomic diversity metrics, we calculated Jost diversity indices (Jost 2006) for beetles at the Oak Ridge National Laboratory (ORNL) NEON site for data subsetted at the genus, species, and subspecies level. To quantify biodiversity, we used Jost indices, which are essentially Hill Numbers that vary in how abundance is weighted with a parameter \emph{q}. Higher values of \emph{q} give lower weights to low-abundance species, with \emph{q} = 0 being equivalent to species richness and \emph{q} = 1 representing the effective number of species given by the Shannon entropy. These indices are plotted as rarefaction curves, which assess the sampling efficacy. When rarefaction curves asymptote they suggest that additional sampling will not capture additional taxa. Statistical methods presented by Chao et al. (2014) provide estimates of sampling efficacy beyond the observed data (i.e., extrapolated values shown by dashed lines in Fig. \ref{fig:Fig3Curve}). For the ORNL beetle data, Jost indices calculated with higher values of \emph{q} (i.e., \emph{q} \textgreater{} 0) indicated sampling has reached an asymptote in terms of capturing diversity regardless of taxonomic resolution (i.e., genus, species, subspecies). However, rarefaction curves for \emph{q} = 0, which is equivalent to species richness do not asymptote, even with extrapolation. These plots suggest that if a researcher is interested in low abundance, rare species, then the NEON beetle data stream at ORNL may need to mature with additional sample collections over time before confident inferences may be made, especially below the taxonomic resolution of genus. \begin{figure} {\centering \includegraphics[width=0.95\linewidth]{/Users/dli30/Github/neonDivData/manuscript/figures/beetle_rarefaction} } \caption{Rarefaction of beetle abundance data from collections made at the Oak Ridge National Laboratory (ORNL) National Ecological Observatory Network (NEON) site from 2014-2020 generated using the beetle data package presented in this paper and the iNEXT package in R (Hsieh et al. 2016) based on different levels of taxonomic resolution (i.e., genus, species, subspecies). 
Different colors indicate Jost Indices with differing values of q (Jost 2006).}\label{fig:Fig3Curve} \end{figure} \hypertarget{discussion-or-how-to-maintain-and-update-standardized-neon-organismal-data}{% \section{Discussion (or how to maintain and update standardized NEON organismal data)}\label{discussion-or-how-to-maintain-and-update-standardized-neon-organismal-data}} NEON organismal data hold enormous potential to understand biodiversity change across space and time (Balch et al. 2019, Jones et al. 2021). Multiple biodiversity research and education programs have used NEON data even before NEON became fully operational in May 2019 (e.g., Farrell and Carey 2018, Read et al. 2018). With the expected long-term investment to maintain NEON over the next 30 years, NEON organismal data will be an invaluable tool for understanding and tracking biodiversity change. NEON data are unique relative to data collected by other similar networks (e.g., LTER, CZO) because observation collection protocols are standardized across sites, enabling researchers to address macroscale questions in environmental science without having to synthesize disparate data sets that differ in collection methods (Jones et al. 2021). The data package presented in this paper holds great potential in making NEON data easier to use and more comparable across studies. Whereas the data collection protocols implemented by NEON staff are standardized, the decisions NEON data users make in wrangling their data after downloading NEON's open data will not necessarily be similar unless the user community adopts a community data standard, such as the ecocomDP data model. Adopting such a data model early on in the life of the observatory will ensure that results of studies using NEON data will be comparable and thus easier to synthesize. By providing a standardized and easy-to-use data package of NEON organismal data, our effort here will significantly lower the barriers to use the NEON organismal data for biodiversity research by many current and future researchers and will ensure that studies using NEON organismal data are comparable. There are some important notes about the data package we provided. First, our processes assume that NEON ensured correct identifications of species. However, since records may be identified to any level of taxonomic resolution, and IDs above the genus level may not be useful for most biodiversity projects, we removed records with such IDs for groups that are relatively easy to identify (i.e., fish, plant, small mammals) or have very few taxon IDs that are above genus level (i.e., mosquito). However, for groups that are hard to identify (i.e., algae, beetle, bird, macroinvertebrate, tick, and tick pathogen), we decided to keep all records regardless of their taxon IDs level. Such information can be useful if we are interested in questions such as species-to-genus ratio or species rarefaction curves at different taxonomic levels (e.g., Fig. \ref{fig:Fig3Curve}). Users thus need to carefully consider which level of taxon IDs they need to address their research questions. Another note regarding species names is the term `sp.' vs `spp.' across NEON organismal data collections; the term `sp.' refers to a single morphospecies whereas the term `spp.' refers to more than one morphospecies. This is an important point to consider for community ecology or biodiversity analyses because it may add uncertainty into estimates of biodiversity metrics such as species richness. 
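As a concrete illustration of these choices, the following minimal sketch (not the supporting-information code) restricts one of the data objects to species-level identifications before tallying site-level richness; the column names \texttt{siteID}, \texttt{taxon\_rank}, and \texttt{taxon\_id} are assumptions and may differ slightly from those in the released package.

\begin{verbatim}
library(dplyr)
library(neonDivData)

# Keep only records identified to species (or finer), then count unique taxa per site.
# Column names are assumed here and may differ in the released package.
beetle_richness <- data_beetle %>%
  filter(taxon_rank %in% c("species", "subspecies")) %>%
  group_by(siteID) %>%
  summarise(richness = n_distinct(taxon_id), .groups = "drop")
\end{verbatim}
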
It is also important to point out that NEON fuzzes taxonomic IDs to one higher taxonomic level to protect species of concern. For example, if a threatened Black-capped Vireo (\emph{Vireo atricapilla}) is recorded by a NEON technician, the taxonomic identification is fuzzed to \emph{Vireo} in the data. Rare, threatened, and endangered species are those listed as such by federal and/or state agencies. Second, we standardized species abundance measurements to make them comparable across different sampling events within each taxonomic group (Table \ref{tab:dataMapping}). Such standardization is critical for studying and comparing biodiversity. Finally, NEON publishes data for additional organismal groups, which were not included in this study given the complexity of the data. For example, aquatic plants (DP1.20066.001 and DP1.20072.001); benthic microbe abundances (DP1.20277.001), metagenome sequences (DP1.20279.001), marker gene sequences (DP1.20280.001), and community composition (DP1.20086.001); surface water microbe abundances (DP1.20278.001), metagenome sequences (DP1.20281.001), marker gene sequences (DP1.20282.001), and community composition (DP1.20141.001); and soil microbe biomass (DP1.10104.001), metagenome sequences (DP1.10107.001), marker gene sequences (DP1.10108.001), and community composition (DP1.10081.001) were not considered here, though future work may utilize \texttt{neonDivData} to align these datasets. Users interested in further explorations of these data products may find more information on the NEON data portal (\url{https://data.neonscience.org/}). Additionally, concurrent work on a suggested bioinformatics pipeline and how to run sensitivity analyses on user-defined parameters for NEON soil microbial data, including code and vignettes, is described in Qin et al.~in prep. All code for the Data Wrangling Decisions is available within the R package \texttt{ecocomDP} (\url{https://github.com/EDIorg/ecocomDP}). Users can modify the code if they need to make different decisions during the data wrangling process, and they can update our workflows by submitting a pull request to our GitHub repository. If researchers wish to generate their own derived organismal data sets from NEON data with slightly different decisions than the ones outlined in this paper, we recommend that they use the ecocomDP framework, contribute their workflow to the ecocomDP R package, upload the data to the EDI repository, and cite their data with the discoverable DOI given to them by EDI. Note that the ecocomDP data model was intended for community ecology analyses and may not be well suited for population-level analyses. Because \texttt{ecocomDP} is an R package to access and format datasets following the ecocomDP format, we developed the R data package \texttt{neonDivData} to host and distribute the standardized NEON organismal data derived from \texttt{ecocomDP}. A separate, dedicated data package has several advantages. First, it is ready to use and saves users the time of running the code in \texttt{ecocomDP} to download and standardize NEON data products. Second, it is easy to update the data package when new raw data products are uploaded by NEON to their data portal, and the updating process does not require any change in the \texttt{ecocomDP} package. This is ideal because \texttt{ecocomDP} provides harmonized data from other sources besides NEON.
Third, the \href{https://github.com/daijiang/neonDivData}{GitHub repository page of \texttt{neonDivData}} can serve as a discussion forum for researchers regarding the NEON data products without competing for attention on the \texttt{ecocomDP} GitHub repository page. By opening issues on the GitHub repository, users can discuss and contribute improvements to our workflow for standardizing NEON data products. Users can also discuss whether there are other data models that the NEON user community should adopt at the inception of the observatory. As the observatory moves forward, this is an important discussion for the NEON user community and NEON technical working groups to promote synthesis of NEON data with data from other efforts (e.g., LTER, CZO, Ameriflux, the International LTER, National Phenology Network, Long Term Agricultural Research Network). Note that the standardized datasets that are stable (defined by NEON as stable release) were archived at EDI, and some of the above advantages also apply to the data repository at EDI. The derived data products presented here collectively represent hundreds of hours of work by members of our team - a group that met at the NEON Science Summit in 2019 in Boulder, Colorado and consists of researchers and NEON science staff. Just as it is helpful when working with a dataset to either have collected the data or be in close correspondence with the person who collected the data, final processing decisions were greatly informed by conversations with NEON science staff and the NEON user community. Future opportunities that encourage collaborations between NEON science staff and the NEON user community will be essential to achieve the full potential of the observatory data.

\hypertarget{conclusion}{%
\section{Conclusion}\label{conclusion}}

Macrosystems ecology (sensu Heffernan et al. 2014) is at the start of an exciting new chapter, with the long-awaited buildout of NEON completed and standardized data streams from all sites in the observatory becoming publicly available online. As the research community embarks on discovering new scientific insights from NEON data, it is important that we make our analyses and all derived data as reproducible as possible to ensure that connections across studies are possible. Harmonized data sets will help in this endeavor because they naturally promote the collection of provenance as data are collated into derived products (O'Brien et al.~In review, Reichman et al. 2011). Harmonized data also make synthesis easier because efforts to clean and format data leading up to analyses do not have to be repeatedly performed by individual researchers (O'Brien et al.~In review). The data standardizing processes and derived data package presented here illustrate a potential path forward in achieving a reproducible framework for data derived from NEON organismal data for ecological analyses. This derived data package also highlights the value of collaboration between the NEON user community and NEON staff for advancing NEON-enabled science.

\hypertarget{acknowledgement}{%
\section{Acknowledgements}\label{acknowledgement}}

This work is a result of participating in the first NEON Science Summit in 2019 and an internship program through the St.~Edward's Institute for Interdisciplinary Science (i4) funded through a National Science Foundation award under Grant No.~1832282. The authors acknowledge support from the NSF Award \#1906144 to attend the 2019 NEON Science Summit.
Additionally, the authors acknowledge support from NSF DEB 1926568 to S.R., NSF DEB 1926567 to P.L.Z., NSF DEB 1926598 to M.A.J., and NSF DEB 1926341 to J.M.L. Comments from NEON staff (Katie LeVan, Dylan Monahan, Sara Paull, Dave Barnett, Sam Simkin), Margaret O'Brien, and Tad Dallas greatly improved this work. The National Ecological Observatory Network is a program sponsored by the National Science Foundation and operated under cooperative agreement by Battelle Memorial Institute. This material is based in part upon work supported by the National Science Foundation through the NEON Program.

\hypertarget{reference}{%
\section*{References}\label{reference}}
\addcontentsline{toc}{section}{References}

\hypertarget{refs}{}
\begin{cslreferences}
\leavevmode\hypertarget{ref-baker1997environmental}{}%
Baker, J. R., D. V. Peck, and D. W. Sutton. 1997. Environmental monitoring and assessment program surface waters: Field operations manual for lakes. US Environmental Protection Agency, Washington.
\leavevmode\hypertarget{ref-balch2019neon}{}%
Balch, J. K., R. Nagy, and B. S. Halpern. 2019. NEON is seeding the next revolution in ecology. Frontiers in Ecology and the Environment 18.
\leavevmode\hypertarget{ref-Barnett2019}{}%
Barnett, D. 2019. TOS protocol and procedure: DIV - plant diversity sampling. NEON.DOC.014042vK. NEON (National Ecological Observatory Network).
\leavevmode\hypertarget{ref-barnett2019plant}{}%
Barnett, D. T., P. B. Adler, B. R. Chemel, P. A. Duffy, B. J. Enquist, J. B. Grace, S. Harrison, R. K. Peet, D. S. Schimel, T. J. Stohlgren, and others. 2019. The plant diversity sampling design for the national ecological observatory network. Ecosphere 10:e02603.
\leavevmode\hypertarget{ref-bechtold2005enhanced}{}%
Bechtold, W. A., and P. L. Patterson. 2005. The enhanced forest inventory and analysis program--national sampling design and estimation procedures. USDA Forest Service, Southern Research Station.
\leavevmode\hypertarget{ref-beck2014spatial}{}%
Beck, J., M. Böller, A. Erhardt, and W. Schwanghart. 2014. Spatial bias in the gbif database and its effect on modeling species' geographic distributions. Ecological Informatics 19:10--15.
\leavevmode\hypertarget{ref-blowes2019geography}{}%
Blowes, S. A., S. R. Supp, L. H. Antão, A. Bates, H. Bruelheide, J. M. Chase, F. Moyes, A. Magurran, B. McGill, I. H. Myers-Smith, and others. 2019. The geography of biodiversity change in marine and terrestrial assemblages. Science 366:339--345.
\leavevmode\hypertarget{ref-brown2004toward}{}%
Brown, J. H., J. F. Gillooly, A. P. Allen, V. M. Savage, and G. B. West. 2004. Toward a metabolic theory of ecology. Ecology 85:1771--1789.
\leavevmode\hypertarget{ref-Cawley2016}{}%
Cawley, K. M., S. Parker, R. Utz, K. Goodman, C. Scott, M. Fitzgerald, J. Vance, B. Jensen, C. Bohall, and T. Baldwin. 2016. NEON aquatic sampling strategy. NEON.DOC.001152vA. NEON (National Ecological Observatory Network).
\leavevmode\hypertarget{ref-chao2014rarefaction}{}%
Chao, A., N. J. Gotelli, T. Hsieh, E. L. Sander, K. Ma, R. K. Colwell, and A. M. Ellison. 2014. Rarefaction and extrapolation with hill numbers: A framework for sampling and estimation in species diversity studies. Ecological monographs 84:45--67.
\leavevmode\hypertarget{ref-Chesney2021}{}%
Chesney, T., S. Parker, and C. Scott. 2021. NEON user guide to aquatic macroinvertebrate collection (dp1.20120.001). Revision b. NEON (National Ecological Observatory Network).
\leavevmode\hypertarget{ref-curtis1959vegetation}{}%
Curtis, J. T. 1959.
\end{document}
{ "alphanum_fraction": 0.7958594083, "avg_line_length": 162.0979827089, "ext": "tex", "hexsha": "34afa719e421da502f0696e933dc8f54d4038d59", "lang": "TeX", "max_forks_count": 9, "max_forks_repo_forks_event_max_datetime": "2022-01-19T15:47:23.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-05T23:18:14.000Z", "max_forks_repo_head_hexsha": "155b179182ec38c15ab9e22d985e1b9bc971da59", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "natalie-robinson/neonDivData", "max_forks_repo_path": "manuscript/ms.tex", "max_issues_count": 18, "max_issues_repo_head_hexsha": "155b179182ec38c15ab9e22d985e1b9bc971da59", "max_issues_repo_issues_event_max_datetime": "2022-01-19T22:52:28.000Z", "max_issues_repo_issues_event_min_datetime": "2020-11-11T18:05:54.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "natalie-robinson/neonDivData", "max_issues_repo_path": "manuscript/ms.tex", "max_line_length": 3196, "max_stars_count": 6, "max_stars_repo_head_hexsha": "155b179182ec38c15ab9e22d985e1b9bc971da59", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "natalie-robinson/neonDivData", "max_stars_repo_path": "manuscript/ms.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-07T03:44:24.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-01T17:14:09.000Z", "num_tokens": 28675, "size": 112496 }
\documentclass[10pt]{article}
\setlength{\textheight}{9in}
\setlength{\textwidth}{6.8in}
\setlength{\topmargin}{-0.6in}
\setlength{\oddsidemargin}{-0.2in}
\begin{document}
\title{Spectral Clustering Toolbox}
\author{Deepak Verma ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Marina Meila \\\texttt{[email protected][email protected]}}
\maketitle

\section{Introduction}
\label{sec:intro}
This toolbox contains the code written to perform various spectral clustering algorithms. The details of the code and some experiments are available in \cite{VM03}. This document is very short and the reader is encouraged to look at the directories to see the rest of the code.

\section{Using the toolbox}
\subsection{Quick Start}
\label{sec:quick}
To get up and running:
\begin{enumerate}
\item Set \textsc{SPECTRAL\_HOME} to be the directory where you unpacked the library.
\item Start MATLAB.
\item Call \texttt{init\_spectral} (sets up the path and global options).
\item \texttt{assignment=cluster\_algo(similarity,number\_of\_clusters)} gives you the desired clustering.
\item Remember that all vectors are column vectors.
\end{enumerate}

\subsection{Data Input/Output}
\label{sec:dataio}
Reading a data file (see the \texttt{data} directory for some examples):
\texttt{[similarity,cluster\_assignments,points]=read\_from\_data\_file(filePrefix,directory)} \\
Reads the data file \texttt{directory/filePrefix} (default dir=\texttt{data}) and returns the \texttt{similarity}, the \texttt{points} and the true \texttt{cluster\_assignments}. If any of the above is not defined, an empty matrix is returned.

\subsection{Spectral Algorithms}
\label{sec:algos}
The algorithms are present in the \texttt{algos} and \texttt{algos/allalgos} directories. The latter just contains files which act as convenient shortcut names for popular algorithms. Algorithm \texttt{njw} is described in \cite{NgJW01} and \texttt{mcut} is described in \cite{MeilaS00}. For details and a comparison of all the algorithms see \cite{VM03}.

\section{Experimental Framework}
\subsection{Running Experiments}
\label{sec:exp}
To run a batch of experiments together use:

\texttt{run\_single\_experiment(dataFile,cluster\_algo\_list,k\_range,sigma,iterations,outdir,plot\_points)}

This runs the experiments on \texttt{dataFile} for the algorithms in \texttt{cluster\_algo\_list}, varying the input number of clusters over the list \texttt{k\_range}. The argument \texttt{iterations} is a \emph{list} of iteration indices and is useful when there is a random element in the algorithm. \texttt{sigma} is the $\sigma$ used for the affinity matrix (\cite{NgJW01}) in case the points (see Section~\ref{sec:dataio}) are present in \texttt{dataFile}. The results of each algorithm are written to a file in \texttt{outdir} (a default value is used if omitted). If \texttt{plot\_points} is 1 then the results are displayed after each iteration for 2D points (default 0).

\subsection{Plotting graphs}
\label{sec:plot}
To plot graphs for the experiments run using \texttt{run\_single\_experiment} use:

\texttt{plot\_metric\_save(dataFile,cluster\_algo\_list,k\_range,iterations,metric,plot\_stdev,outdir)}

The arguments mean the same as above. \texttt{metric} specifies the metric used to compare the produced clustering with the true clustering. The metrics available are
\begin{itemize}
\item \texttt{vi} : Variation of Information (\cite{stat418}).
\item \texttt{ce} : Clustering Error (see \cite{VM03} for details).
\item \texttt{wi} : One-sided Wallace Index (\cite{wallas}, also see \cite{VM03}).
\end{itemize}
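As a concrete illustration, the following MATLAB snippet sketches a typical session using the functions above. It is only a sketch: the data file name is a placeholder, \texttt{cluster\_algo\_list} is assumed here to be a cell array of algorithm names such as \texttt{njw} and \texttt{mcut}, and \texttt{njw} is assumed to follow the generic \texttt{cluster\_algo(similarity,number\_of\_clusters)} signature; adjust to the conventions of your version of the toolbox.

\begin{verbatim}
% one-time setup (after setting SPECTRAL_HOME and starting MATLAB)
init_spectral;

% read an example data set from the data directory (placeholder name)
[similarity, true_assignments, points] = read_from_data_file('demo2d', 'data');

% cluster into 3 groups with the njw algorithm
assignment = njw(similarity, 3);

% run a batch of experiments: two algorithms, k = 2..4,
% sigma = 0.1, iterations 1..5, results written to 'results'
run_single_experiment('demo2d', {'njw', 'mcut'}, 2:4, 0.1, 1:5, 'results', 1);

% plot the Variation of Information w.r.t. the true clustering
plot_metric_save('demo2d', {'njw', 'mcut'}, 2:4, 1:5, 'vi', 1, 'results');
\end{verbatim}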
\section{Datasets}
\label{sec:data}
\subsection{Artificial}
\label{sec:artdata}
Some artificial datasets are provided in the \texttt{data} directory. All of them are sets of 2D points which offer various levels of difficulty to the spectral algorithms. They are modelled after \cite{NgJW01}. To plot these (or any other 2D) datasets, use \texttt{plot2Dpoints\_with\_clusters}. An interesting dataset (not in 2D) called \texttt{block-stochastic} (\cite{MeilaS00}) is also provided. It is a similarity matrix designed (\cite{VM03}) to illustrate a case where spectral methods work and linkage-based methods fail.

\subsection{Real Datasets}
\label{sec:realdata}
Coming soon....

\section{Demo}
Run \texttt{spectral\_demo} in the \texttt{demo} directory to see typical use of the library functions.

\small{ \bibliography{references} \bibliographystyle{alpha} }
\end{document}
{ "alphanum_fraction": 0.7619363395, "avg_line_length": 31.4166666667, "ext": "tex", "hexsha": "0984976f97dca6c31ae3bc316fcbc2c51edc143d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5eae71e248b35dfc025fc3825410fc2959673f67", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nejci/PRAr", "max_forks_repo_path": "Pepelka/misc/SpectraLib/doc/doc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5eae71e248b35dfc025fc3825410fc2959673f67", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nejci/PRAr", "max_issues_repo_path": "Pepelka/misc/SpectraLib/doc/doc.tex", "max_line_length": 128, "max_stars_count": 2, "max_stars_repo_head_hexsha": "5eae71e248b35dfc025fc3825410fc2959673f67", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nejci/PRAr", "max_stars_repo_path": "Pepelka/misc/SpectraLib/doc/doc.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-15T08:43:08.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-20T17:14:02.000Z", "num_tokens": 1257, "size": 4524 }
%!TEX TS-program = lualatex %!TEX encoding = UTF-8 Unicode \documentclass[11pt, addpoints]{exam} \usepackage{graphicx} \graphicspath{{/Users/goby/Pictures/teach/300/} {img/}} % set of paths to search for images \usepackage{geometry} \geometry{letterpaper, bottom=1in} %\geometry{landscape} % Activate for for rotated page geometry %\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent \usepackage{amssymb, amsmath} \usepackage{mathtools} \everymath{\displaystyle} \usepackage{fontspec} \setmainfont[Ligatures={TeX}, BoldFont={* Bold}, ItalicFont={* Italic}, BoldItalicFont={* BoldItalic}, Numbers={Proportional}]{Linux Libertine O} \setsansfont[Scale=MatchLowercase,Ligatures=TeX]{Linux Biolinum O} \setmonofont[Scale=MatchLowercase]{Inconsolata} \usepackage{microtype} \usepackage{unicode-math} \setmathfont[Scale=MatchLowercase]{Asana Math} \newfontfamily{\tablenumbers}[Numbers={Monospaced}]{Linux Libertine O} \newfontfamily{\libertinedisplay}{Linux Libertine Display O} \usepackage{booktabs} %\usepackage{tabularx} %\usepackage{longtable} %\usepackage{siunitx} \usepackage{hanging} \usepackage{array} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}p{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}p{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}p{#1}} %\usepackage{enumitem} %\usepackage{titling} %\setlength{\droptitle}{-60pt} %\posttitle{\par\end{center}} %\predate{}\postdate{} \renewcommand{\solutiontitle}{\noindent} \unframedsolutions \SolutionEmphasis{\bfseries} \pagestyle{headandfoot} \firstpageheader{BI 300: Evolution}{}{} \runningheader{}{}{\footnotesize{pg. \thepage}} \footer{}{}{} \runningheadrule %\printanswers \begin{document} \subsection*{Columbine Species and Pollinators} Most columbines are pollinated by only bumblebees, only hummingbirds or only hawkmoths. A few species have two pollinators, which is indicated in the table. Both pollinators contribute about equally to pollination success in columbines. \vspace{\baselineskip} \begin{tabular}[c]{@{}ll@{}} \toprule Species & Pollinator\tabularnewline \midrule BA & Hummingbirds / Hawkmoths\tabularnewline BR & Bumblebees\tabularnewline CA & Hummingbirds\tabularnewline CH.CHI & Hawkmoths\tabularnewline CH.NM & Hawkmoths\tabularnewline CHAP & Hawkmoths\tabularnewline COAL & Hawkmoths\tabularnewline COOC.CO & Bumblebees\tabularnewline COOC.UT & Hawkmoths\tabularnewline COOC.WY & Bumblebees\tabularnewline DE & Hummingbirds\tabularnewline EL & Hummingbirds\tabularnewline EX & Hummingbirds\tabularnewline FL & Hummingbirds\tabularnewline FO.E & Hummingbirds\tabularnewline FO.W & Hummingbirds\tabularnewline HI & Hawkmoths\tabularnewline JO & Bumblebees\tabularnewline LA & Bumblebees\tabularnewline LO.AZ & Hawkmoths\tabularnewline LO.TX & Hawkmoths\tabularnewline MI & Hummingbirds / Hawkmoths\tabularnewline PI & Hawkmoths\tabularnewline PU & Hawkmoths\tabularnewline SA & Bumblebees\tabularnewline SC & Hawkmoths\tabularnewline SH & Hummingbirds\tabularnewline SK & Hummingbirds\tabularnewline Sp. nov.\footnotemark & Hawkmoths\tabularnewline TR & Hummingbirds\tabularnewline \bottomrule \end{tabular} \footnotetext{Sp. nov. means \emph{species novum}, which designates a new species without a formal scientific name.} \end{document}
{ "alphanum_fraction": 0.7820886814, "avg_line_length": 30.8828828829, "ext": "tex", "hexsha": "9eee60c3ffd86c5006e7853853960c3e6172a817", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2721d0e2f33333ca5337ccae56508143bfa481d8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mtaylor-semo/300", "max_forks_repo_path": "handouts/5_columbine_pollinators/columbine_pollinators.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2721d0e2f33333ca5337ccae56508143bfa481d8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mtaylor-semo/300", "max_issues_repo_path": "handouts/5_columbine_pollinators/columbine_pollinators.tex", "max_line_length": 145, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2721d0e2f33333ca5337ccae56508143bfa481d8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mtaylor-semo/300", "max_stars_repo_path": "handouts/5_columbine_pollinators/columbine_pollinators.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-19T03:16:10.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-19T03:16:10.000Z", "num_tokens": 1088, "size": 3428 }
\section{Debugger Design}%
\label{section-design}

In this section we describe the design of a visualizer for the ReactiveX (Rx) family of RP libraries to answer RQ2. Given the findings of RQ1, the requirements for our visualizer are:

\begin{description}
\itemsep0em
\item[REQ1] Provide an overview of Observable flows. This overview should support practices 1 and 2, by graphically representing the relations between Observables, such that a complete image is given of all Observables and how they interact.
\item[REQ2] Provide a detailed view inside a flow. This view should support practices 3 and 4 by giving access to both data flow and life-cycle events, and should be able to show the behavior of an operator visually.
\end{description}

To meet those requirements, we propose a visualizer consisting of two parts: (1) a Data Flow Graph and (2) a Dynamic Marble Diagram. The Data Flow Graph satisfies REQ1 by providing a high-level overview that shows how different flows are created, combined and used. The Marble Diagram satisfies REQ2 by offering an in-depth look into a single selected data flow, showing its contents (in terms of values and subscriptions), and can be used to learn the behaviors and interplay of operators.

\subsection{Data Flow Graph}

\paragraph{Simplified graphs}
When running an RP program, Observables are created that depend on other Observables (their \emph{source}) and Observers are created to send their values to a defined set of Observers (their \emph{destination}). Figure~\ref{chaincreate} shows these relations in a graph. For the simplest of programs, the relations between the Observables ($ O = \{o_1, o_2, o_3\} $) and those between Observers ($ S = \{s_1, s_2, s_3\} $) share an equally shaped sub-graph after a reversal of the Observer-edges. To provide a better overview, we process the graph to merge the Observable and Observer sequences together, simplifying it in the process and resulting in a \emph{Data Flow Graph} (DFG) as in Figure~\ref{fiddlesimple}. The process is simple: we retain only the Observer-subgraph nodes, complementing them with the meta data of the corresponding Observable nodes. Higher order relations are retained, as shown in Figure~\ref{fiddlehigher}. Figure~\ref{screenshot-mergeAll}B shows the DFG in practice.

\begin{figure}[ht]
\centering
\input{images/chainsimple}
\caption{DFG of Figure~\ref{chaincreate}}%
\label{fiddlesimple}
\end{figure}

\begin{figure}[ht]
\centering
\input{images/fiddlehigher}
\caption{DFG of Figure~\ref{chainhigher}}%
\label{fiddlehigher}
\end{figure}

\paragraph{Layout}
Layout is used to add extra meaning to the graph. If multiple subscriptions on the same Observable are created, multiple flows are kept in the graph and they are bundled together in the resulting layout. This is designed to help developers find related flows. It also makes it easy to see, for example, that an Observable is reused many times, hinting at a possible performance improvement by sharing the computation (Rx has special \code{share}-operators to multicast). The layout is based on StoryFlow~\cite{liu2013storyflow}, which employs a hierarchical clustering before ordering the graph in a way that reduces crossings. Whereas StoryFlow clusters on physical character location, we cluster flows per Observable.
Furthermore, StoryFlow supports interactivity in various layout stages; we use its algorithms for \emph{straightening} and \emph{dragging} to support selecting a specific flow, which is then highlighted, straightened and positioned at the right in order to match the Marble Diagram shown for the currently highlighted flow.

\paragraph{Color}
Coloring the nodes can be used to identify the same Observable in multiple places in the graph, as Observables can be reused in different places of the stream. For example, in Figure~\ref{chainhigher} the \code{inner} Observable is reused twice, which we denote visually by applying the same color to its two occurrences in the DFG.

\subsection{Dynamic Marble Diagrams}

In contrast to the original diagrams (Section~\ref{marblediagram}), we use dynamic diagrams which update live when new events occur and are stacked to show the data in the complete flow. This allows the developer to \emph{trace a value back through the flow}, a debug operation which is impossible using a classic debugger.

Handcrafted marble diagrams can use custom shapes and colors to represent events, but for the generic debugger we use only three shapes: next-events are a green dot, errors are a black cross and completes are a vertical line, as shown in Figure~\ref{screenshot-mergeAll}C. For our generic debugger, it is infeasible to automatically decide which properties (content, shape and color) to apply to events, as the number of events and distinguishing features might be unbounded. Instead, the event values are shown upon hovering.

\subsection{Architecture}

To support the visualization, we design a debugger architecture consisting of two components: a host instrumentation and a visualizer.

The \textbf{Host instrumentation} instruments the Rx library to emit useful execution events. Depending on the language and platform, specific instrumentation is required. The output of the instrumentation is a platform- and language-independent graph like Figure~\ref{chainhigher}. By splitting the instrumentation from the visualization, the debugger can be used for the complete Rx family of libraries by only reimplementing the first component. The communication protocol for the instrumentation is shown in Table~\ref{protocol}. Note that the user never needs to use this protocol; it is internal to the debugger.

The \textbf{Visualizer} takes the output of the host instrumentation, the initial graph, and simplifies it into a Data Flow Graph. Then it lays out the Data Flow Graph and provides the debugger's user interface. By separating the visualizer, we can safely export generated graphs and visualize them post mortem, for example for documentation purposes.

The components can run in their own environments. The instrumentation must run inside the host language, while the Visualizer can use a different language and platform.
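To make the protocol concrete, the sketch below shows how a logger implementing the calls of Table~\ref{protocol} could be typed in TypeScript. The names and types are illustrative assumptions and do not correspond to RxFiddle's actual source; they merely mirror the protocol calls listed in the table.

\begin{verbatim}
// Sketch of the instrumentation protocol as a TypeScript
// interface. Identifier and metadata types are assumptions,
// not RxFiddle's code.
type EventType = "next" | "error" | "complete"

interface GraphLogger {
  // Add an Observable node with zero or more sources.
  addObservable(id: number, sourceIds: number[]): void
  // Add an Observer subscribed to an Observable, optionally
  // with an edge to its destination Observer.
  addObserver(id: number, observableId: number,
              destinationId?: number): void
  // Link an Observer to its higher order destination Observer.
  addOuterObserver(observerId: number,
                   outerDestination: number): void
  // Record a next / error / complete event on an Observer.
  addEvent(observerId: number, type: EventType,
           value?: unknown): void
  // Attach meta data, e.g. the method call that created
  // an Observable.
  addMeta(id: number, metadata: object): void
}
\end{verbatim}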
\begin{figure*} \begin{annotatedFigure} {\includegraphics[width=1.0\textwidth]{{images/screenshot.mergeAll2.crop}.png}} \annotatedFigureBox{0.039,0.7256}{0.329,0.8716}{A}{0.039,0.1256}%bl \annotatedFigureBox{0.363,0.1627}{0.553,0.8248}{B}{0.363,0.1627}%bl \annotatedFigureBox{0.568,0.0219}{0.9942,0.8247}{C}{0.568,0.0219}%bl \end{annotatedFigure} \caption{Screenshot of \href{http://rxfiddle.net/\#type=editor&code=Y29uc3Qgc291cmNlMSA9IFJ4Lk9ic2VydmFibGUKICAub2YoMSwgMiwgMywgNCkKCmNvbnN0IHNvdXJjZTIgPSBSeC5PYnNlcnZhYmxlCiAgLmNyZWF0ZShvID0+IHsKICAgIG8ubmV4dCgiYSIpCiAgICBvLm5leHQoImIiKQogICAgby5uZXh0KCJjIikKICAgIG8uZXJyb3IobmV3IEVycm9yKCkpCiAgfSkKClJ4Lk9ic2VydmFibGUKICAub2Yoc291cmNlMSwgc291cmNlMikKICAubWVyZ2VBbGwoKQogIC5za2lwKDIpCiAgLnN1YnNjcmliZSgKICAJY29uc29sZS5sb2csIAogICAgY29uc29sZS53YXJuCiAgKQ==} {RxFiddle.net}, showing the Code Editor (A), the DFG (B) and the Dynamic Marble Diagram (C)}% \label{screenshot-mergeAll} \end{figure*} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|} \hline \code{addObservable(id, sourceIds)} & Adds a Observable node, with zero or more source Observable's \\ \hline \code{addObserver(id, observableId, destinationId)} & \begin{tabular}[c]{@{}l@{}}Add a Observer, \code{observableId} denotes the Observable it subscribed to, \\ optional \code{destinationId} adds an edge to the destination Observer \end{tabular} \\ \hline \code{addOuterObserver(observerId, outerDestination)} & \begin{tabular}[c] {@{}l@{}}Create a special edge between an existing Observer and the higher order \\ destination Observer \end{tabular} \\ \hline \code{addEvent(observerId, type, optionalValue)} & \begin{tabular}[c] {@{}l@{}}Add an event to the Observer denoted by \code{observerId}, of type (next, error, complete), \\ optionally with a value (for next / error events). \end{tabular} \\ \hline \code{addMeta(id, metadata)} & Add meta data such as the method call which created an Observable. \\ \hline \end{tabular} % } \caption{Instrumentation protocol}% \label{protocol} \end{table*} \subsection{Implementation} To validate the design and to provide an implementation to the developer community we created \url{RxFiddle.net}. The RxFiddle project is a reference implementation of our reactive debugger design. Besides the visualizer, the website also contains a code editor for JavaScript code with sharing functionality, for developers to share snippets with their peers, as shown in Figure~% \ref{screenshot-mergeAll}A. In this section we will explain different parts of the implementation. For RxFiddle, we initially focused on RxJS (JavaScript). \paragraph{Instrumentation} With JavaScript being a dynamic language, we use a combination of prototype patching and Proxies% \footnote{\url{https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Proxy}} to instrument the RxJS library: the Observable and Observer prototypes are patched to return Proxies wrapping the API method calls. The instrumentation passes every method entry and method exit to the Linking-step. \paragraph{Linking} We distinguish between method calls from the different phases (Section~% \ref{nutshell}). From the assembly phase, we detect when Observables are used as target or arguments of a call or as return value, and create a graph node for each detected Observable. We add an edge between the call target \& call arguments and returned Observables, denoting the \emph{source}-relation. Also, we tag the returned Observable with the call frame information (time, method name, arguments). 
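As an illustration, the sketch below shows how an operator call could be wrapped so that the returned Observable, an edge to its source and the method name are registered in the graph; it uses the \code{GraphLogger} sketch above. The \code{idOf} helper, which maps an Observable to a node identifier, is hypothetical, and the sketch is not RxFiddle's actual implementation.

\begin{verbatim}
// Sketch only: wrap an operator method on a prototype with a
// Proxy so that every call registers the returned Observable,
// an edge to its source (the call target) and the method name.
function instrumentOperator(
  proto: any, name: string,
  logger: GraphLogger, idOf: (obs: any) => number
): void {
  const original = proto[name]
  proto[name] = new Proxy(original, {
    apply(target, thisArg, args) {
      const result = Reflect.apply(target, thisArg, args)
      logger.addObservable(idOf(result), [idOf(thisArg)])
      logger.addMeta(idOf(result), { method: name })
      return result
    }
  })
}
\end{verbatim}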
In the subscription phase, we detect calls to the \code{subscribe}-method: the destination Observers are passed as arguments, so we create the graph nodes and save the relation as an edge. In the runtime phase, we detect \code{next}-, \code{error}- and \code{complete}-calls on Observers and add these as meta data to the Observer nodes.

\vspace{-1mm}
\paragraph{Graph Loggers}
From the Linking-step the graph mutations are streamed to the environment of the visualizer, where the graph is rebuilt. Depending on the host language, a different protocol is used: RxFiddle's code editor executes the code in a Worker\footnote{\url{https://developer.mozilla.org/docs/Web/API/Worker}} and transmits events over the postMessage protocol, while RxFiddle for \NodeJS{} transmits over WebSockets. Being able to support multiple protocols extends the possible use cases, ranging from the code editor for small programs, to a \NodeJS{} plugin for server applications, to Chrome DevTool extensions\footnote{\url{https://developer.chrome.com/extensions/devtools}} for web applications.

\vspace{-1mm}
\paragraph{Visualizer}
The visualizer receives the current state in the form of a graph from the Logger. It then uses the Observers in the graph to create the DFG. To lay out the DFG using StoryFlow~\cite{liu2013storyflow}, we first rank the graph using depth-first search, remove slack~\cite{gansner1993technique} and reverse edges where necessary to create a directed acyclic graph. We then add dummy nodes to replace long edges with edges spanning a single rank. Finally, we order and align the nodes in the ranks, assigning coordinates for the visualization. It is important that the layout is fast, as it runs every time the DFG is changed.

To render the Marble Diagrams, the flow \emph{to} and \emph{from} the selected Observer is gathered by recursively traversing the graph in the direction of the edges and in the reversed direction, respectively.
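A minimal sketch of this traversal is given below, assuming a simple adjacency interface for the DFG; the names are illustrative and do not reflect RxFiddle's actual data structures.

\begin{verbatim}
// Sketch: collect all nodes on the flow through the selected
// Observer by recursively walking the DFG in both directions.
interface FlowGraph {
  targetsOf(node: string): string[]  // outgoing edges
  sourcesOf(node: string): string[]  // incoming edges
}

function gatherFlow(graph: FlowGraph,
                    selected: string): string[] {
  const visited = new Set<string>([selected])
  const walk = (node: string,
                next: (n: string) => string[]): void => {
    for (const neighbour of next(node)) {
      if (!visited.has(neighbour)) {
        visited.add(neighbour)
        walk(neighbour, next)
      }
    }
  }
  walk(selected, n => graph.targetsOf(n))  // along edges
  walk(selected, n => graph.sourcesOf(n))  // against edges
  return Array.from(visited)
}
\end{verbatim}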
{ "alphanum_fraction": 0.769668481, "avg_line_length": 51.164556962, "ext": "tex", "hexsha": "dd6a665f992cc0f09622d1b5718949c338d81d03", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1a7959348acaa329de4d50929f7683a00e664bcf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "vova1vova/RxFiddle", "max_forks_repo_path": "doc/chapters/5.design.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1a7959348acaa329de4d50929f7683a00e664bcf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "vova1vova/RxFiddle", "max_issues_repo_path": "doc/chapters/5.design.tex", "max_line_length": 461, "max_stars_count": null, "max_stars_repo_head_hexsha": "1a7959348acaa329de4d50929f7683a00e664bcf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vova1vova/RxFiddle", "max_stars_repo_path": "doc/chapters/5.design.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3127, "size": 12126 }
\documentclass[compress,pdf,mathserif]{beamer}
\usetheme{chpc}
\usepackage{tikz}
\usepackage{epstopdf}
\usepackage{ulem}
\usepackage{movie15}
\usepackage{amsmath}
\usepackage{mathabx}
\usepackage{xcolor}
\usepackage{algorithm,algorithmic}
\usepackage{pgfplots}
\title{Comparison of discrete fiber and asymptotic homogenization methods for modeling of fiber-reinforced materials deformations}
\author{\textit{Petr Zakharov}\\Petr Sivtsev}
\institute{Multiscale and high-performance computing for multiphysical problems 2019}
\date{June 24 - 25}
\renewcommand{\arraystretch}{1.5}
\begin{document}
\maketitle
\subsection{Content}
\begin{frame}
\begin{enumerate}
\item Introduction
\item Model problem
\item Approximations
\item Numerical simulations
\item Conclusion
\end{enumerate}
\end{frame}
\section{Introduction}
\subsection{Introduction}
\begin{frame}
\begin{itemize}
\item Fiber-reinforced materials are among the strongest composite materials
\item Numerical modeling of fiber-reinforced materials leads to huge grids due to the small size and large number of fibers
\item Discrete fiber method (discrete fracture method): one-dimensional fibers
\item Asymptotic homogenization method: averaged coarse problem
\end{itemize}
\end{frame}
\section{Model problem}
\subsection{Model problem}
\begin{frame}
\centering
\begin{tikzpicture}[scale=1]
\draw[gray] (0, 0) grid (4, 4);
\draw (0, 0) rectangle (4, 4);
\node at (2, -0.5) {$\Omega = \Omega_1 \cup \Omega_2$};
\foreach \i in {1,...,4} {
\foreach \j in {1,...,4} {
\foreach \k in {1,...,4} {
\draw (\i-0.75, \j-\k/4+1/8)--(\i-0.25, \j-\k/4+1/8);
}
}
}
%\draw (3.5, 1.5) circle (0.75);
\draw[->] (3.5, 1.5)--(8, 2);
\draw[gray] (6, 0) rectangle (10, 4);
\node at (8, -0.5) {$\Omega_2 = \bigcup_{i=1}^K \phi_i$};
\foreach \k in {1,...,4} {
\draw (7, \k-0.55) rectangle (9, \k-0.45);
\node at (9.5, 4.5-\k) {$\phi_{i+\k}$};
}
\end{tikzpicture}
\begin{itemize}
\item $\Omega_1$ is the main material
\item $\Omega_2$ is the fibers
\end{itemize}
\end{frame}
\subsection{Stress-strain state}
\begin{frame}
\[ \nabla \cdot \bm{\sigma} = \bm{f}, \quad \bm{x} \in \Omega, \]
\begin{itemize}
\item $\bm{\sigma}=\bm{C}\bm{\varepsilon}$ is the stress tensor
\item $\bm{C}$ is the elastic tensor
\item $\bm{\varepsilon}$ is the strain tensor
\item $\bm{f}$ is the force source
\end{itemize}
\end{frame}
\subsection{Voigt notation}
\begin{frame}
\centering
Stress and elastic tensors
\[ \bm{\sigma} = \left( \begin{matrix}\sigma_{11}\\ \sigma_{22}\\ \sigma_{12}\end{matrix} \right), \quad \bm{C} = \left( \begin{matrix} C_{1111} & C_{1122} & C_{1112} \\ C_{2211} & C_{2222} & C_{2212} \\ C_{1211} & C_{1222} & C_{1212} \end{matrix} \right) \]
Strain tensor
\[ \bm{\varepsilon} = \left( \begin{matrix}\varepsilon_{11}\\ \varepsilon_{22}\\ 2\, \varepsilon_{12}\end{matrix} \right) = \left( \begin{matrix}\frac{\partial u_1}{\partial x_1}\\ \frac{\partial u_2}{\partial x_2} \\ \frac{\partial u_2}{\partial x_1} + \frac{\partial u_1}{\partial x_2} \end{matrix} \right). \]
\end{frame}
\subsection{Lam\'e parameters}
\begin{frame}
\centering
Elastic tensor of isotropic materials
\[ \bm{C}_i = \left( \begin{matrix} \lambda_i+2\mu_i & \lambda_i & 0 \\ \lambda_i & \lambda_i+2\mu_i & 0 \\ 0 & 0 & \mu_i \end{matrix} \right), \quad \bm{x} \in \Omega_i, \quad i=1,2. \]
\[ \lambda_i = \frac{E_i\, \nu_i}{(1 + \nu_i) (1 - 2 \nu_i)}, \quad \mu_i = \frac{E_i}{2 (1 + \nu_i)}, \quad \bm{x} \in \Omega_i, \quad i=1,2.
\] \begin{itemize} \item $E_i$ is the Young modulus \item $\nu_i$ is the Poisson coefficient \end{itemize} \end{frame} \subsection{Boundary conditions} \begin{frame} \centering \begin{tikzpicture}[scale=0.75] \draw[gray] (0, 0) grid (4, 4); \draw (0, 0) rectangle (4, 4); \foreach \i in {1,...,4} { \foreach \j in {1,...,4} { \foreach \k in {1,...,4} { \draw (\i-0.75, \j-\k/4+1/8)--(\i-0.25, \j-\k/4+1/8); } \draw (-0.2, \j-\i/4+1/8)--(0.0, \j-\i/4+1/4); } } \draw[ultra thick] (0,0)--(0,4); \node at (-0.5, 2) {$\Gamma_L$}; \node at (4.5, 2) {$\Gamma_R$}; \end{tikzpicture} Dirichlet condition on the left border \[ \bm{u} = (0, 0), \quad \bm{x} \in \Gamma_L. \] Neumann condition on the right border \[ \bm{\sigma}_{\bm{n}} = \bm{g}, \quad \bm{x} \in \Gamma_R. \] $\bm{\sigma}_{\bm{n}}=\bm{\sigma}\,\bm{n}$ and $\bm{n}$ is the normal vector to the border \end{frame} \section{Approximations} \subsection{Finite element approximation} \begin{frame} \centering Bilinear form \[ a(\bm{u}, \bm{v}) = \int_{\Omega_1} \bm{C}_1 \, \bm{\varepsilon}(\bm{u}): \bm{\varepsilon}(\bm{v})\, {\rm d}\bm{x} + \int_{\Omega_2} \bm{C}_2 \, \bm{\varepsilon}(\bm{u}) : \bm{\varepsilon}(\bm{v}) \,{\rm d}\bm{x}, \] Linear form \[ L(\bm{v}) = \int_{\Omega}\bm{f}\, \bm{v}\, {\rm d}\bm{x} + \int_{\Gamma_R}\bm{g}\, \bm{v}\, {\rm d}\bm{s}, \] Trial and test function spaces \[ V = \widehat{V} = \{\bm{v} \in H^1(\Omega) : \bm{v} = (0, 0), \bm{x} \in \Gamma_L \}, \] $H^1$ is the Sobolev function space \end{frame} \subsection{Discrete fiber approximation} \begin{frame} \centering \begin{tikzpicture}[scale=0.75] \draw[gray] (5, 0) rectangle (9, 4); \foreach \k in {1,...,4} { \draw (6, \k-0.55) rectangle (8, \k-0.45); \node at (8.5, 4.5-\k) {$\phi_\k$}; } \node at (7, -1) {$\Omega_2=\bigcup_{i=1}^K \phi_i$}; \draw[->](9.5, 2)--(11.5,2); \draw[gray] (12, 0) rectangle (16, 4); \foreach \k in {1,...,4} { \draw (13, \k-0.5) -- (15, \k-0.5); \node at (15.5, 4.5-\k) {$\gamma_\k$}; } \node at (14, -1) {$\Gamma_2=\bigcup_{i=1}^K \gamma_i$}; \end{tikzpicture} \end{frame} \subsection{Discrete fiber approximation (2D)} \begin{frame} \centering Bilinear form \[ \begin{gathered} a(\bm{u}, \bm{v}) = \int_\Omega \bm{C}_1\, \bm{\varepsilon}(\bm{u}) : \bm\varepsilon(\bm{v})\, {\rm d}\bm{x}+\\ \int_{\Gamma_{2}} d\, (\lambda_2 + 2\mu_2 - \lambda_1 - 2\mu_1) (\nabla \bm{u}_{\bm{\tau}}\, {\bm{\tau}})(\nabla \bm{v}_{\bm{\tau}}\, {\bm{\tau}}) {\rm d}\bm{s}, \end{gathered} \] Linear form \[ L(\bm v) = \int_{\Omega}\bm{f}\, \bm{v}\, {\rm d}\bm{x} +\int_{\Gamma_R}\bm{g}\, \bm{v}\, {\rm{d}}\bm{x}, \] \begin{itemize} \item $d$ is thickness of fibers \item $\bm{u}_{\bm{\tau}} = \bm{u}\, \bm{\tau}$ and $\bm{\tau}$ is the tangent vector to a fiber line \item Function spaces same as in FEM \end{itemize} \end{frame} \subsection{Asymptotic homogenization approximation} \begin{frame} \centering \begin{tikzpicture}[scale=0.75] \draw[gray] (5, 0) rectangle (9, 4); \foreach \k in {1,...,4} { \draw (6, \k-0.55) rectangle (8, \k-0.45); } \node at (7, -1) {$\bm{C}_i, i=1,2$}; \draw[->](9.5, 2)--(11.5,2); \draw[gray] (12, 0) rectangle (16, 4); \node at (14, -1) {$\bm{C}^*$}; \end{tikzpicture} \centering \vspace{1em} $\bm{C}^*$ is the effective elastic tensor \end{frame} \subsection{Asymptotic homogenization approximation} \begin{frame} \centering Average of the stress tensor \[ \langle \bm{\sigma} \rangle = \langle \bm{C}\, \bm{\varepsilon} \rangle = \bm{C}^* \langle \bm{\varepsilon}\rangle, \] Average \[ \langle\psi\rangle = \frac{\int_\omega \psi \, {\rm d}\bm{x}}{\int_\omega 
\, {\rm d}\bm{x}}, \] $\omega$ is a periodic domain. \end{frame} \subsection{Periodic problems} \begin{frame} \centering $\bm{u}^k$ is the solution with $\bm{f}^k, k=1,2,3$ \begin{itemize} \item $\bm{f}^1 = -\nabla \cdot \bm{C}\, \bm{\varepsilon}((x_1, 0))$, \item $\bm{f}^2 = -\nabla \cdot \bm{C}\, \bm{\varepsilon}((0, x_2))$, \item $\bm{f}^3 = -\nabla \cdot \bm{C}\, \bm{\varepsilon}((x_2/2, x_1/2))$. \end{itemize} \vspace{1em} The effective elastic tensor \begin{itemize} \item $C^*_{ij11} = \langle \sigma_{ij}^1 \rangle, \quad ij=11, 22, 12$, \item $C^*_{ij22} = \langle \sigma_{ij}^2 \rangle, \quad ij=11, 22, 12$, \item $C^*_{ij12} = \langle \sigma_{ij}^3 \rangle, \quad ij=11, 22, 12$. \end{itemize} $\bm{\sigma}^k=\bm{C}\bm{\varepsilon}(\bm{u}^k),\quad k=1,2,3$ \end{frame} \subsection{Coarse problem} \begin{frame} \centering Bilinear form \[ a(\bm{u}, \bm{v}) = \int_\Omega \bm{C}^* \, \bm{\varepsilon}(\bm{u}) : \bm{\varepsilon}(\bm{v})\, {\rm d}\bm{x}\] Linear form \[ L(\bm{v}) = \int_\Omega \bm{f}\, \bm{v} \, {\rm d}\bm{x} + \int_{\Gamma_R} \bm{g}\, \bm{v}\, {\rm d}\bm{s}. \] We don't compute a higher order solution of the asymptotic homogenization method \end{frame} \section{Numerical simulations} \subsection{Numerical simulations} \begin{frame} \centering \begin{tikzpicture}[scale=0.75] \draw[gray] (0, 0) grid (4, 4); \draw (0, 0) rectangle (4, 4); \foreach \i in {1,...,4} { \foreach \j in {1,...,4} { \foreach \k in {1,...,4} { \draw (\i-0.75, \j-\k/4+1/8)--(\i-0.25, \j-\k/4+1/8); } } } \end{tikzpicture} \vspace{1em} \begin{itemize} \item $\Omega$ contains $n \times n$ equal subdomains $\omega$ ($n = 4$) \item Each $\omega$ contains uniformly distributed $k=K/n^2$ fibers ($k=4$) \item Fibers size $l \times d$, where $l=1/2n$, $d$ is the thickness ($l=1/8$) \item $d$ thickness correlates with grid size \item $\bm{g}=(0,-10^{-5})$ \end{itemize} \end{frame} \subsection{FEM solution} \begin{frame} \centering \includegraphics[width=0.45\linewidth]{data/ux.png} \hspace{1em} \includegraphics[width=0.45\linewidth]{data/uy.png}\\ $\hspace{2em} u_1^{fem} \hspace{13em} u_2^{fem}$ \end{frame} \subsection{DFM error} \begin{frame} \centering \includegraphics[width=0.45\linewidth]{data/edx.png} \hspace{1em} \includegraphics[width=0.45\linewidth]{data/edy.png} \\ $u_1^{dfm}-u_1^{fem} \hspace{10em} u_2^{dfm}-u_2^{fem}$ \end{frame} \subsection{AHM error} \begin{frame} \centering \includegraphics[width=0.45\linewidth]{data/eax.png} \hspace{1em} \includegraphics[width=0.45\linewidth]{data/eay.png} \\ $u_1^{ahm}-u_1^{fem} \hspace{10em} u_2^{ahm}-u_2^{fem}$ \end{frame} \subsection{Relative errors} \begin{frame} \centering DFM relative error \[ \epsilon^{dfm}_{L_\infty} = \frac{\Vert \bm{u}^{dfm} - \bm{u}^{fem} \Vert_{L_\infty}}{\Vert \bm{u}^{fem} \Vert_{L_\infty}}, \quad \epsilon^{dfm}_{L_2} = \frac{\Vert \bm{u}^{dfm} - \bm{u}^{fem} \Vert_{L_2}}{\Vert \bm{u}^{fem} \Vert_{L_2}}, \] \vspace{1em} AHM relative error \[ \epsilon^{ahm}_{L_\infty} = \frac{\Vert \bm{u}^{ahm} - \bm{u}^{fem} \Vert_{L_\infty}}{\Vert \bm{u}^{fem} \Vert_{L_\infty}}, \quad \epsilon^{ahm}_{L_2} = \frac{\Vert \bm{u}^{ahm} - \bm{u}^{fem} \Vert_{L_2}}{\Vert \bm{u}^{fem} \Vert_{L_2}}, \] \end{frame} \subsection{Number of fibers in $\omega$} \begin{frame} \centering \begin{tikzpicture}[scale=0.95] \begin{axis}[ xmode=log, ymode=log, log basis x=2, log basis y=2, xlabel={$k$}, grid=major, legend pos=south east, legend columns=2] \addplot[mark=square*,solid,red] table[y=edli]{data/k.txt}; \addplot[mark=square*,solid,blue] 
table[y=eali]{data/k.txt}; \addplot[mark=*,dashed,red] table[y=edl2]{data/k.txt}; \addplot[mark=*,dashed,blue] table[y=eal2]{data/k.txt}; \addlegendentry{$\epsilon^{dfm}_{L_\infty}$} \addlegendentry{$\epsilon^{ahm}_{L_\infty}$} \addlegendentry{$\epsilon^{dfm}_{L_2}$} \addlegendentry{$\epsilon^{ahm}_{L_2}$} \end{axis} \end{tikzpicture} \end{frame} \subsection{Thickness of fibers} \begin{frame} \centering \begin{tikzpicture}[scale=0.95] \begin{axis}[ xmode=log, ymode=log, log basis x=2, log basis y=2, xlabel={$d$}, grid=major, legend pos=south east, legend columns=2] \addplot[mark=square*,solid,red] table[y=edli]{data/d.txt}; \addplot[mark=square*,solid,blue] table[y=eali]{data/d.txt}; \addplot[mark=*,dashed,red] table[y=edl2]{data/d.txt}; \addplot[mark=*,dashed,blue] table[y=eal2]{data/d.txt}; \addlegendentry{$\epsilon^{dfm}_{L_\infty}$} \addlegendentry{$\epsilon^{ahm}_{L_\infty}$} \addlegendentry{$\epsilon^{dfm}_{L_2}$} \addlegendentry{$\epsilon^{ahm}_{L_2}$} \end{axis} \end{tikzpicture} \end{frame} \subsection{Ratio of Young modulus} \begin{frame} \centering \begin{tikzpicture}[scale=0.95] \begin{axis}[ xmode=log, ymode=log, log basis x=2, log basis y=2, xlabel={$\alpha$}, grid=major, legend pos=north west, legend columns=2] \addplot[mark=square*,solid,red] table[y=edli]{data/f.txt}; \addplot[mark=square*,solid,blue] table[y=eali]{data/f.txt}; \addplot[mark=*,dashed,red] table[y=edl2]{data/f.txt}; \addplot[mark=*,dashed,blue] table[y=eal2]{data/f.txt}; \addlegendentry{$\epsilon^{dfm}_{L_\infty}$} \addlegendentry{$\epsilon^{ahm}_{L_\infty}$} \addlegendentry{$\epsilon^{dfm}_{L_2}$} \addlegendentry{$\epsilon^{ahm}_{L_2}$} \end{axis} \end{tikzpicture} \end{frame} \subsection{Number of subdomains in one direction} \begin{frame} \centering \begin{tikzpicture}[scale=0.95] \begin{axis}[ xmode=log, ymode=log, log basis x=2, log basis y=2, xlabel={$n$}, grid=major, legend pos=north east, legend columns=2] \addplot[mark=square*,solid,red] table[y=edli]{data/n.txt}; \addplot[mark=square*,solid,blue] table[y=eali]{data/n.txt}; \addplot[mark=*,dashed,red] table[y=edl2]{data/n.txt}; \addplot[mark=*,dashed,blue] table[y=eal2]{data/n.txt}; \addlegendentry{$\epsilon^{dfm}_{L_\infty}$} \addlegendentry{$\epsilon^{ahm}_{L_\infty}$} \addlegendentry{$\epsilon^{dfm}_{L_2}$} \addlegendentry{$\epsilon^{ahm}_{L_2}$} \end{axis} \end{tikzpicture} \end{frame} \subsection{Grid step ($d=1/64, l=1/8$)} \begin{frame} \centering \begin{tikzpicture}[scale=1] \begin{axis}[ xmode=log, ymode=log, log basis x=2, log basis y=2, xlabel={$h$}, grid=major, legend style={at={(0.03,0.57)},anchor=north west}, legend columns=2] \addplot[mark=square*,solid,red] table[y=edli]{data/h.txt}; \addplot[mark=square*,solid,blue] table[y=eali]{data/h.txt}; \addplot[mark=*,dashed,red] table[y=edl2]{data/h.txt}; \addplot[mark=*,dashed,blue] table[y=eal2]{data/h.txt}; \addlegendentry{$\epsilon^{dfm}_{L_\infty}$} \addlegendentry{$\epsilon^{ahm}_{L_\infty}$} \addlegendentry{$\epsilon^{dfm}_{L_2}$} \addlegendentry{$\epsilon^{ahm}_{L_2}$} \end{axis} \end{tikzpicture} \end{frame} \section{Conclusion} \subsection{Conclusion} \begin{frame} \begin{itemize} \item DFM comparing to AHM showed better accuracy for a large ratio of Young modulus \item DFM is more convenient for thick fibers \item AHM solution is better for a large number of equal subdomains \item Using DFM we can solve on more coarse meshes \end{itemize} \end{frame} \subsection{Thank you} \begin{frame} \centering Thank you for your attention! \end{frame} \end{document}
{ "alphanum_fraction": 0.5601594588, "avg_line_length": 32.5265225933, "ext": "tex", "hexsha": "e054ffd37c45a3fc8883319d49cf6c8ed0d53376", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9e8079896fdd1011ca769f9e8f5af06d788415e2", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "petch/elasticity2019", "max_forks_repo_path": "presentation/presentation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9e8079896fdd1011ca769f9e8f5af06d788415e2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "petch/elasticity2019", "max_issues_repo_path": "presentation/presentation.tex", "max_line_length": 221, "max_stars_count": 1, "max_stars_repo_head_hexsha": "9e8079896fdd1011ca769f9e8f5af06d788415e2", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "petch/elsticity2019", "max_stars_repo_path": "presentation/presentation.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-30T15:13:31.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-30T15:13:31.000Z", "num_tokens": 6009, "size": 16556 }
%compile with XeLaTeX \documentclass[9pt,a4paper,oneside]{report} \usepackage[margin=18mm,landscape]{geometry} \usepackage{xltxtra,fontspec,xunicode} %requires XeLaTeX \setromanfont{Source Sans Pro} \setsansfont{Source Sans Pro} \setmonofont{DejaVu Sans Mono} \usepackage{fancyhdr} \pagestyle{fancy} \chead{\url{http://reqT.org/reqT-cheat-sheet.pdf}} \usepackage{hyperref} \hypersetup{colorlinks=true, linkcolor=blue, urlcolor=blue} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \definecolor{entityColor}{RGB}{0,100,200} \definecolor{attributeColor}{RGB}{0,100,50} \definecolor{relationColor}{RGB}{160,0,30} \usepackage{listings} \lstdefinestyle{reqT}{ %belowcaptionskip=1\baselineskip, breaklines=true, %showstringspaces=false, showspaces=false, %breakatwhitespace=true, basicstyle=\ttfamily\small, emph={Ent,Meta,Item,Label,Section,Term,Actor,App,Component,Domain,Module,Product,Release,Resource,Risk,Service,Stakeholder,System,User,Class,Data,Input,Member,Output,Relationship,Design,Screen,MockUp,Function,Interface,State,Event,Epic,Feature,Goal,Idea,Issue,Req,Ticket,WorkPackage,Breakpoint,Barrier,Quality,Target,Scenario,Task,Test,Story,UseCase,VariationPoint,Variant}, emphstyle=\bfseries\color{entityColor}, emph={[2]has,is,superOf,binds,deprecates,excludes,helps,hurts,impacts,implements,interactsWith,precedes,requires,relatesTo,verifies}, emphstyle={[2]\bfseries\color{relationColor}}, emph={[3]Attr,Code,Constraints,Comment,Deprecated,Example,Expectation,FileName,Gist,Image,Spec,Text,Title,Why,Benefit,Capacity,Cost,Damage,Frequency,Min,Max,Order,Prio,Probability,Profit,Value,Status}, emphstyle={[3]\bfseries\color{attributeColor}}, } \lstset{style=reqT} \usepackage{multicol} \setlength\parindent{0em} \usepackage{titlesec} \titlespacing{\section}{0pt}{5pt}{2pt} \titlespacing{\subsection}{0pt}{5pt}{2pt} \titlespacing{\subsubsection}{0pt}{5pt}{2pt} \usepackage{graphicx} %\frenchspacing \begin{document} \begin{multicols*}{4} \section*{Entity} \subsection*{General} \hangindent=1em\lstinline+Item+ An article in a collection, enumeration, or series. \hangindent=1em\lstinline+Label+ A descriptive name used to identify something. \hangindent=1em\lstinline+Meta+ A prefix used on a concept to mean beyond or about its own concept, e.g. metadata is data about data. \hangindent=1em\lstinline+Section+ A part of a (requirements) document. \hangindent=1em\lstinline+Term+ A word or group of words having a particular meaning. \subsection*{Context} \hangindent=1em\lstinline+Actor+ A human or machine that communicates with a system. \hangindent=1em\lstinline+App+ A computer program, or group of programs designed for end users, normally with a graphical user interface. Short for application. \hangindent=1em\lstinline+Component+ A composable part of a system. A reusable, interchangeable system unit or functionality. \hangindent=1em\lstinline+Domain+ An application area. A product and its surrounding entities. \hangindent=1em\lstinline+Module+ A collection of coherent functions and interfaces. \hangindent=1em\lstinline+Product+ Something offered to a market. \hangindent=1em\lstinline+Release+ A specific version of a system offered at a specific time to end users. \hangindent=1em\lstinline+Resource+ A capability of, or support for development. \hangindent=1em\lstinline+Risk+ Something negative that may happen. \hangindent=1em\lstinline+Service+ Actions performed by systems and/or humans to provide results to stakeholders. \hangindent=1em\lstinline+Stakeholder+ Someone with a stake in the system development or usage. 
\hangindent=1em\lstinline+System+ A set of interacting software and/or hardware components. \hangindent=1em\lstinline+User+ A human interacting with a system. \subsection*{Requirement} \subsubsection*{DataReq} \hangindent=1em\lstinline+Class+ An extensible template for creating objects. A set of objects with certain attributes in common. A category. \hangindent=1em\lstinline+Data+ Information stored in a system. \hangindent=1em\lstinline+Input+ Data consumed by an entity, \hangindent=1em\lstinline+Member+ An entity that is part of another entity, eg. a field in a in a class. \hangindent=1em\lstinline+Output+ Data produced by an entity, e.g. a function or a test. \hangindent=1em\lstinline+Relationship+ A specific way that entities are connected. \subsubsection*{DesignReq} \hangindent=1em\lstinline+Design+ A specific realization or high-level implementation description (of a system part). \hangindent=1em\lstinline+Screen+ A design of (a part of) a user interface. \hangindent=1em\lstinline+MockUp+ A prototype with limited functionality used to demonstrate a design idea. \subsubsection*{FunctionalReq} \hangindent=1em\lstinline+Function+ A description of how input data is mapped to output data. A capability of a system to do something specific. \hangindent=1em\lstinline+Interface+ A defined way to interact with a system. \hangindent=1em\lstinline+State+ A mode or condition of something in the domain and/or in the system. A configuration of data. \hangindent=1em\lstinline+Event+ Something that can happen in the domain and/or in the system. \subsubsection*{GeneralReq} \hangindent=1em\lstinline+Epic+ A large user story or a collection of stories. \hangindent=1em\lstinline+Feature+ A releasable characteristic of a product. A (high-level, coherent) bundle of requirements. \hangindent=1em\lstinline+Goal+ An intention of a stakeholder or desired system property. \hangindent=1em\lstinline+Idea+ A concept or thought (potentially interesting). \hangindent=1em\lstinline+Issue+ Something needed to be fixed. \hangindent=1em\lstinline+Req+ Something needed or wanted. An abstract term denoting any type of information relevant to the (specification of) intentions behind system development. Short for requirement. \hangindent=1em\lstinline+Ticket+ (Development) work awaiting to be completed. \hangindent=1em\lstinline+WorkPackage+ A collection of (development) work tasks. \subsubsection*{QualityReq} \hangindent=1em\lstinline+Breakpoint+ A point of change. An important aspect of a (non-linear) relation between quality and benefit. \hangindent=1em\lstinline+Barrier+ Something that makes it difficult to achieve a goal or a higher quality level. \hangindent=1em\lstinline+Quality+ A distinguishing characteristic or degree of goodness. \hangindent=1em\lstinline+Target+ A desired quality level or goal . \subsubsection*{ScenarioReq} \hangindent=1em\lstinline+Scenario+ A (vivid) description of a (possible future) system usage. \hangindent=1em\lstinline+Task+ A piece of work (that users do, maybe supported by a system). \hangindent=1em\lstinline+Test+ A procedure to check if requirements are met. \hangindent=1em\lstinline+Story+ A short description of what a user does or needs. Short for user story. \hangindent=1em\lstinline+UseCase+ A list of steps defining interactions between actors and a system to achieve a goal. \subsubsection*{VariabilityReq} \hangindent=1em\lstinline+VariationPoint+ An opportunity of choice among variants. 
\hangindent=1em\lstinline+Variant+ An object or system property that can be chosen from a set of options. \section*{RelationType} \hangindent=1em\lstinline+binds+ Ties a value to an option. A configuration binds a variation point. \hangindent=1em\lstinline+deprecates+ Makes outdated. An entity deprecates (supersedes) another entity. \hangindent=1em\lstinline+excludes+ Prevents a combination. An entity excludes another entity. \hangindent=1em\lstinline+has+ Expresses containment, substructure. An entity contains another entity. \hangindent=1em\lstinline+helps+ Positive influence. A goal helps to fulfil another goal. \hangindent=1em\lstinline+hurts+ Negative influence. A goal hinders another goal. \hangindent=1em\lstinline+impacts+ Some influence. A new feature impacts an existing component. \hangindent=1em\lstinline+implements+ Realisation of. A module implements a feature. \hangindent=1em\lstinline+interactsWith+ Communication. A user interacts with an interface. \hangindent=1em\lstinline+is+ Sub-typing, specialization, part of another, more general entity. \hangindent=1em\lstinline+precedes+ Temporal ordering. A feature precedes (is implemented before) another feature. \hangindent=1em\lstinline+requires+ Requested combination. An entity is required (or wished) by another entity. \hangindent=1em\lstinline+relatesTo+ General relation. An entity is related to another entity. \hangindent=1em\lstinline+superOf+ Super-typing, generalization, includes another, more specific entity. \hangindent=1em\lstinline+verifies+ Gives evidence of correctness. A test verifies the implementation of a feature. \section*{Attribute} \subsection*{StringAttribute} \hangindent=1em\lstinline+Code+ A collection of (textual) computer instructions in some programming language, e.g. Scala. Short for source code. \hangindent=1em\lstinline+Comment+ A note that explains or discusses some entity. \hangindent=1em\lstinline+Deprecated+ A description of why an entity should be avoided, often because it is superseded by another entity, as indicated by a 'deprecates' relation. \hangindent=1em\lstinline+Example+ A note that illustrates some entity by a typical instance. \hangindent=1em\lstinline+Expectation+ The required output of a test in order to be counted as passed. \hangindent=1em\lstinline+FileName+ The name of a storage of serialized, persistent data. \hangindent=1em\lstinline+Gist+ A short and simple description of an entity, e.g. a function or a test. \hangindent=1em\lstinline+Image+ (The name of) a picture of an entity. \hangindent=1em\lstinline+Spec+ A (detailed) definition of an entity. Short for specification \hangindent=1em\lstinline+Text+ A sequence of words (in natural language). \hangindent=1em\lstinline+Title+ A general or descriptive heading. \hangindent=1em\lstinline+Why+ A description of intention. Rationale. \subsection*{IntAttribute} \hangindent=1em\lstinline+Benefit+ A characterisation of a good or helpful result or effect (e.g. of a feature). \hangindent=1em\lstinline+Capacity+ The largest amount that can be held or contained (e.g. by a resource). \hangindent=1em\lstinline+Cost+ The expenditure of something, such as time or effort, necessary for the implementation of an entity. \hangindent=1em\lstinline+Damage+ A characterisation of the negative consequences if some entity (e.g. a risk) occurs. \hangindent=1em\lstinline+Frequency+ The rate of occurrence of some entity. \hangindent=1em\lstinline+Min+ The minimum estimated or assigned (relative) value. 
\hangindent=1em\lstinline+Max+ The maximum estimated or assigned (relative) value. \hangindent=1em\lstinline+Order+ The ordinal number of an entity (1st, 2nd, ...). \hangindent=1em\lstinline+Prio+ The level of importance of an entity. Short for priority. \hangindent=1em\lstinline+Probability+ The likelihood that something (e.g. a risk) occurs. \hangindent=1em\lstinline+Profit+ The gain or return of some entity, e.g. in monetary terms. \hangindent=1em\lstinline+Value+ An amount. An estimate of worth. \subsection*{StatusValueAttribute} \hangindent=1em\lstinline+Status+ A level of refinement of an entity (e.g. a feature) in the development process. \subsection*{VectorAttribute} \hangindent=1em\lstinline+Constraints+ A collection of propositions that restrict the possible values of a set of variables. \vspace{2em} \section*{Tree-like Model Structure} \includegraphics[width=6.3cm]{metamodel.pdf} \vspace{2em} \section*{Model Scripting} \subsection*{Model Construction} A Model has a body within parentheses with a comma-separated sequence of zero or more Elems. A relation links an Entity with a submodel body including a sequence of zero or more Elems. \begin{lstlisting} var m = Model( Title("example"), Feature("helloWorld") has Spec("Print hello msg."), Stakeholder("x") requires ( Req("nice") has ( Prio(10), Gist("gimme this")), Req("cool") has ( Prio(5), Gist("better have it") ) ) ) \end{lstlisting} \subsection*{Model Operations} Add element to a Model m: \begin{lstlisting} m + (Req("r") has Prio(42)) \end{lstlisting} Remove elements from a Model m: \begin{lstlisting} m - Req("nice") - Title \end{lstlisting} Collecting Int values in a Vector[Int]: \begin{lstlisting} m.collect{case Prio(i) => i} \end{lstlisting} Collecting entities in a new Model: \begin{lstlisting} m.collect{case r: Req => r}.toModel \end{lstlisting} Transforming Entity type in a new Model: \begin{lstlisting} m.transform{ case Req(id) => Feature(id) } \end{lstlisting} \subsection*{Release Constraint Solving} \begin{lstlisting}[basicstyle=\ttfamily\scriptsize] val simplePlan = Model( Stakeholder("X") has ( Prio(1), Feature("1") has Benefit(4), Feature("2") has Benefit(2), Feature("3") has Benefit(1)), Stakeholder("Y") has ( Prio(2), Feature("1") has Benefit(2), Feature("2") has Benefit(1), Feature("3") has Benefit(1)), Release("A") precedes Release("B"), Resource("dev") has ( Feature("1") has Cost(10), Feature("2") has Cost(70), Feature("3") has Cost(40), Release("A") has Capacity(100), Release("B") has Capacity(100)), Resource("test") has ( Feature("1") has Cost(40), Feature("2") has Cost(10), Feature("3") has Cost(70), Release("A") has Capacity(100), Release("B") has Capacity(100)), Feature("3") precedes Feature("1")) val problem = csp.releasePlan(simplePlan) val solution = problem.maximize(Release("A")/Benefit) val sortedSolution = solution.sortByTypes(Release, Feature, Stakeholder, Resource) \end{lstlisting} \subsection*{Model Export} \begin{lstlisting} reqT.export.toGraphVizNested(m). save("filename.dot") \end{lstlisting} Available exporters: \begin{lstlisting} toGraphVizNested toGraphVizFlat toPathTable toHtml toText toLatex toQuperSpec \end{lstlisting} \end{multicols*} \end{document}
{ "alphanum_fraction": 0.7765912003, "avg_line_length": 38.272479564, "ext": "tex", "hexsha": "d0c113d57a5c5f2a7386c551ee957f78e677e252", "lang": "TeX", "max_forks_count": 7, "max_forks_repo_forks_event_max_datetime": "2020-02-10T12:39:33.000Z", "max_forks_repo_forks_event_min_datetime": "2015-08-27T03:32:34.000Z", "max_forks_repo_head_hexsha": "8020a9e896487811870f914422c1d43fc05a838d", "max_forks_repo_licenses": [ "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause" ], "max_forks_repo_name": "reqT/reqT", "max_forks_repo_path": "doc/cheat-sheet/reqT-cheat-sheet.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "8020a9e896487811870f914422c1d43fc05a838d", "max_issues_repo_issues_event_max_datetime": "2020-09-27T18:45:25.000Z", "max_issues_repo_issues_event_min_datetime": "2015-02-05T10:28:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause" ], "max_issues_repo_name": "reqT/reqT", "max_issues_repo_path": "doc/cheat-sheet/reqT-cheat-sheet.tex", "max_line_length": 376, "max_stars_count": 12, "max_stars_repo_head_hexsha": "8020a9e896487811870f914422c1d43fc05a838d", "max_stars_repo_licenses": [ "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause" ], "max_stars_repo_name": "reqT/reqT", "max_stars_repo_path": "doc/cheat-sheet/reqT-cheat-sheet.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-21T20:32:08.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-07T09:32:10.000Z", "num_tokens": 3850, "size": 14046 }
\documentclass{beamer} \mode<presentation> { \usetheme{Berkeley} % or ... \setbeamercovered{transparent} % or whatever (possibly just delete it) } \usepackage{tikz} \usepackage{graphicx} \usepackage[english]{babel} % or whatever \usepackage[utf8]{inputenc} % or whatever \usepackage{times} \usepackage[T1]{fontenc} % Or whatever. Note that the encoding and the font should match. If T1 % does not look nice, try deleting the line with the fontenc. \title[] % (optional, use only with long paper titles) {Tools for a Reproducible Workflow} \subtitle {} \author[Christensen] % (optional, use only with lots of authors) {Garret~Christensen\inst{1}} % - Give the names in the same order as the appear in the paper. % - Use the \inst{?} command only if the authors have different % affiliation. \institute[Universities of Somewhere and Elsewhere] % (optional, but mostly needed) { \inst{1}% UC Berkeley:\\ Berkeley Initiative for Transparency in the Social Sciences\\ Berkeley Institute for Data Science\\ } % - Use the \inst command only if there are several affiliations. % - Keep it simple, no one is interested in your street address. \date[BITSS2014] % (optional, should be abbreviation of conference name) {INSP, May 2017\\ Slides available online at \url{https://github.com/BITSS/INSP2017}} % - Either use conference name or its abbreviation. % - Not really informative to the audience, more for people (including % yourself) who are reading the slides online \subject{Research Transparency} % This is only inserted into the PDF information catalog. Can be left % out. \pgfdeclareimage[height=2cm]{university-logo}{../Images/BITSSlogo.png} \logo{\pgfuseimage{university-logo}} % If you have a file called "university-logo-filename.xxx", where xxx % is a graphic format that can be processed by latex or pdflatex, % resp., then you can add a logo as follows: % \pgfdeclareimage[height=0.5cm]{university-logo}{university-logo-filename} % \logo{\pgfuseimage{university-logo}} % Delete this, if you do not want the table of contents to pop up at % the beginning of each subsection: %\AtBeginSubsection[] %{ % \begin{frame}<beamer>{Outline} % \tableofcontents[currentsection,currentsubsection] % \end{frame} %} % If you wish to uncover everything in a step-wise fashion, uncomment % the following command: \beamerdefaultoverlayspecification{<+->} \begin{document} \begin{frame} \titlepage \end{frame} % Structuring a talk is a difficult task and the following structure % may not be suitable. Here are some rules that apply for this % solution: % - Exactly two or three sections (other than the summary). % - At *most* three subsections per section. % - Talk about 30s to 2min per frame. So there should be between about % 15 and 30 frames, all told. % - A conference audience is likely to know very little of what you % are going to talk about. So *simplify*! % - In a 20min talk, getting the main ideas across is hard % enough. Leave out details, even if it means being less precise than % you think necessary. % - If you omit details that are vital to the proof/implementation, % just say so once. Everybody will be happy with that. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section {Introduction} { % all template changes are local to this group. 
\setbeamertemplate{navigation symbols}{} \begin{frame}[plain] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { \href{https://www.bitss.org/}{\includegraphics[width=\paperwidth]{../Images/bitsslogo.png}} }; \end{tikzpicture} \end{frame} } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Workflow} \begin{frame}{Workflow} ``Reproducibility is just collaboration with people you don't know, including yourself next week'' ---Philip Stark, UC Berkeley Statistics \end{frame} \begin{frame}{Workflow} \begin{itemize}[<.->] \item OSF \item Version Control \item Dynamic Documents \end{itemize} \end{frame} \begin{frame} Put your work all in one place with the Open Science Framework \href{http://osf.io}{\beamerbutton{Link}} \begin{itemize}[<.->] \item Pre-Registration \item Data \begin{itemize} \item Host \item Link to Dataverse \end{itemize} \item Version Control \item More to Come \end{itemize} \end{frame} { % all template changes are local to this group. \setbeamertemplate{navigation symbols}{} \begin{frame}[plain] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { \includegraphics[width=\paperwidth]{../Images/OSFnow.PNG} }; \end{tikzpicture} \end{frame} % all template changes are local to this group. \setbeamertemplate{navigation symbols}{} \begin{frame}[plain] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { \includegraphics[width=\paperwidth]{../Images/OSFsoon.PNG} }; \end{tikzpicture} \end{frame} % all template changes are local to this group. \setbeamertemplate{navigation symbols}{} \begin{frame}[plain, label=AEAreg] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { \includegraphics[height=\paperheight]{../Images/github-logo-transparent.JPG} }; \end{tikzpicture} \end{frame} } \begin{frame}{Dynamic Documents} Write your code and your paper in the same file so you won't lose information or make copy-and-paste mistakes. \begin{itemize}[<.->] \item Include tables by linking to a file, instead of a static image. \item Include numbers by linking to values calculated by an analysis file, instead of static numbers typed manually. \item Automatically update tables and numbers. \item Produce the entire paper with one or two clicks. \end{itemize} \end{frame} \begin{frame}{Dynamic Documents} Possible in Python, R, and to a lesser extent, Stata. \begin{itemize}[<.->] \item Jupyter---several (many?) languages \item R---use R Studio to manage projects with built-in version control, and R Markdown/knitr for publication-quality dynamic documents. \item Stata---combine with LaTeX for a two-click workflow. \item Stata---use `\href{https://github.com/haghish/MarkDoc}{markdoc}' ado for some dynamic ability. \end{itemize} \end{frame} { % all template changes are local to this group. \setbeamertemplate{navigation symbols}{} \begin{frame}[plain, label=AEAreg] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { \includegraphics[width=\paperwidth]{../Images/RStudio-Logo-Blue-Gradient.png} }; \end{tikzpicture} \end{frame} % all template changes are local to this group.
\setbeamertemplate{navigation symbols}{} \begin{frame}[plain, label=AEAreg] \begin{tikzpicture}[remember picture,overlay] \node[at=(current page.center)] { \includegraphics[height=\paperheight]{../Images/jupyter.png} }; \end{tikzpicture} \end{frame} } \begin{frame} Try them online: \begin{itemize}[<.->] \item \href{https://try.jupyter.org/}{Jupyter} \item \href{https://datascience.stackexchange.com/questions/2269/any-online-r-console}{R} \end{itemize} \end{frame} \begin{frame}{For the hardcore} \href{http://www.docker.com}{ \includegraphics[scale=0.4]{../Images/docker.png}} \end{frame} \section{Conclusion} \begin{frame}{Conclusion} OK, I'm convinced. How do I learn more? \begin{itemize}[<.->] \item Work through my demos.\href{https://github.com/BITSS/UCMerced2017}{\beamergotobutton{Link}} \item Software Carpentry's tutorials \href{http://www.software-carpentry.org/lessons}{\beamergotobutton{Link}} \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.6856178933, "avg_line_length": 30.0864661654, "ext": "tex", "hexsha": "db8170afff0b8c4b49483c1eab2d141998ac5012", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5fa730a845bd00b5960345671497767753568d79", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "projectsUW/GitWorkshop", "max_forks_repo_path": "4-Tools-Intro/Tools-Intro-Slides.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5fa730a845bd00b5960345671497767753568d79", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "projectsUW/GitWorkshop", "max_issues_repo_path": "4-Tools-Intro/Tools-Intro-Slides.tex", "max_line_length": 136, "max_stars_count": null, "max_stars_repo_head_hexsha": "5fa730a845bd00b5960345671497767753568d79", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "projectsUW/GitWorkshop", "max_stars_repo_path": "4-Tools-Intro/Tools-Intro-Slides.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2076, "size": 8003 }
\documentclass{beamer} % Preamble: encoding, theme, colortheme, title, etc. \begin{document} \frame{\titlepage} \section{Section name} \subsection{Subsection name} \begin{frame}{Summary} ... \end{frame} \appendix \begin{frame}{References} ... \end{frame} \end{document}
{ "alphanum_fraction": 0.5718390805, "avg_line_length": 20.4705882353, "ext": "tex", "hexsha": "9a3be7f16aa1f1f0399b4c23beceb46dfb65ab99", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-01-20T17:52:16.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-20T17:52:16.000Z", "max_forks_repo_head_hexsha": "f8491351b2f74884689db24bbce2aa2270fa556a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Lavton/latexLectures", "max_forks_repo_path": "2019_skoltech_ISP/02_document_creation/sec02/code/structure.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f8491351b2f74884689db24bbce2aa2270fa556a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Lavton/latexLectures", "max_issues_repo_path": "2019_skoltech_ISP/02_document_creation/sec02/code/structure.tex", "max_line_length": 56, "max_stars_count": 5, "max_stars_repo_head_hexsha": "f8491351b2f74884689db24bbce2aa2270fa556a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Lavton/latexLectures", "max_stars_repo_path": "2019_skoltech_ISP/02_document_creation/sec02/code/structure.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-24T11:30:48.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-11T08:19:44.000Z", "num_tokens": 98, "size": 348 }
%%%%%%%%%%%%%%%%%%%%%%%% % % Thesis template by Youssif Al-Nashif % % May 2020 % %%%%%%%%%%%%%%%%%%%%%%%% \section{Alternate Clustering Method with KDE} As an alternative to the popular clustering methods used in the preceding section, another clustering method was attempted, one that uses kernel density estimation. By applying a kernel density estimate (KDE) to the kernel values, we can cluster the documents into similar groups, each defined by a local maximum of the estimated density. \\ First, we take the graph kernel matrix $K$, extract a row $i$, and compute a KDE on that row using R's \texttt{density()} function. The default bandwidth will likely produce a smooth, unimodal or bimodal distribution, but that is not the goal here. The goal is to use the KDE to find clusters, with each cluster corresponding to a local maximum. Producing a KDE with few local maxima therefore yields very few clusters. If the number of clusters needs to increase, we can essentially overfit the KDE by shrinking the bandwidth parameter, creating a KDE with many more local maxima and minima.\\ %% INSERT GRAPHIC HERE. **GRAPHIC**\\ Once a KDE with a sufficient number of local maxima is obtained, where that number is determined by the user, the cluster breaks are located. If we consider the estimated KDE to be a function $k(x)$, where $x$ is a kernel value and $k(x)$ is the estimated density at $x$, then the break points are the local minima of $k(x)$, i.e.\ the values of $x$ where $k'(x) = 0$ and $k'(x)$ changes sign from negative to positive; in practice these can be located from sign changes in the finite differences of the estimated density.
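As a rough sketch, the procedure above can be expressed in a few lines of R. The snippet below assumes that the graph kernel matrix is available as a numeric matrix \texttt{K}, that \texttt{i} indexes the row under study, and that the bandwidth of $0.05$ is purely illustrative; it would have to be tuned until the desired number of local maxima appears.
\begin{verbatim}
k_row <- K[i, ]                      # kernel values for document i
d     <- density(k_row, bw = 0.05)   # deliberately overfitted KDE

# Local maxima/minima from sign changes in the first differences of d$y.
s      <- diff(sign(diff(d$y)))
maxima <- d$x[which(s == -2) + 1]    # cluster centres (local maxima)
minima <- d$x[which(s ==  2) + 1]    # break points (local minima)

# Assign each document to the interval between consecutive break points.
clusters <- cut(k_row, breaks = c(-Inf, minima, Inf), labels = FALSE)
\end{verbatim}
The number of clusters obtained for a given bandwidth is one more than the number of break points, so the bandwidth can be decreased (or increased) until \texttt{length(minima) + 1} matches the number of clusters the user is after.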
{ "alphanum_fraction": 0.7419354839, "avg_line_length": 76.6842105263, "ext": "tex", "hexsha": "058412ce1e810260e58c8a044aedf8d4a1292e2b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Levi-Nicklas/GraphDocNLP", "max_forks_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/04_KDE_Cluster.tex", "max_issues_count": 7, "max_issues_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff", "max_issues_repo_issues_event_max_datetime": "2021-02-25T14:18:51.000Z", "max_issues_repo_issues_event_min_datetime": "2021-02-18T16:07:14.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Levi-Nicklas/GraphDocNLP", "max_issues_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/04_KDE_Cluster.tex", "max_line_length": 624, "max_stars_count": 1, "max_stars_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Levi-Nicklas/GraphDocNLP", "max_stars_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/04_KDE_Cluster.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-27T02:08:34.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-27T02:08:34.000Z", "num_tokens": 333, "size": 1457 }
\documentclass[smallextended]{svjour3} %\usepackage{wfonts} \usepackage{fullpage,epigraph} \usepackage{color} \usepackage[parfill]{parskip} \usepackage[british]{babel} \usepackage[bookmarks=false,colorlinks=true, citecolor=blue]{hyperref} \usepackage{cite} %\usepackage{amsfonts,amsmath,amssymb,amsthm} \usepackage{amsfonts,amsmath,amssymb} \usepackage[ruled, vlined, linesnumbered, nofillcomment]{algorithm2e} \providecommand{\DontPrintSemicolon}{\dontprintsemicolon} %\usepackage{algorithm} %\usepackage{algorithmic} %\usepackage{algpseudocode} %\newtheorem{definition}{Definition} \usepackage{numprint} \usepackage{textcomp} \usepackage{xfrac} \usepackage{url} \usepackage{verbatim} \usepackage{import} %\usepackage{rotating} \usepackage{pdflscape} \usepackage{multirow} \usepackage{tablefootnote} \usepackage{threeparttable} %\usepackage{bm} % debug %\usepackage{showframe} %\usepackage[showframe]{geometry} %\usepackage[export]{adjustbox} %% new tex versions no longer need this %\ifx\pdfoutput\undefined %% we are running LaTeX, not pdflatex %\usepackage{graphicx} %\else %% we are running pdflatex, so convert .eps files to .pdf %% pdflatex --shell-escape filename %\usepackage[pdftex]{graphicx} %\usepackage{epstopdf} %\fi %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % FOR EASY SWITCH BETWEEN 'WORKING' AND 'RELEASE' VERSION COMMENT THESE COMMANDS % THE SECONDS OPTION 'REMOVES' ALL COMMENTS FROM THE PDF % DO NOT INLINE COMMENTS --- PUT THEM ON A SEPARATE LINE % \newcommand{\todo}[1]{\textcolor{red}{[TODO: #1]}} \newcommand{\marvin}[1]{\textcolor{blue}{[#1]}} \newcommand{\arno}[1]{\textcolor{green}{[#1]}} % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % USING THESE COMMANDS MAKES IT TRIVIAL TO CHANGE FROM ONE STYLE TO ANOTHER % EG. 
CHANGE \dataset{\emph} TO \dataset{\textbf} TO PRINT NAMES IN BOLDFACE % qoutes, emphasis, and other default renderings \newcommand{\attribute}{\emph} \newcommand{\dataset}{\emph} \newcommand{\subgroup}[1]{\mbox{`$#1$'}} \newcommand{\qm}{\emph} \newcommand{\op}[1]{`$#1$'} % symbols - variables \newcommand{\ds}[1]{\mathcal{D}_{#1}} \newcommand{\extension}[1]{\mathcal{E}_{#1}} \newcommand{\intension}{I} \newcommand{\absz}{$\lvert$$z$-score$\rvert$} \newcommand{\hs}{H} \newcommand{\bw}{W} % beam-width \newcommand{\bb}{C} % best-width (number of best candidates, based on oi per ai) \newcommand{\dm}{D} % max-depth % symbols - dimensions \newcommand{\dimension}{\emph} \newcommand{\parameter}{\emph} \newcommand{\dyndis}{\parameter{dynamic discretisation}} \newcommand{\predis}{\parameter{pre-discretisation}} \newcommand{\binaries}{\parameter{binaries}} \newcommand{\nominal}{\parameter{nominal}} \newcommand{\fine}{\parameter{fine}} \newcommand{\coarse}{\parameter{coarse}} \newcommand{\all}{\parameter{all}} \newcommand{\best}{\parameter{best}} % strategy names - these are used in the imported tex tables \newcommand{\dbfa}[1]{1-dbfa} \newcommand{\dbfb}[1]{2-dbfb} \newcommand{\dbca}[1]{\ifthenelse{\equal{#1}{0}}{3-dbca}{3-dbca\textsuperscript{#1}}} \newcommand{\dbcb}[1]{\ifthenelse{\equal{#1}{0}}{4-dbcb}{4-dbcb\textsuperscript{#1}}} \newcommand{\dnca}[1]{\ifthenelse{\equal{#1}{0}}{7-dnca}{7-dnca\textsuperscript{#1}}} \newcommand{\dncb}[1]{\ifthenelse{\equal{#1}{0}}{8-dncb}{8-dncb\textsuperscript{#1}}} \newcommand{\pbfa}[1]{\ifthenelse{\equal{#1}{0}}{9-pbfa}{9-pbfa\textsuperscript{#1}}} \newcommand{\pbfb}[1]{\ifthenelse{\equal{#1}{0}}{10-pbfb}{10-pbfb\textsuperscript{#1}}} \newcommand{\pnca}[1]{\ifthenelse{\equal{#1}{0}}{15-pnca}{15-pnca\textsuperscript{#1}}} \newcommand{\dnfb}[1]{17-dnfb} % could change this to 17-mamp % old commands \newcommand{\sd}{SD} \newcommand{\emm}{EMM} \newcommand{\eh}{\textsc{EqualHeightBinning}} % MW-U tables commands \newcommand{\lmix}{$<$} % left wins overall, but mixed results, right wins some \newcommand{\lall}{$\vartriangleleft$} % left wins all \newcommand{\lasi}{$\blacktriangleleft$} % left wins all, and all results are significant \newcommand{\draw}{$=$} % left wins as often as right \newcommand{\same}{$=$} % same score for all results ($\equiv$) \newcommand{\rmix}{$>$} % right wins overall, but mixed results, left wins some \newcommand{\rall}{$\vartriangleright$} % right wins all \newcommand{\rasi}{$\blacktriangleright$} % right wins all, and all results are significant \newcommand{\rb}[2]{#1 vs. #2} % ignore, redefined in MW-U table files %\newcommand{}{} % % KEEP TRACK OF USED SYMBOLS % % generally, % \mathcal{x} is used for sets % single capital character is used for size % % b = bin % B = number of bins % \mathcal{B} = set of bin boundaries --- contains at most B-1 boundaries % N = data size % \mathcal{D} = dataset (used to be \Omega) % \vec{r} = record --- !!! vec{r} = record, r = refinement !!! 
% n = subgroup size % n^c = complement size % \mathbb{A} = unrestricted domain of description attributes (descriptors) % \mathcal{A} = set of unique values of a_i % A = number of description attributes --- equal to m % a_i = description attribute % a_i^j = j-th attribute value of a_i % E = subgroup extension --- this is a set, should use \mathcal{E} % \vec{a}_i^{E_j} = vector of values of a_i covered by E_j % \mathcal{A}_i^{E_j} = set of unique values of a_i covered by E_j % s = subgroup % \mathcal{S} = set of subgroups (candidates in Cortana speak) % \mathcal{O} = set of all numeric operators --- not used, would clash with Big O % O = number of operators % o = operator from description language % d = search depth % p = parameter (search constraint) % \mathcal{P} = set of search constraints % \mathcal{R} = set of Refinements % R = number of refinements --- not used? % r = refinement % \mathcal{F} = result set % F = size of result set % v_y = value in \mathcal{A}_i^{E_j} (also v_1 and v_2) % H = size of set of hypothesis % T = number of best-scoring hypothesis --- not used, always 1 in Cortana % T = cardinality of target % W = beam width % C = number of best (sum_{i=1}^m (number of operators for a_i)) % D = maximum search depth %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % k = top-k % U = Mann-Whitney U % \Sigma = rank sum for Mann Whitney U --- !!! \mathcal{R} = set of Refinements, R would denote its size !!! % z = Mann-Whitney U z-score % mu_U = mean for Mann-Whitney U % \sigma_U = standard deviation for Mann-Whitney U % % %%% UNUSED %%% % I = subgroup intension % \mathcal{I} description language % d=1: $\bigcup\limits_{i=1}^A (a_i \times \mathbb{O} \times \mathbb{A_i}) % \mathcal{C} = set of all basic conjuncts, C = size of set of ... % d=2: $bigcup\limits_{i_1}^C (bigcup\limits{j=1}^C(c_i \wedge c_j)) % \mathcal{C^+} = all basic conjuncts + \emptyset % % \mathbb{R} = set of reals % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % SPELLINGS % % dataset, not data set % result set, not resultset % Subgroup Discovery once, then \sd{} % Exceptional Model Mining once, then \emm{} % Section, not Sec. % Figure, not Fig. % Equation, not Eq. % Algorithm, not Alg. % 'i.e. ', not 'i.e.,' (UK vs. US) % 'e.g. ', not 'e.g.,' (UK vs. US) % rule of thumb, not rule-of-thumb % bin boundaries, not bin-boundaries % multidimensional, not multi-dimensional % description language/generator/..., not pattern language/generator/... % cut points, not cutpoints % modelling, not modeling % controlling, not controling % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % MISC % % point types for plots % top-1 (*), 10 (x), 100 (+), all (o) % all (*), best( x), bins (+), bestbins (o) % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title{A Systematic Analysis of Strategies for Dealing with Numeric Data in Subgroup Discovery} \author{Marvin Meeng, Arno Knobbe} \date{} \maketitle \begin{abstract} Subgroup Discovery algorithms make various choices when it comes to dealing with numeric description attributes. One such choice concerns, the operators of, the description language. For example, one can use any of the following operators when dealing with numeric data: \op{<}, \op{>}, \op{\leq}, \op{\geq} and \op{\in}. The first four would create subgroup descriptions using half-intervals like \subgroup{attribute \geq boundary}. 
The last would create subgroup descriptions involving bounded intervals like \subgroup{attribute \in [x;y]}. Next to deciding what operators to use, one also needs to decide how to deal with the numeric values in the domain of the description attributes. One choice would be to consider every unique value in the domain of the description attribute under consideration. For the first four operators listed above, for a single attribute, this would potentially lead to $N$ different descriptions per operator, where $N$ is the size of the dataset. For the \op{\in} operator, the number of different descriptions is quadratic in $N$. Typically, when dealing with numeric description attributes, Subgroup Discovery algorithms do not consider every unique value in the domain. The search space often becomes prohibitively large when all such values are allowed, especially when the search depth is larger than $1$, i.e.\@ allowing for combinations of descriptions. Therefore, most algorithms build descriptions using only a selected number of values from a numeric description attribute. Just how to make this selection is the focus of this work. Various strategies are compared and experimentally evaluated. One set of experiments will compare between dynamic and pre-experiment discretisation of numeric attributes. Another set of experiments investigates the effect of a number of strategies that control which subgroup descriptions are formed for numeric description attributes. \todo{this is an old abstract, the setup of the paper and experiments changed considerably} \end{abstract} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} \label{section:introduction} In this work, a number of dimensions are identified over which Subgroup Discovery (\sd{}) algorithms can vary in their treatment of numeric attributes. Although one could examine the effect of choices made for each of these dimensions in isolation, such an analysis would miss interactions between them. So, besides analysing the effects of various parameter choices within individual dimensions, this work also offers a systematic analysis of the combined effects of these choices over all examined dimensions, as is relevant in real-world analyses. Table \ref{table:dimensions} gives an overview of the different strategies that are examined in this work. Each line in the table is considered to be a separate strategy, and differs from all others in at least one parameter choice for one of the dimensions discussed below. The dimensions are \dimension{discretisation moment}, \dimension{interval type}, \dimension{granularity} and \dimension{selection strategy}, and the options for each are listed in the respective columns. Of these dimensions, the first three are related to \emph{hypothesis generation}, whereas the last relates to \emph{hypothesis selection}. Strategies are referred to by a combination of a number and an acronym formed from the first character of their values for the aforementioned dimensions. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Dimensions} \label{section:dimensions} In \sd{}, subgroup descriptions are generated that select a subset of the data, and these so-called subgroups are then evaluated using some quality measure of choice. So, one could consider subgroup descriptions to be hypotheses, that are subsequently tested, to determine if they are valid and useful given a number of further search constraints. 
When dealing with numeric description attributes, how to generate these hypotheses is open to a number of choices that can be set by the analyst. Below, the dimensions related to hypothesis generation that are examined in this work are described in more detail. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \paragraph{Discretisation Moment} \sd{} deals with numeric data by setting conditions on the included values, typically by requiring values to be above or below a certain threshold. In realistic, non-trivial data, the continuous domains will be extensive, and a subset of reasonable cut points will need to be selected in order for the \sd{} process to remain tractable (discretisation). When selecting cut points, there is the choice of selecting and fixing the cut points \emph{prior} to analysis (\predis{}) or dynamically determining suitable cut points whenever a numeric attribute is encountered \emph{during} the search process (\dyndis{}). The dimension that distinguishes these two options is referred to as \dimension{discretisation moment}. \paragraph{Interval Type} The term \dimension{interval type} refers to the way in which a set of cut points is treated to produce candidate subgroups. In the context of discretisation, it is customary to take $B{-}1$ cut points, and create a single nominal feature to represent in which of the $B$ intervals (bins) the numeric value falls. Subgroups are then formed by setting the derived feature to one of these values. Although this approach is common, it is definitely not the only option, and in fact has fundamental limitations (detailed in Sections \ref{section:interval-type} and \ref{section:granularity}). The alternative is to (conceptually) translate the $B{-}1$ cut points into $B{-}1$ binary features, each corresponding to a binary split on the respective cut point. The two values for the interval type, \nominal{} and \binaries{}, now correspond to the two approaches described here. \paragraph{Granularity} The term \dimension{granularity} is used to describe how (many) hypotheses are generated given a numeric input domain, and the possible choices are \fine{} and \coarse{}. In case of \fine{}, every value from the input domain is used to generate a hypothesis. For \coarse{}, only a selected number of values from the input domain is used to generate a hypothesis. For the latter process, discretisation or `binning' techniques can be used. Here the main advantages and drawbacks result from the trade-off between exploration precision and execution time. \paragraph{Selection Strategy} The dimensions above all relate to hypothesis generation, influencing which candidate subgroups are generated and evaluated by the pattern generator. Besides these \emph{hypothesis generation} dimensions, there will also be a \emph{hypothesis selection} dimension to an \sd{} algorithm. Hypothesis selection refers to the process used to include generated hypotheses into the final result set and/or use them at a later stage in the search process. On the set of all valid generated hypotheses, that is, those that did not violate any search constraints, either of two selection strategies can be applied: \all{} and \best{}. The \all{} strategy does not filter out any of the generated hypotheses, meaning that all valid hypotheses will be included in the result set, and/or will be available for the remainder of the search process.
In contrast, the \best{} strategy allows only the best of all valid hypotheses for a given numeric attribute to continue. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Dimensions Table} \label{section:dimensions-table} Table \ref{table:dimensions} presents an overview of the different strategies that are examined in this work. Each line in the table is considered to be a separate strategy, and differs from all others in at least one parameter choice for one of the dimensions introduced above. Since the choice for each of the four dimensions described above is binary, this leaves a combined total of sixteen strategies. To these sixteen, one extra strategy is added, as it does not properly fit within the framework of dimensions described above. However, a number of strategies are considered not useful. For example, in a \nominal{} setting, consecutive bounded intervals are created, and when this is done for each unique value in the input domain, as per the \fine{} setting, this would result in single-value intervals. In general, this would lead to uninformative subgroup descriptions that are hard to generalise and that single out only a very limited fraction of the data. As this is unaffected by the parameter settings for both the dimensions \dimension{discretisation moment} and \dimension{selection strategy}, all four strategies combining \nominal{} and \fine{} are omitted from the experiments below. \import{./res/}{table-dimensions.tex} Also not included are strategies involving the combination \predis{}, \binaries{} and \coarse{}. The reasoning here is that the static discretisation of \predis{} reduces the cardinality of the data, and therefore the number of possible cut points, before the search process commences. If the cardinality is then further reduced in a \coarse{} setting, the result is a reduction that could have been established by a coarser discretisation to begin with. The final omission is the strategy combining \predis{}, \nominal{}, \coarse{} and \best{}. There are no fundamental issues preventing an implementation of this strategy. However, to make clear why this strategy is not implemented, it is worth considering the \all{} variant first. The origin for this strategy can be found in the work by Atzm\"{u}ller et al.\@ \cite{atzmueller:2012:vikamine}. The proposed algorithm takes a numeric attribute as input, and considering its domain and a discretisation algorithm, outputs a number of bounded intervals. The numeric values in the original numeric attribute are then replaced by the interval to which they belong. This essentially transforms a numeric attribute into a nominal one, at least from the perspective of a search algorithm, where each interval is now treated as a nominal class label. Customarily, no filtering is applied to subgroup descriptions generated from the same nominal attribute, and this holds also for the aforementioned algorithm. % NOTE: dssd does not filter, it just selects using a different concept of best (not based solely on score, but on diversity also) % CHECK: Lavrac/Flach CN2-SD could be an exception that does have a selection strategy Since the original introduction of this strategy omits a reductionist \dimension{selection strategy}, all nominal labels, bounded intervals in this case, are used to form subgroup descriptions that are included in the result set or feature as candidates later in the search process. Consequently, no \best{} variant of this strategy is included.
Finally, an extra strategy is added. It has its origin in the work of Mampaey et al.\@ \cite{mampaey:2012,mampaey:2015}, and is relevant only in the context of classification target settings. For nominal description attributes, this strategy creates nominal value sets, containing those class labels that maximise the quality score for the classification target (see Section \ref{section:pattern-language} for an example). For numeric description attributes, intervals are created. This strategy could be considered `optimal', at least with respect to a search depth of 1, and is therefore included in the experiments. But, as it is only relevant in a classification target setting, and deviates from the other strategies, it is mentioned separately. %The work of Mampaey et al.\@ \cite{mampaey:2012,mampaey:2015} describes the use of their \textsc{BestInterval} algorithm, for both classification and regression tasks. %This work only considers the former, as no implementation of the algorithm is available for the latter. % NOTE this strategy it is NOT dynamic, the interval is computed only once, and remains fixed after that, also it could be established before mining starts %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Preliminaries} \label{section:preliminaries} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Data} \label{section:data} Throughout this work, a dataset $\ds{}$ is assumed to be a bag of $N$ \emph{records} $\vec{r} \in \ds{}$ of the form: \begin{displaymath} \vec{r} = \left(a_1, \ldots, a_m, t \right), \end{displaymath} with $m$ a positive integer from $\mathbb{Z}^+$. Here $a_1, \ldots, a_m$ constitute the \emph{descriptive attributes} or \emph{descriptors} of $\vec{r}$, and $t$ is the \emph{target attribute} or \emph{target} of $\vec{r}$. In general, descriptors can be taken from an unrestricted domain $\mathbb{A}$, and $\mathcal{A}_i$ denotes the (unique values in the) domain connected to $a_i$. However, domain restrictions might be imposed in (subsets of) the experiments presented in Section \ref{section:experimental-setup} below. In roughly half of the experiments a nominal attribute serves as target, and of its domain of class labels, one is used as target value. Targets in the remaining experiments are formed by continuous numeric attributes, taken from $\mathbb{R}$, no target value is used in such a setting. Before moving to the definition of subgroups, it is stressed that a distinction should be made between the intensional and extensional part of a subgroup \cite{meeng:2014}. Informally, the intensional part of a subgroup is its description, and the extension consists of the actual records that make up the subgroup. \sd{} is about descriptions, and not unlike in redescription mining \cite{galbrun:2018}, different descriptions for the exact same extension can all represent new knowledge and valuable insights. More formally, \emph{descriptions} are functions $\intension{}: \mathbb{A}^m \to \left\{0,1\right\}$. A description $\intension{}$ \emph{covers} a record $\vec{r}^{\,i}$ if and only if $\intension{} \left(a_1^i, \ldots ,a_m^i\right) = 1$. Typically, the \emph{pattern language} $\mathcal{\intension{}}$ in \sd{} is that of (conjunctions of) conditions on descriptive attributes of the general form \subgroup{a_i\ operator\ value}. Examples would include \subgroup{Smokes = false} and \subgroup{Color = brown \wedge Length \geq 1.76}. 
Often a maximum number of conjuncts in such descriptions is enforced through a parameter of the \sd{} algorithm called search depth, designated by $d$, where, for instance, a search depth of $2$ would allow for descriptions of at most two conjuncts. In a sense, one could say a subgroup description \emph{precedes} a subgroup extension, in that it is through the description that a subset of records is selected from the dataset. Definition \ref{definition:extension} expresses this relation. \begin{definition}{(Extension)} \label{definition:extension} An extension $\extension{\intension{}}$ corresponding to a description $\intension{}$ is the bag of records $\extension{\intension{}} \subseteq \ds{}$ that $\intension{}$ covers: \begin{displaymath} \extension{\intension{}}=\left\{\vec{r}^{\,i} \in \ds{}\ \middle|\ \intension{}\left(a_1^i, \ldots, a_m^i\right) = 1\right\}. \end{displaymath} \end{definition} From now on the subscript $\intension{}$ is omitted if no confusion can arise, and a subgroup extension is referred to simply as $\extension{}$. Further, $\vec{a}_i^{\extension{}}$ denotes the selection of (indexed) values, or vector, of $a_i$ included in $\extension{}$, and $\mathcal{A}_i^{\extension{}}$ denotes the set of unique values in this selection. Sizes of $\vec{a}_i^{\extension{}}$ and $\mathcal{A}_i^{\extension{}}$ are equal if and only if $\vec{a}_i^{\extension{}}$ does not contain any duplicate values. Analogously, $\mathcal{A}_i^{\extension{}} \subseteq \mathcal{A}_i$, where, if the size of $\mathcal{A}_i$ is equal to $N$, equality holds only when $\extension{}=\ds{}$. The explicit differentiation of the intensional and extensional facets of a subgroup is required in some \sd{} algorithms \cite{leeuwen:2012}. However, for the remainder of this work, $s$ denotes a subgroup, conjointly encompassing its intension and extension, and only when compelled to for the sake of clarity are references made to either of these individual aspects. For any particular subgroup $s$, $n$ denotes its size, i.e.\@ the number of records in that subgroup: $n=|s|$. % NOTE complement is not used in this paper %The complement of a subgroup is denoted by $s^c$, its size is denoted $n^c$. %Hence, $s^c = \ds{} \backslash s$, and $n^c = N-n$. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Subgroup Discovery Algorithm} \label{section:subgroup-discovery-algorithm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsubsection{Pattern Language} \label{section:pattern-language} A critical component of any \sd{} algorithm is the pattern language used to generate subgroup descriptions. The most straightforward descriptions are constituted by a single condition on a descriptive attribute. They are of the general form \subgroup{a_i\ operator\ value}. But even for these most basic descriptions a number of choices need to be made. First, which attributes, or attribute types, are to be considered, and how they are to be dealt with. In part this is connected to the second choice, what operators are to be included in the pattern language. Common attribute types are binary, nominal, ordinal and numeric, though binary could be considered a specific instance of the nominal type. Some operators are valid, or useful, only for some types, and not others. For example, for nominal attributes the operators \op{<} and \op{>} would not make much sense, whereas \op{=}, and possibly \op{\neq}, would.
Considering ordinal and numeric attributes, possible operators include \op{<}, \op{>}, \op{\leq} and \op{\geq}; negations of any of these are equivalent to one of the aforementioned examples. Even for the third part of a condition, the value, multiple alternatives are available. Most \sd{} algorithms would only have singular values occurring here, like `red' or `3.14'. However, some allow for internal disjunctions \cite{kloesgen:1999,atzmueller:2006} or sets of values \cite{mampaey:2012,mampaey:2015}. For nominal attributes, `\{red, green, blue\}' could be such a value set. For ordinal and numeric attributes, the value could now be a set of values, or an interval. With the introduction of sets, \op{\in} became a useful additional operator. Generally, pattern languages also offer means of constructing more complex descriptions by combining multiple conditions. Customarily, conditions are combined through conjunctions (ignoring the special case of internal disjunctions). Conjunctions are favoured over disjunctions, as their use results in a more predictable search lattice. That is, when extending a description through the addition of a new conjunct, the size of the subset of records covered by the original description serves as an upper bound for the new selection. Moreover, endorsing only conjunctions that create strict subsets will ensure that subgroup size is strictly decreasing. Although this behaviour could be valued as merely attractive, this property can also be exploited to optimise search space exploration \cite{atzmueller:2009:ismis,boley:2017,grosskreutz:2009,lemmerich:2012}. In this work, the following choices concerning the pattern language are made. Attributes of the binary type are dealt with exactly like their nominal counterpart, and ordinal attributes are ignored altogether. Conditions involving a nominal attribute use the \op{=} operator, while the operators used for numeric attributes differ per context. The operators \op{\leq} and \op{\geq} are used for `single value' conditions in the context of \binaries{}, creating half-intervals. The \op{\in} operator is applied in the \nominal{} contexts, indicating that a value lies within an \emph{interval}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsubsection{Bins} \label{section:bins} As alluded to in Section \ref{section:dimensions}, the \coarse{} strategy employs a discretisation, or binning, technique. In the context of this work, all discretisation is performed using the \eh{} algorithm presented as Algorithm \ref{algorithm:equal-height-binning} below. \eh{} takes a few parameters. First, a vector of values that form the current input domain, and its size, $\vec{a}_i^{\extension{j}}$ and $n$, respectively. Next, parameter $B$, the number of desired bins. Finally, in the \binaries{} setting, an operator $o$ is also supplied (either \op{\leq} or \op{\geq}). All values in the domain under consideration will then be put into bins in such a way that each bin contains (approximately) the same number of values. This might fail for a number of reasons. First, if $n/B$ does not produce an integer, not all bins will be assigned the same number of values. Second, in case of duplicate values, a bin boundary to either the left or right of such a value might leave the respective bins with too few or too many values. Also, when requesting more bins than there are values in the input domain, $B > n$, some bins will be empty.
Customarily, $B$ is treated as a maximum, and an algorithm could return less than $B$ bins. The bin boundaries are chosen from the unique values present in the domain considered ($\mathcal{A}_i^{\extension{j}}$). That is, when a subgroup selects a subset of the data, only values that occur in that subset can serve as bin boundaries returned in set $\mathcal{B}$. For reasons of interpretability of subgroup descriptions, the use of values not present in the numeric domain is discouraged. Moreover, it is only occurring values one can make certain claims about. For example, for a description expressing a `less or equal than some bound'-condition, one can give the exact number of values less or equal than that bound, and assign some quality score based on it. Considering all values in the data domain, for the one immediately following the aforementioned bound, a similar procedure is valid. However, assigning any statistic to any value between these two occurring data values would not be based on the available data. Note also that, for reasons of interpretability, the use of the operators \op{<} and \op{>} is discouraged. Conditions sporting these operators do not express a precise bound, potentially inducing confusion. For example, `$> 1$' could mean `$\geq 1.001$', `$\geq 88.3$', `$\geq \numprint{9999}$', and many others, and its interpretations requires inspecting the (unique) data domain. Finally, the \texttt{Sort} operation on line \ref{eh:sort} yields values in ascending order for \op{\geq}, and descending order for \op{\leq}. The algorithm performs binning differently for \op{\leq} and \op{\geq}, as these are not complementary operators, and therefore bin boundaries obtained for one might not be suitable for the other. Though listed separately here, sorting of the data can be done before performing an experiment, reducing computational demands during the search. Further note that, as $\mathcal{B}$ is a set, the returned bin boundaries are unique, even if input vector $\vec{a}_i^{\extension{j}}$ contains duplicates. \begin{algorithm} \caption{\textsc{EqualHeightBinning}($\vec{a}_i^{\extension{j}}$, $n$, $B$, $o$)} \label{algorithm:equal-height-binning} \DontPrintSemicolon \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} % variables \SetKwData{Vector}{$\vec{a}_i^{\extension{j}}$} \SetKwData{SubgroupSize}{$n$} \SetKwData{NumberOfBins}{$B$} \SetKwData{Operator}{$o$} \SetKwData{BinBoundaries}{$\mathcal{B}$} \SetKwData{EmptySet}{$\varnothing$} \SetKwData{Bin}{$b$} \SetKwData{Index}{$x$} % functions \SetKwFunction{Sort}{Sort} % \Input{\Vector (vector of values of $a_i$ covered by $\extension{j}$), size of subgroup \SubgroupSize, number of bins \NumberOfBins, operator \Operator} \Output{set of bin boundaries \BinBoundaries} % \Blankline \Vector $\leftarrow$ \Sort{\Vector, \Operator} {\label{eh:sort}}\; \BinBoundaries $\leftarrow$ \EmptySet\; \For {$\Bin=1$ \KwTo \NumberOfBins-1}{ \Index $\leftarrow$ $\SubgroupSize \Bin$/\NumberOfBins\; \BinBoundaries $\leftarrow$ \BinBoundaries $\cup$ $\Vector[\Index]$\; } \Return \BinBoundaries \end{algorithm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsubsection{Beams} \label{section:beams} A level-wise search is performed using a beam of size 100, allowing for some trade-off of search space exploration and focus on promising candidates. 
\todo{see Appendix Items to Discuss: beam is tricky} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsubsection{Quality Measure} \label{section:quality-measure} A \emph{quality measure} objectively evaluates a candidate description in a given dataset. For each description $\intension{}$ in the description language $\mathcal{\intension{}}$, a quality measure is a function that quantifies how exceptional the subgroup $s$ is. Note that the definition of a quality measure is description-oriented, as in some cases one might want to factor the complexity of the description into the final result \cite{leeuwen:2012}. When only the subgroup extension is required, it can be obtained by means of the intension. \begin{definition}{(Quality Measure)} \label{definition:quality-measure} A \emph{quality measure} is a function $\varphi_{\ds{}}: \mathcal{\intension{}} \to \mathbb{R}$ that assigns a unique numeric value to a description $\intension{}$, given a dataset $\ds{}$. \end{definition} In principle, \sd{} algorithms aim to discover subgroups that score high on a quality measure. However, it is common practice to also impose additional {\em constraints} on subgroups that are found by \sd{} algorithms. Usually these constraints include lower bounds on the quality of the description ($\varphi_{\ds{}}(\intension{}) \geq p_1$) and the size of the induced subgroup ($\left|\extension{\intension{}}\right| \geq p_2$). More constraints may be imposed as the question at hand requires. For example, domain experts may request an upper bound on the complexity of the description, which can be controlled through the aforementioned search depth parameter $d$. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Pseudo-code Subgroup Discovery Algorithm} %\label{section:pseudo-code-subgroup-discovery-algorithm} The pseudo-code presented in Algorithm \ref{algorithm:subgroup-discovery-algorithm} describes a very generic \sd{} algorithm. The main line of interest is line \ref{sd:sc}, \texttt{$\text{SelectCandidate}(\mathcal{P}, \mathcal{S})$}, as it is at this point in the \sd{} algorithm that selection strategies will show their differing effect. One should assume that the set of search constraints $\mathcal{P}$ holds any required parameters (like the selection strategy used, or the number of bins). As some strategies do not simply consider every candidate, or just the first candidate, in $\mathcal{S}$, the selection function can be more involved than just obtaining $s$ through a simple $\mathcal{S} \setminus {s}$ operation. Also, for the different selection strategies, not only will the subgroup descriptions that are generated and evaluated differ, but so will the fashion in which the results of these evaluations are used. %For each of the selection strategies given above, the effects are described in more detail below. Further, to accommodate different search space exploration strategies within a single generic algorithm, \texttt{GenerateRefinements$(s, \ds{}, \mathcal{P})$} on line \ref{sd:gr} should be assumed to adapt its behaviour accordingly. For a level-wise search, refinements, which should be considered to be subgroups themselves, are created only for the current search level, and relevant refinements are then added to the set of candidates $\mathcal{S}$ to serve as seeds for the next level. For a depth-first search, all refinements are created at once, and no candidate (beam) set is used.
Refinements are created by adding a conjunct to the description of a candidate subgroup (seed). Finally, the addition of a subgroup to the final result set $\mathcal{F}$ (line \ref{sd:ar}) and candidate set $\mathcal{S}$ (line \ref{sd:ac}) is performed by specialised functions, that check against search constraints, and take care of trimming, re-ordering, or other post-processing of these sets, if required. Noteworthy in this last respect is that, through its canonical representation of descriptions, Cortana \cite{meeng:2011:cortana}, the \sd{} tool used for the experiments, does not require a separate post-processing procedure, like \texttt{RemoveDuplicates} in \cite{leeuwen:2012}, to remove equivalent descriptions like \subgroup{C_1 \wedge C_2} and \subgroup{C_2 \wedge C_1}.%\footnote{A refinement generator could prevent such cases, but in a multi-threaded environment this can be complicated.}. \begin{algorithm} \caption{\textsc{SubgroupDiscovery}($\ds{}$, $\varphi_{\ds{}}$, $\mathcal{P}$))} \label{algorithm:subgroup-discovery-algorithm} \DontPrintSemicolon \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} % variables \SetKwData{Dataset}{$\ds{}$} \SetKwData{SearchConstraints}{$\mathcal{P}$} %\SetKwData{QualityMeasure}{$\varphi_{\ds{}}$} % defined as function below \SetKwData{ResultSet}{$\mathcal{F}$} \SetKwData{CandidateSet}{$\mathcal{S}$} \SetKwData{Subgroup}{$s$} \SetKwData{Quality}{${score}$} \SetKwData{RefinementSet}{$\mathcal{R}$} \SetKwData{Refinement}{$r$} \SetKwData{EmptySet}{$\varnothing$} % functions \SetKwFunction{QualityMeasure}{$\varphi_{\ds{}}$} \SetKwFunction{SelectCandidate}{SelectCandidate} \SetKwFunction{GenerateRefinements}{GenerateRefinements} \SetKwFunction{AddToResultSet}{AddToResultSet} \SetKwFunction{AddToCandidateSet}{AddToCandidateSet} % \Input{dataset \Dataset, quality measure \QualityMeasure, search constraints \SearchConstraints} \Output{final result set \ResultSet} % \BlankLine \ResultSet $\leftarrow$ \EmptySet, \CandidateSet $\leftarrow$ \EmptySet \;%\tcp*[r]{candidate set \CandidateSet} \emph{subgroup with empty description \intension{} = \EmptySet, i.e.\@ no conditions}\; \Subgroup $\leftarrow$ \Dataset \;%\tcp*[r]{subgroup with empty description \intension{} = \EmptySet, i.e.\@ no restrictions} \CandidateSet $\leftarrow$ \CandidateSet $\cup$ \Subgroup\; \While {\CandidateSet $\neq$ \EmptySet} { % \tcp{selection strategy determines seed for refinement generation} \Subgroup $\leftarrow$ \SelectCandidate{\SearchConstraints, \CandidateSet} {\label{sd:sc}} \;%\tcp*[r]{selection strategy determines seed for refinement generation} \CandidateSet $\leftarrow$ \CandidateSet $\setminus$ \Subgroup\; \RefinementSet $\leftarrow$ \EmptySet\; % assume generateRefinements() uses \SearchConstraints to generate only relevant, valid subgroups, this is unlike Cortana \RefinementSet $\leftarrow$ \GenerateRefinements{\Subgroup, \Dataset, \SearchConstraints} {\label{sd:gr}} \;%\tcp*[r]{for level-wise searches: generate valid refinements for \par\hspace{178pt} current depth level only, then adds $r$ to \CandidateSet;\par\hspace{178pt} for depth-first search: generate valid refinements for all \par\hspace{178pt} depth levels at once, nothing is added to |CandidateSet$} \ForEach {\Refinement $\in$ \RefinementSet} { \Quality $\leftarrow$ \QualityMeasure{\Refinement}\; % \tcp{add only accordant subgroups, post-process \ResultSet if needed} \AddToResultSet{\SearchConstraints, \ResultSet, \Refinement, \Quality} {\label{sd:ar}} \;%\tcp*[r]{add only accordant subgroups, post-process 
\ResultSet if needed} % \tcp{as above, but for \CandidateSet; only for level-wise searches} \AddToCandidateSet{\SearchConstraints, \CandidateSet, \Refinement, \Quality} {\label{sd:ac}} \;%\tcp*[r]{as above, but for \CandidateSet; only for level-wise searches} } } \Return \ResultSet \end{algorithm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Discretisation Moment} \label{section:discretisation-moment} For the dimension \dimension{discretisation moment}, the options \predis{} and \dyndis{} control at what point of the data mining exercise the discretisation is employed. In a \predis{} setting, the discretisation takes place \emph{before} the actual search commences. This means that for any relevant attribute, all bin boundaries are determined prior to the actual discovery. The pre-discretisation could be achieved by modifying the data such that actual values are changed, or store the binning information for later use in the experiment. Obviously, storing data created by pre-discretisation makes it readily available for later (repeated) experiments. Dynamic discretisation is performed \emph{during} the actual search space traversal. Consequently, it has the benefit of optionally incorporating information available only while the mining is in progress. That is, when generating new refinements based on subgroup $s_j$ with extension $\extension{j}$, the dynamic nature allows it to establish, for any attribute $a_i$, the exact numeric domain $\mathcal{A}_i^{\extension{j}}$ relevant to $s_j$. Although not guaranteed, in general the size of $\mathcal{A}_i^{\extension{j}}$ is smaller than the size of $\mathcal{A}_i$. Consequently, this enables the description generator to create fewer refinements, by using only values that are sensible in the context of $s_j$. An example of when \dyndis{} is useful, is when the description generator creates refinements for every value in $\mathcal{A}_i^{\extension{}}$, as would be the case in the \fine{} setting for dimension \dimension{granularity} (Section \ref{section:granularity}). Moreover, with each refinement, subgroups become smaller, further reducing the domain and hence the number of refinements created for each subgroup. The other setting for dimension \dimension{granularity}, \coarse{}, uses discretisation to obtain the cut points used in subgroup descriptions. From \predis{} to \dyndis{}, there is not so much a reduction in the number of refinements, but an increase in focus of the discretisation step. As the interval covered by the values $\mathcal{A}_i^{\extension{}}$ is likely to be smaller than that covered by $\mathcal{A}_i$, the range over which bins are formed is smaller also. Thus, with the number of bins $B$ remaining unchanged, these bins will span ever smaller intervals, increasing the focus further at each higher depth of the search. % NOTE ignore the extreme scenario below, it holds only for the lower and upper most bin, as these are the only ones that span 1/B-th of the original interval. %In an extreme scenario, this could mean that discretisation using $B$ bins on an interval yields a subinterval that spans $1/B$-th of the original, and that this subinterval in turn produces a sub-subinterval spanning $1/B$-th of it. %This sub-subinterval now spans $1/B^2$, that is $1/B^d$, of the original interval, where $d$ refers to the search depth at which this discretisation is performed. Obviously, the benefit of incorporating extra information comes at the price of increased computation. 
For each subgroup to be refined, an extra effort needs to be made to determine $\mathcal{A}_i^{\extension{}}$, and, if required, bin boundaries. Fortunately, these operations can be performed in $\mathcal{O}(n)$, that is, time linear to the size of the subgroup under consideration. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Interval Type} \label{section:interval-type} The \dimension{interval type} dimension is connected to the fashion in which intervals are derived from a set of cut points. This work considers two types of approaches, \binaries{} and \nominal{}. The \binaries{} variant creates a (virtual) binary feature for each cut point, corresponding to the two open-ended intervals defined by the cut point. The \nominal{} variant instead creates a single nominal feature that has $B$ possible values, each corresponding to one of the intervals defined by the $B{-}1$ cut points. As introduced in Section \ref{section:pattern-language}, the choice for either of these brings along a change in the pattern language used to create subgroup descriptions. As an advantage of the \binaries{} approach, one could list its flexibility. The intervals created for each cut point are overlapping. For the numeric domain corresponding to the data or subgroup under consideration, the portion covered by the first interval is roughly $1/B$, for the next it is $2/B$, and so forth. Thus, a split can yield a selection as large or small as is appropriate for the modelling task at hand. Furthermore, increasing $B$ results in both smaller and larger intervals, and thus subgroups, to be created. Although the intervals give the impression of being open-ended, this is, obviously, just a matter of presentation. Creating bounded intervals for \binaries{} is as trivial as replacing the open-ended interval by one including the lowest or highest value for the domain under consideration. Furthermore, more focused, highly restricted, bounded intervals can be created using two `opposing' constraints on a single attribute, such as \subgroup{a_i \geq v_1 \wedge a_i \leq v_2}, where $v_1 \leq v_2$. However, creating bounded intervals this way is computationally more expensive, as one needs to increase the search depth. The \nominal{} setting is the one favoured by the Vikamine \sd{} tool \cite{atzmueller:2012:vikamine}. It creates consecutive, bounded, intervals. As such, increasing $B$ results in smaller subgroups, because the domain is divided into more consecutive intervals. With this method, there are also advantages and drawbacks. Selecting sub-intervals directly is more straightforward than using `opposing' constraints, as needed by the \binaries{} strategy. More importantly, transforming numeric attributes into nominal ones allows the use of fast and efficient \sd{} algorithms \cite{atzmueller:2009:ismis,boley:2017,grosskreutz:2009,lemmerich:2012}. And, although this is just a choice of presentation, descriptions presenting bounded intervals might simply be more intuitive to end users. But, while the flexibility of \binaries{} allows it to adapt to the modelling task at hand, \nominal{} is fundamentally incapacitated by its inflexibility when it comes to some tasks. Although the \nominal{} setting produces an intuitive set of $B$ bins, the size of each interval is governed by the choice of $B$ in an undesirable and too restrictive manner. Assuming an equal height method of discretisation, each bin will contain roughly $N/B$ records. 
This means that depth-$1$ subgroups (containing a single condition) will all have a size of approximately $n=N/B$. Such an approach excludes large subgroups, and immediately pushes the search towards fairly specific areas of the search space. Combining subgroup descriptions through conjunctions generally exacerbates this problem, such that at higher search depths only very small subgroups are available. The limitations of this setting are relevant for both the classification and regression target types, though impact is usually more severe for the former. In essence, the issue revolves around the concept of target share. Basically, the fact that this approach invariantly produces small subgroups, precludes it from handling well those targets for which it is beneficial to select a large subset of records. Concretely, for classification targets this means that when the portion of positive target records is larger than $N/B$, this setting is inherently incapable of selecting all of them. For regression targets, in spite of eschewing a concept of target labels, the limitation still shows when the subset of `good' records is larger than $N/B$. Here, depending on the task at hand, `good' refers to those records that show a large deviation from the target mean, in either the positive or negative direction, or both. The reason the classification setting is more affected by this limitation, small subgroups, lies in the nature of these tasks. Most quality measures for classification targets, including \qm{WRAcc}, emphasise the inclusion of positive target records, or reversely, penalise not including them. With smaller subgroups, chances increase that positive target records are not included, and this becomes even more of a problem when combining subgroup descriptions through conjunctions at higher search depths. For the technically inclined, because of the small size of these subgroups, all results are restricted to a limited section of ROC space, and it sets an upper limit on the true positive rate. Quality measures for regression targets generally have less of an all-or-nothing nature, and focus on statistics like mean. There are no (positive) target labels in this setting, and subgroups that do not cover, say, the highest values of the target attribute, can still produce a high mean. Moreover, small subgroups more easily achieve low variation, for example selecting a small set of high, but not the highest, target values. Statistics like $z$-score and $t$-test take variation into account. So, by the nature of the mining task, smaller subgroups are generally less of a problem, or even beneficial, in a regression target setting, as there is no notion of (not including) positive target records. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Granularity} \label{section:granularity} The \fine{} and \coarse{} options of the \dimension{granularity} dimension control how (many) hypotheses are generated given a numeric input domain. The \fine{} alternative is most straightforward, as every unique value in a given input domain will be considered. So, for a subgroup $s_j$, with extension $\extension{j}$, a description generator will generate refinements for every value in $\mathcal{A}_i^{\extension{j}}$. Obviously, this allows for a more extensive search, but computational cost are much higher also. 
% NOTE using the term exhaustive search would be misleading, the paper uses beam-search (so even fine-all is not exhaustive) (and in depth-first search, fine-best is not either) The greedy \coarse{} alternative leads to far fewer refinements being generated. First, by discretising the domain $\mathcal{A}_i^{\extension{j}}$, $B$ bins and $B{-}1$ cut points are obtained. Then, only these are used by the description generator to form hypotheses. In general, any method of discretisation can be used for this setting. So, this could be supervised techniques for classification targets \cite{fayyad:1993}, single \cite{kontkanen:2007} or multi-dimensional MDL-based methods \cite{nguyen:2014}, or database inspired alternatives like V-optimal discretisation \cite{ioannidis:2003}. In this paper, we focus on the \eh{} algorithm (Algorithm \ref{algorithm:equal-height-binning}), as it is fast, simple, and applicable in both the classification and regression target setting presented in Section \ref{section:experimental-setup}. % NOTE Vikamine also offers EH as discretisation option (though it differs from Algorithm 1 used by Cortana) In general, greedy methods trade in precision to achieve a reduction in computation. Sometimes even limits can be proven for the worst case performance of a certain greedy method with respect to the exhaustive alternative. No such bounds are derived in this work, as they depend on the exact combination of quality measure and search parameters. However, analyses will be performed comparing the quality of results obtain using the \coarse{} and \fine{} method. It will determine whether the \coarse{} method is capable of producing results sets that are comparable in quality to that of the \fine{} strategy, either for the top ranking subgroups, or the result set as a whole. A final note concerns the combination of dimensions \dimension{granularity} and \dimension{interval type}, and the influence of search constraints controlling the required minimum and maximum size of subgroups. Consider, for example, $p_2$, the minimum required size of a subgroup, set to $10\%$ of dataset size $N$, and an attribute $a_i$, for which the size of $\mathcal{A}_i$ equals $N$. For the \fine{} strategy, which is used exclusively in combination with \binaries{}, many of the possible hypotheses would result in subgroups that would not satisfy the minimum coverage search constraint. That is, $10\%$ of the descriptions involving $a_i$ would result in too small subgroups, for both the \op{\leq} and \op{\geq} operators. For the \coarse{} strategy similar effects hold. In combination with the \nominal{} strategy, setting $B$ to anything more than $10$ will yield only subgroups that are too small. For the experiments presented below, this is taken into consideration by never setting $B$ too high. For experiments involving the \binaries{} strategy, the same upper limit for $B$ is used, even though most of the (overlapping) intervals yield sufficiently large subgroups. First, it is consistent with the choice made for the \nominal{} setting. Moreover, using a large number of bins goes against the rationale behind the discretisation, which is to achieve a search space reduction. % NOTE % The statement below was originally in the experimental section, and it is false, though not for obvious reasons. % Section Items to Discuss in the appendix is also relevant here. % The statement is true only very narrowly, that is, with respect to the refinements that are evaluated for a single candidate, at that depth level. 
% In combination with any heuristic, the search space of fine, at any depth higher than 1, is no longer guaranteed to include any of the hypothesis that are in the search space of coarse. % % exhaustive : fine-all coarse-all : true % exhaustive : fine-best coarse-best: false, only one (the best) description per attribute is used for refinement at the next depth % for fine and coarse the best description might be a different one, this utterly changes the search space at the next depth % beam search: fine-all coarse-all : false, imagine that for fine the beam of width 10 is completely filled with subgroups from a single attribute (using say the lowest 10 values) % and coarse does not create any of those (the lowest bin boundary is say value number 20), % then the search space at the next depth is completely different for the two strategies % beam search: fine-best coarse-best: false, see above % % ORIGINAL STATEMENT: % Every hypothesis generated by a \coarse{} strategy also occurs in the equivalent \fine{} strategy, which has a much larger search space. % Therefore, the main interest is to determine if \coarse{} strategies produce results comparable to that of \fine{} strategies, while being computationally far less demanding. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Selection Strategy} \label{section:selection-strategy} The final dimension to discuss is \dimension{selection strategy}, with options \all{} and \best{}. Unlike the other dimensions, this does not directly influence the process of \emph{hypothesis generation}, but that of \emph{hypothesis selection}. Here, hypothesis selection should not be equated with beam selection, as \all{} and \best{} can be applied in exhaustive depth-first, and (level-wise) breadth-first and beam searches. For description attributes of any type, multiple hypotheses can be generated, one for each operator-value combination. As an example, consider a numeric attribute $a_i$, and operator \op{\leq}, then, for each value $v_y$ in the domain $\mathcal{A}_i^{\extension{j}}$, \subgroup{a_i \leq v_y} is a viable hypothesis. On the set of hypotheses that do not violate any search constraints, \all{} and \best{} can be applied. The \all{} strategy evaluates all hypotheses in the set, and considers all of them for further processing. The \best{} strategy also evaluates all hypotheses, but only the one(s) that obtained the top score will be considered. Here, further processing refers to either of, or both, the possible addition of the hypothesis to the result set, and keeping the hypothesis available for further refinement, either immediately, or later in the search process. Note that there might be further constraints, like a maximum size, that restrict possible additions to both the final result set $\mathcal{F}$ and the candidate set $\mathcal{S}$ containing possible refinements. Therefore, additions to these two sets are not simple set additions per se, and need to be performed by specialised functions. In terms of complexity, the two strategies do not differ with respect to the number of evaluations that is performed for a single attribute. However, there is a reduction in the number of hypotheses considered for inclusion in the result set and the candidate set. If these need to remain sorted, as is customary for a result set, this can already have a considerable effect. For a result set of size $F$, and a set of hypotheses of size $H^*$, \all{} would have a complexity of $\mathcal{O}(H^* \log{} F)$. 
For \best{} it would only be $\mathcal{O}(1 \log{} F)$, or $\mathcal{O}(\log{} F)$, if at most one of the hypotheses in the set is allowed to be added to the result set. Note that this is the complexity per single attribute-operator combination, as that is what forms the set of hypotheses considered here. The effect of \best{} on the search space is more dramatic when the search depth $d$ is larger than $1$, but this discussed in Section \ref{section:complexity-analysis}. % NOTE the part below is removed, it is only valid for exhaustive search; it is replaced by Section Complexity analysis below %Even more dramatic is the effect of \best{} on the search space, when the search depth $d$ is larger than $1$. %Let $H'$ denote the size of the complete set of hypotheses that can be formed using a single conjunct, that is, all possible hypotheses, for every attribute and operator on search depth $1$. %Then ${H'}^d$ would give the number of hypotheses for a search depth $d$. %For \all{}, $H'$ consists of the sum of all possible hypotheses for each attribute-operator combination. %Since the size of the domain for each attribute can be equal to $N$, this could result in $\mathcal{O}(O \cdot A \cdot N)$ single conjunct hypotheses, where $A = m$, the number of description attributes. %So effectively, this yields a search space complexity of $\mathcal{O}(N^d)$, assuming $O$ and $A$ are so small they are dominated by $N$. %On the other hand, the \best{} strategy results in only $\mathcal{O}(O \cdot A \cdot 1)$ single conjunct hypotheses. %Ignoring $O$, as it is generally $1$ or $2$, this results in a search space complexity of $\mathcal{O}(A^d)$. %Since $N$ is generally orders of magnitude larger than $A$, \best{} has the potential to significantly reduce the size of the search space. %Obviously, like with the \fine{} and \coarse{} alternatives for the \dimension{granularity} dimension, the main advantages and drawbacks result from the trade-off between exploration precision and execution time. %However, when \all{} and \best{} are combined with \coarse{}, one should replace $N$ with $B$ in the analysis above, and the reduction obtained through \best{} is far smaller. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Complexity analysis} \label{section:complexity-analysis} The sections above already described the computational complexity for individual options of the dimensions. But strategies combine options, and the compound effect of these choices influences the computational complexity. Additionally, the type of search space exploration is relevant. Table \ref{table:complexity-analysis} presents the complexity for various combinations of options. Not all four dimensions are listed explicitly, but all are accounted for. Dimension \dimension{discretisation moment} is omitted, because when an attribute is considered during search, its options differ only in which values of its domain are used, but not how many. The options of dimension \dimension{interval type} determine which operators are available for descriptions, and before the search commences, the type of operators, and their number, is set. The effect of this choice is incorporated through $\hs{}$, $\hs{}'$, and $\bb{}$. First, $\hs{}$ is the number of single condition hypotheses for the whole dataset, and this obviously takes the choice of operators into account. 
Then, $\hs{}'$ is the number of single condition hypotheses for the whole dataset, but in a \coarse{} setting, where $B$ controls the reduction of the number of values used for numeric description attributes. Finally, $\bb{}$ is created by summing the number of operators used for each description attribute. The latter is relevant in a \best{} setting, and for each refinement $r$, it indicates the number of results that can be added to final result set $\mathcal{F}$ and candidate set $\mathcal{S}$ (see Algorithm \ref{algorithm:subgroup-discovery-algorithm}). As the complexity can further be influenced by the type of search space exploration, this is also added to the table. It distinguishes between exhaustive depth-first search, and heuristic level-wise beam search. Symbol $\dm{}$ represents the maximum search depth, and $\bw{}$ indicates the beam width. For beam searches, the table shows that complexity for the two \fine{} combinations is the same, as is true for the two \coarse{} combinations. At every search level higher than $1$, only $\bw{}$ candidates of the previous level are combined with $\hs{}$ conditions. This is true for both \all{} and \best{}, erasing the difference between them. However, besides complexity there is also the aspect of subgroup set diversity. When a single attribute produces many good results, the result and candidate sets could become saturated with many similar descriptions. The use of \best{} avoids such saturation by retaining only one result per attribute-operator combination. \begin{table}[!h] \centering \caption{Complexity analysis.} \label{table:complexity-analysis} \begin{tabular}{ll|ll} \multicolumn{2}{c|}{dimension} & \multicolumn{2}{c}{search space exploration type}\\ granularity & selection & exhaustive & heuristic\\ & strategy & (depth-first) & (level-wise beam)\\ \hline \fine{} & \all{} & $\mathcal{O} \left( \hs{}^\dm{} \right)$ & $\mathcal{O} \left( \hs{} (\dm{}\bw{}{-}\bw{}{+}1) \right)$\\ \fine{} & \best{} & $\mathcal{O} \left( \hs{} \bb{}^{\dm{}\text{-}1} \right)$ & $\mathcal{O} \left( \hs{} (\dm{}\bw{}{-}\bw{}{+}1) \right)$\\ \coarse{} & \all{} & $\mathcal{O} \left( \hs{}'^\dm{} \right)$ & $\mathcal{O} \left( \hs{}' (\dm{}\bw{}{-}\bw{}{+}1) \right)$\\ \coarse{} & \best{} & $\mathcal{O} \left( \hs{}' \bb{}^{\dm{}\text{-}1} \right)$ & $\mathcal{O} \left( \hs{}' (\dm{}\bw{}{-}\bw{}{+}1) \right)$\\ \end{tabular} \end{table} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Related Work} \label{section:related-work} The strategies presented in this work relate to an extensive range of topics. Therefore, only a selection of relevant work is discussed. %A few topics are highlighted, mostly along the lines of the four dimensions. First, many papers make a comparison between \sd{} algorithms. This is done in overview papers like \cite{atzmueller:2015,herrera:2011}, and papers introducing new algorithms \cite{atzmueller:2006,atzmueller:2009:ismis,boley:2017,grosskreutz:2009,leeuwen:2012,mampaey:2012,mampaey:2015,meeng:2014,nguyen:2014}. % TODO MM check this list if all indeed compare strategies, or just introduce a new one (konijn:2013:kdd|qimie might be added to the list) However, these papers only include a subset of the strategies presented in this work, and then only a very specific implementation of this limited set. The exclusive aim of this work is to provide a systematic and comprehensive experimental evaluation and comparison of all presented strategies, and this sets it apart from earlier work. 
Furthermore, the focus of this work is on classical \sd{}, so its generalisation Exceptional Model Mining (\emm{}) \cite{duivesteijn:2013,leman:2008,lemmerich:2012} is not considered. The working assumption is that, for a certain target type in \sd{}, different quality measures exist, but that the models they use to gauge subgroup quality are comparable. More specifically, classification tasks typically use (counts from) contingency tables, and regression tasks use (variations of) simple distribution statistics like mean, median and standard deviation. This work does not include experiments evaluating multiple quality measures for a single target type, as results would be similar, and the experimental section too expansive. For the various \emm{} tasks, the model classes are radically different, and probably incomparably so. As the dimensions tested in this work could have different effects for each distinct \emm{} task, as is true now for the classification and regression tasks, experiments would have to be performed for each included task, increasing results manyfold. One could add to the two target types classification and regression at least correlation and (multiple-)regression \cite{leman:2008,duivesteijn:2012}, bayesian-networks \cite{duivesteijn:2010}, and uni- and multi-variate probability density functions (forthcoming). Surely, this becomes to much to present in a single paper, if it also requires introducing all concepts and methodology presented in the current one. \todo{ARNO: something like this could go into introduction} With respect to \predis{} and the \nominal{} interval type, notable works include those concerning Vikamine \cite{atzmueller:2012:vikamine} and optimistic estimates \cite{atzmueller:2009:ismis,boley:2017,grosskreutz:2009,lemmerich:2012}. % TODO MM check pre/dynamic implementation of boley:2017 and grosskreutz:2009 Most focus on (the design of) fast and efficient \sd{} algorithms, which is greatly facilitated by employing a \nominal{} strategy. Again, introducing new algorithms this is not the aim of this work. Nonetheless, the \dyndis{} variations of \nominal{} strategies appear to be novel, and their performance is evaluated. However, their creation should be considered a consequence of consummating the matrix of \sd{} strategies in Table \ref{table:dimensions}, and no effort is made to make them more efficient through optimistic estimates or other means. %Surely, optimistic estimates can be incorporated in \dyndis{} strategies, but effects are probably much smaller than for \predis{}, especially with respect to reduction of computational complexity. % TODO MM RealKrimp (hyperintervals); Witteveen et al.\@ (IDA) % TODO MM Richer Descriptions; Mampaey \cite{mampaey:2012,mampaey:2015} % NOTE the paragraph below does not really add anything %Section \ref{section:granularity} explained the pragmatic choice to use only equal height discretisation in this paper. %Although other techniques exist, that might sometimes be better, equal height discretisation is fast and can be employed for both target types, avoiding convolution of the experimental sections. %Furthermore, it is known to generally give good results, and only one of the four dimensions concerns the exact discretisation method. %Therefore no additional techniques are considered. % TODO numeric association rules; Arno? 
The work of \cite{leeuwen:2012} already came up above, but, it is not included in the experiments, as Diverse Subgroup Set Discovery (DSSD) does not have high quality scores as main objective, making for an unfair comparison in the context of this paper. Also, where \all{} and \best{} are applicable in both exhaustive and level-wise beam searches, DSSD is exclusively a \emph{beam} selection strategy. % and thus its use is more limited. %Notwithstanding, the \all{} and \best{} selection discussed above, could be incorporated into DSSD, and would occur in its \texttt{GenerateRefinements} function (line 5 of Algorithm 1 in the paper). %Then, all of the candidates generated by this function, that employed either the \all{} or \best{} strategy, will form the set of potential candidates to go into the beam for the next search level. %Only after generating all candidates for the current search level, a number of them, equal to beam width $W$, is selected for further processing through a process that considers both the quality scores of the individual candidates, and the diversity of the of the beam. %So, as a selection strategy, DSSD is both more limited than \all{} and \best{}, as it works only for beam searches, and more broad, as its performs additional processing. %Though, perversely, depth-first searches could be conceptualised as using a beam of unlimited size, that does not operate level-wise, but that orders its items (candidates) like a classical depth-first tree. Notwithstanding, the \all{} and \best{} selection discussed above, could be incorporated into DSSD. Moreover, it indicates that `best' should be considered a broader concept than just referring to subgroup quality score alone. It could encompass additional characteristics of an individual subgroup, or of a subgroup set. An example of the former is that a subgroup is required to present non-trivial, novel insights \cite{konijn:2013:pakdd}. Concerning subgroup sets, `best' could include the diversity criterion of DSSD. But `best' could also mean the single best per attribute, or the top-k, and, surely, many others variations have been, or could be, thought of. \marvin{ The work of Mampaey et al.\@ \cite{mampaey:2012,mampaey:2015} describes the use of their \textsc{BestInterval} algorithm, for both classification and regression tasks. This work only considers the former, as no implementation of the algorithm is available for the latter. } \begin{comment} % NOTE MW-U is only introduced below, in the next section, ignore the paragraph below Finally, a few notes about the use of the Mann-Whitney $U$ statistic. First, this statistic is used only to compare mean ranks. To compare medians, the distributions are required to have `similar' shape, and this is not the case more often than it is. Also, different statistical packages offer different versions of the Mann-Whitney $U$ test \cite{bergman:2000}. When computing the $z$-score for a result, this work applies both a tie-correction, and a continuity correction. % TODO MM bergman:2000 Different Outcomes of the Wilcoxon-Mann-Whitney Test from Different Statistics Packages https://www.tandfonline.com/doi/abs/10.1080/00031305.2000.10474513 Further, the independence of observations assumption could be considered violated as the same subgroup can occur in both groups (rankings). This could happen for some strategy comparisons, like those comparing the \all{} and \best{} variant of otherwise identical strategies. 
On the other hand, it does not usually happen for strategies that differ in dimension \dimension{interval type}, as \binaries{} and \nominal{} produce very different descriptions. But, this is under the assumption that observations refer to subgroup descriptions. Under the interpretation that subgroup extensions should be considered observations, the assumption is probably broken more often, as multiple descriptions can refer to the same extension. Nonetheless, the statistic is still used, and it seems a reasonable instrument, as the final conclusion will show that it produces the exact same ranking of strategies as a comparison based on mean scores. % NOTE the equivalence: AUC_1 = U_1/(n1*n2) \end{comment} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Experimental Setup} \label{section:experimental-setup} In the experiments described below, we analyse the benefits and drawbacks of the strategies listed in Table \ref{table:dimensions}. Since there are interactions between the four dimensions, we do not consider them in isolation, but rather perform experiments comparing various combinations. Thus, we hope to cover the various aspects of the general question of how to deal with numeric attributes in \sd{}. Also, considering a dimension in isolation might lead to nice (theoretical) complexity, result quality, and so forth, but the impact of any parameter choice might be negligible in real-world analyses (in which dimensions are combined). The experimental sections discuss two main themes, that can be interpreted as \emph{within}, and \emph{across}, strategy analyses. First, for each individual strategy, Section \ref{section:best-number-of-bins} analyses results for various settings of $B$, and determines the best number of bins. But, a lower (or higher) bin count for one strategy with respect to another says nothing about how the quality of their results compare. So subsequent experimental sections will focus on that question by comparing across different strategies. To keep the discussion focused, results from only one variant, or parameter setting of $B$, of each strategy is used, based on the choice of $B$ established in Section \ref{section:best-number-of-bins}. Before discussing individual experiments, we first give an overview of the experimental conditions and parameters, and list the datasets that will feature in the subsequent sections. All experiments were performed using the \sd{} tool Cortana \cite{meeng:2011:cortana}. Search was performed using the quality measure \qm{WRAcc} for classification targets \cite{lavrac:1999}, tested in a `target value versus rest' setting, and \qm{z-score} for regression targets \cite{pieters:2010}. The minimum score threshold for \qm{WRAcc} was set to $-0.25$, the lowest possible score for this measure, as this will produce full rankings, that is, every subgroup generated by the pattern generator is considered. The minimum score threshold for \qm{z-score} was set to $0.0$, for the same reason. Note that actually the absolute \qm{z-score}, \qm{\absz}, is used, so both subgroups with a higher, or lower, average on the target are considered. The search depth ranges from $1$ to a maximum of $3$, as \sd{} algorithms often do not produce much better subgroups when increasing the search depth further, and also, complex subgroup descriptions are in disagreement with the easy to interpret, exploratory, descriptive nature of the paradigm. 
\todo{ARNO: There are plots available showing this, see Appendix \ref{appendix:quality-increase}.} % NOTE producing full rankings exacerbates beam and result set saturation, which could be a potential weakness of the experiments/ objection by reviewers. Respectively, a minimum and maximum subgroup size of $0.1 N$ and $0.9 N$ is enforced for all subgroups, to avoid, overly small, subgroups attaining unrealistic high scores. Beam search is performed using a beam of size $100$. The pattern language will use the \op{=} operator for nominal attributes, and for numeric attributes, \op{\leq} and \op{\geq} are used in strategies involving \binaries{}, and \op{\in} in \nominal{} contexts. The bins for the \coarse{} strategies are determined using the \eh{} method of discretisation given in Algorithm \ref{algorithm:equal-height-binning}, and for each strategy the exact number is determined in Section \ref{section:best-number-of-bins}. Tables \ref{table:table-datasets-nominal} (classification tasks) and \ref{table:table-datasets-numeric} (regression tasks) list the datasets used in the experiments. Datasets are taken from the UCI repository \cite{uci}, and the set is chosen such that it gives a good mix with respect to the various statistics. It represents a range of sizes ($N$), number of numeric description attributes ($|$numeric$|$), (positive) target share (for classification datasets), and target cardinality ($T$) (for regression datasets). The \dataset{adult} and \dataset{pima-indians} datasets are customarily used with a classification target, here they are also used in a regression setting, using the \attribute{age} attribute as (regression) target. \import{./res/}{table-datasets-nominal.tex} \import{./res/}{table-datasets-numeric.tex} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Best Number of Bins} \label{section:best-number-of-bins} This section is dedicated to the `number of bins' parameter setting $B$. This parameter controls the number of cut points that is eventually used by the \sd{} algorithm. Setting this parameter such that results are optimal is a non-trivial task, as it is not immediately clear what the effect of this parameter is within the context of the various strategies. Furthermore, the possibility that effects differ amongst target type settings, and datasets, might further hinder a straightforward selection of the parameter value. Therefore, this section presents the results of experiments performed to obtain insights into the intrinsic complexities stemming from these compound effects. For the strategies under investigation, a number of experiments is performed. All experiments are otherwise identical, except for the parameter setting controlling the number of bins. For a single strategy, for each dataset, result sets of experiments using different parameter settings for $B$ are collected. Then, from the result set obtained for each experiment, the average score for the top-k subgroups is determined. Finally, by ranking these average scores, a ranking is determined for the parameter setting $B$. That is, the experiment that yields the highest score for the top-k is assigned rank number $1$, the second highest gets number $2$, and so forth. It is thus determined what value of $B$, ranging from $2$ to $10$, results in the highest score. 
\import{./res/all_experiments/bins_tables/}{strategies-max10-bins-nominal-bins-table-top-1.tex} \import{./res/all_experiments/bins_tables/}{strategies-max10-bins-numeric-bins-table-top-1.tex} This section examines all strategies that use the \coarse{} alternative for hypothesis generation, and the two strategies that combine \fine{} with \predis{} and \binaries{}. As described in Section \ref{section:dimensions-table}, the latter can be used in a \coarse{} setup. For depths $1$, $2$, and $3$, and a top-k of $1$, results are shown for the classification and regression target setting in Table \ref{table:strategies-max10-bins-nominal-bins-table-top-1} and Table \ref{table:strategies-max10-bins-numeric-bins-table-top-1}, respectively. Experiments involving \binaries{} strategies often led to multiple settings of $B$ yielding the highest score. In part, this results from the nature of the algorithm. Demonstratively, the complete set of cut points obtained when creating $B$ half-intervals will occur in the set of cut points obtained when creating $2B$ half-intervals, or more generally, any positive multiple of $B$. In such cases, only the lowest value of $B$ is reported. Furthermore, within each table, the value for $B$ at depth $1$ is equal for the \all{} and \best{} alternative of an otherwise similar strategy, this is true by design. %Depth $1$ results for \best{} strategies are presented nonetheless, as a comparison will also be made between tables (target settings). % NOTE one might expect dbc* and pbf*, both using B bins, to be identical also for d=1, but by the nature of Algorithm 1 this is not true per se, pre-FINE uses all B values, dyn-COARSE might not. The results show there is no universal rule that guarantees a good number of bins. Not only does the best number differ per strategy, for a single strategy it can even differ per target type. Nonetheless, a number of general observations that can serve as guideline are listed and discussed below. Thereafter, Table \ref{table:best-number-of-bins} lists, for each strategy and target type, the number of bins used in the experiments comparing distinct strategies. This number is determined by taking the value of $\mu(B)$/overall in the relevant result table and rounding it to the closest integer. The best number of bins for: \begin{enumerate} % 1 \item \binaries{} is higher than for \nominal{}, irrespective of target type,\\ \nominal{} is really low for classification targets, % 2 \item \nominal{} never varies over depths for a single dataset in classification tasks (one exception),\\ \binaries{} often increases over depths, especially from depth $1$ to $2$, in these situations, % 3 \item \nominal{} is stable over depths for a single dataset in regression tasks (for depth $1$ to $2$, some decrease),\\ \binaries{} with \dyndis{} always changes from depth $1$ to $2$, in these situations, % 4 \item \binaries{} is on par for \dyndis{} and \predis{} for classification targets,\\ \binaries{} is higher for \dyndis{} than \predis{} for regression targets, % X % MM do not think the following item is relevant: %\item \binaries{}-\all{} is higher than for \binaries{}-\best{} for classification targets,\\ % \binaries{}-\all{} is not higher than for \binaries{}-\best{} for regression targets, %5 \item almost every strategy varies greatly over datasets, irrespective of target type,\\ \nominal{} hardly varies for classification targets, and is an exception at that. \end{enumerate} Not every item from the list above is discussed in detail. 
But a first general conclusion concerns the clear difference between \binaries{} and \nominal{} strategies. Consistently, the former lists higher numbers. Section \ref{section:interval-type} described for both settings, and both target types, the effects of higher values of $B$ on the size of the subgroups, and how small subgroups impact result quality. For \nominal{}, when $B$ increases, the size of the subgroups decreases, something that is especially problematic in the classification target setting. For \binaries{}, it would be tempting to think that a higher $B$ would also result in smaller subgroups. But, remember that for \binaries{} $B{-}1$ overlapping virtual attributes are created, covering both small and large subsets of the data. And especially in conjunctions at higher search depths, including larger subgroups might actually be more useful than having only smaller ones, as combinations of the latter often become too small to meet the minimum coverage constraint. These observations cover items 1, 2, and the second part of 5. % NOTE check item number when changing list The above also explains why for almost all strategies a higher number of bins is listed for the regression target setting. The only deviations from this trend occur for the strategies combining \predis{} with \binaries{}. But again, remember that small subgroups are detrimental in a classification target setting, and, that for \binaries{}, a higher setting of $B$ actually also enables the formation of \emph{larger} subgroups. Unlike \dyndis{}, \predis{} is unable to adapt the bin boundaries selected for the description attributes to the current target distribution (of the subgroup). As a result, the joint probabilities created by conjunctions are build from conditions based on a limited set of pre-discretised values (bin boundaries), that might not be relevant in the context at hand. Descriptions using these values would then only select too small (or large) subgroups, or be otherwise unable to capture relevant aspects of the target. Given this inflexibility, a higher $B$ does not lead to better scores (item 4), as a lower $B$ already creates (small) subgroups with enough focus. %This is in part related to item 4, where \dyndis{} makes optimal use of added liberty to selected the most optimal subset of dataset currently under consideration. % NOTE item 4 / this text is referred to by Section \ref{section:ranking-subgroup-discovery-strategies} % NOTE check item number when changing list Obviously, the first part of item 5 is the most troubling. % NOTE check item number when changing list Although some general trends are discernible, the key problem of choosing a good setting of $B$ for all situations remains illusory. Here, also the dataset characteristics listed in Table \ref{table:table-datasets-nominal} and Table \ref{table:table-datasets-numeric} do not prove helpful. Limited relieve is offered by the fact that once a good setting of $B$ is found for a dataset, it is often useful for all depths. This holds true for both target types, and all strategies, except those combining \binaries{} with \dyndis{}. 
\begin{table} \centering \caption{Table showing, for each strategy and target type, the number bins used in subsequent experiments.} \label{table:best-number-of-bins} \begin{tabular}{l|ccccccc} target type & \dbca{} & \dbcb{} & \dnca{} & \dncb{} & \pbfa{} & \pbfb{} & \pnca{}\\ \hline classification & 7 & 6 & 2 & 2 & 7 & 6 & 2\\ regression & 7 & 7 & 3 & 4 & 6 & 5 & 4\\ \end{tabular} \end{table} % NOTE the information in this table could be put into Tables 4 and 5, listing the best number of bins for each target type %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Comparing Subgroup Discovery Strategies} \label{section:comparing-subgroup-discovery-strategies} Each of the experimental sections below focuses on a different dimension, but all follow a similar setup. First, it lists the strategies that are compared. Then, separately for each target type, results are discussed. And a conclusion closes of each section. Results are presented in two different forms, tables with (aggregated) mean scores and tables with Mann-Whitney $U$-scores. Each table lists the results for all strategies in Table \ref{table:dimensions}, but \dnfb{0} is only included for classification targets. Individual sections then contrast different pairs of strategies, depending on the dimension being discussed. For strategies that involve a $B$ parameter, a superscript over the name indicates what setting of $B$ was used to produce the result. All tables are available in Appendix \ref{appendix:tables}. Two mean tables, one per target setting, are produced by obtaining the results for each dataset and every experimental setting. Then, for each depth and top-k, the scores for each dataset are normalised by dividing them by the score of strategy \dbfa{0}. Consequently, results for all datasets are now comparable, and can be aggregated. Mean aggregates are produced for depths $1$, $2$, and $3$, and a \emph{k} of $1$ and $10$. % NOTE top-100 is not used , especially for coarse{} strategies top-100 results are somewhat misleading, as for some datasets not enough results are produced % NOTE \dbfa{0} is used as index because it produces the biggest result set, such that top-k is always available, and it is available for both classification and regression targets (where for the latter it is guaranteed no other strategy can score better) % NOTE % independence of observations assumption could be considered violated as the same subgroup can occur in both groups (distributions) % MWU is used only to compare mean ranks, to compare medians the distributions are required to have `similar' shape % https://statistics.laerd.com/spss-tutorials/mann-whitney-u-test-using-spss-statistics.php Besides the mean tables, two tables with Mann-Whitney $U$-scores \cite{mann-whitney:1947} are also presented, again, one per target setting. Although a mean can give a useful indication of how well (top) subgroups score, \sd{} is generally concerned with a ranked list of resulting subgroups. So, to give a better insight into the distribution of scores in the involved rankings, the Mann-Whitney $U$ statistic is used. This statistic compares two distributions to determine if one is stochastically greater than the other, in which case the probability of an observation from the first distribution exceeding that of the second is different from the reverse probability (of an observation from the second exceeding that of the first). 
In the extreme case of $U = 0$, all values from one distribution come before all values of the other distribution. Such an insight can not obtained by comparing the means, or medians, of two rankings. %Under more strict assumptions, a significant $U$ can be interpreted as showing a difference in medians. Further, $U$-scores can be used for significance testing. For small samples, the null hypothesis `the distributions are equal' is accepted or reject using a critical value table available in standard statistics books. For large samples, $U$ can be converted into a $z$-score, and a $p$-value can be determined. Although the same statistics can be derived using a standard normal distribution, the majority of score distributions in the result lists do not adhere to the normality criterion\footnote{This was tested, but is not presented here as to keep the discussion focused.}. For the Mann-Whitney $U$-test non-normality is not an issue. When comparing two strategies, all scores of their result sets $\mathcal{F}_1$ and $\mathcal{F}_2$, of size $F_1$ and $F_2$ respectively, are put together, sorted, and assigned combined ranks. Then, $U_1$ is computed for set $\mathcal{F}_1$ as follows: \begin{equation} \label{equation:mann-whitney-U} U_1 = \Sigma_1 - \frac{F_1 \left(F_1 + 1 \right)}{2}, \end{equation} where $\Sigma_1$ is the sum of ranks of result set $\mathcal{F}_1$. The same is done for $\mathcal{F}_2$, and the smaller of $U_1$ and $U_2$ is used as $U$ for significance testing. A critical value table for Mann-Whitney $U$-scores typically lists values for different levels of significance, and one and two-sided testing. The result tables below compare the top-$10$ rankings of different strategies, so $F_1 = F_2 = 10$. For a one-sided test, and a significance level of $5\%$, the critical value is $27$. So, when $U \leq 27$ the null hypothesis `the distributions are equal' is rejected. However, the tables do not list $U$, but $U_1$, as this shows which of the two strategies is better. Using the fact that $U2 = F_1 \cdot F_2 - U_1$, scores below $50$ indicate that the first (left) strategy is better, scores above $50$ mean the second (right) strategy is better, and $50$ means that two rankings ranking are identical. % NOTE mwu tables are not aggregated, an average U over datasets is not useful, a result is significant or not, the average of a set can not be use used for significance statements The final columns of the table, under `$\leq$ 27 / $\geq$ 73 / valid', indicate per depth how often the $U$-score is significant for the left and right strategy, respectively, and how many (valid) results were obtained in this setting. Note that the total number of $U$-scores is not always equal to the number of datasets, as in some experimental settings not enough (valid) subgroups are found to create a top-10 ranking. The columns under `wins' use a number of symbols to summarise which of the strategies is better over all tested datasets, for a given depth. Triangles point in the direction of the strategy that has a better ranking more often than the other, \draw\ means there is no `winner'. Symbols \lasi, \lall, and \lmix, indicate that the left strategy is: better for all datasets, and all results are significant; better for all datasets, but not all results are significant; better overall, but not better for all datasets. Right-pointing triangles have equivalent meanings for the right strategy. 
Unlike the number of (valid) results, the number of symbols is equal for all strategies, which allows for a straightforward comparison. In the classification setting, each strategy is compared to nine others, and there are eight such comparisons in the regression setting. %For reasons of presentation, results are not shown in a 10x10 (9x9) matrix, instead the upper half, sans diagonal, of this would-be matrix is presented as a list. % LEAVE THIS IN - CURRENTLY MW-U z-score IS NOT USED AS ONLY top-10 IS COMPARED, BUT IT MAY COME BACK % %\marvin{If this text is ever reinstated, update the formulas to take into account the continuity correction and tie correction used to produce the results. %A continuity correction of $-0.5$ is applied to $|U - \mu_U|$, as a continuous distribution is used to approximate a discrete one. %A tie correction is applied to $\sigma_U$ to account for rank ties: % \begin{equation} % \sigma_{corr} = \sqrt{ \frac{F_1 F_2}{12} \left( \left(F + 1\right) - {\sum_{i=1}^{k}{\frac{t_{i}^{3} - t_{i}}{F \left(F-1\right)}}} \right)}, % \end{equation} %where $F = F_1 + F_2$, and $k$ is the number of distinct ranks. %} %\begin{equation} %\label{equation:mann-whitney-z} % z_1 = \frac{U_1 - \mu_U}{\sigma_U} \text{, where\ } \mu_U = \frac{F_1 F_2}{2} \text{, and\ } \sigma_U = \sqrt{\frac{F_1 F_2 \left(F_1 + F_2 + 1 \right)}{12}}. %\end{equation} %We will be using this $z$-score to compare the top-k of both result sets (in the experiments k is either $10$ or $100$, ignoring a ranking of just $1$ result), so $F_1 = F_2$. %It can be proven that $z_1 = -z_2$. %A positive number for $z_1$ indicates that $\mathcal{F}_1$ is better than $\mathcal{F}_2$, and the inverse for a negative number for $z_1$. % NOTE % no relative versions of MW-U tables listing z-scores, instead of U-scores, are presented (for both non-aggregate and aggregate results) % two alternatives are possible, they are not the same % 1. use pivot to normalise all mean scores per strategy/depth/top-k and perform MW-U on the normalised mean values % 2. from MW-U table, pick one strategy:strategy column as pivot, and normalise all scores in the MW-U table using that %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % discretisation moment % (3-dbca,9-pbfa) / (4-dbcb,10-pbfb) / (7-dnca,15-pnca) \subsubsection{Discretisation Moment: Dynamic Discretisation versus Pre-Discretisation} \label{section:discretisation-moment-dynamic-discretisation-versus-pre-discretisation} This section compares options \predis{} and \dyndis{} of dimension \dimension{discretisation moment}. The following contexts are relevant to determine the best choice among these alternatives: \begin{itemize} \item \binaries{} with \coarse{} and \all{}. The relevant strategies are \dbca{0} and \pbfa{0}, where the latter is transformed into a \coarse{} strategy by using a low number of values, as described in Section \ref{section:dimensions-table}. \item \binaries{} with \coarse{} and \best{}. This compares \dbcb{0} with \pbfb{0}, here, using few values for \pbfb{0}. \item \nominal{} with \coarse{} and \all{}. It pits \dnca{0} against \pnca{0}. 
% (Vikamine)
\end{itemize}
Below, we summarise the detailed results of these comparisons, which can be found by looking up the appropriate lines in Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative}, \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative}, \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}, in Appendix \ref{appendix:tables}.
\paragraph{Classification Target}
Within, and across, the three contexts listed above, Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} list very mixed results. In the first two contexts, involving \binaries{}, alternative \predis{} is better at depth $1$, and \dyndis{} at depths $2$ and $3$. At depth $1$, the top-1 mean scores for \predis{} are better by margins of 2.4\%, and 1.5\%, respectively. Conversely, \dyndis{} results are better at depths $2$ and $3$, by margins between 1.5\% and 4.0\%. Considering the Mann-Whitney $U$-scores for the top-10 rankings, \predis{} is better at depth $1$, but results are close, and thus never significant. At depths $2$ and $3$, \dyndis{} outperforms its non-dynamic counterpart 8 out of 12 times (4 significant), and 9 out of 12 times (2 significant), for the first two contexts, respectively.
In the \nominal{} context, top-1 mean results for \dyndis{} (\dnca{0}) are equal to, better than, and worse than, those of \predis{} (\pnca{0}) over depths $1$ to $3$, though margins are never larger than 2.4\%. The top-10 rankings are identical at depth $1$, and of the 12 results for depths $2$ and $3$, \dyndis{} is better 8 times (2 significant). Remarkably, in all four cases where \predis{} is better, the results are significant.
% NOTE statements like 'mean is close, but U=0' are never made, such situations are not related to saturation per se, so interpretation requires inspection of the result set
\paragraph{Regression Target}
For the two \binaries{} contexts, \dyndis{} is the recommended choice, based on Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}. All mean results are better by margins of 12.0\% to 16.5\%. Also, in the first context, 17 of the 18 wins (out of 21 results) are significant; in the second context, this holds for 14 of the 16 wins (out of 19 results).
In the \nominal{} context, results are mixed again. At depth $1$, the top-1 mean result for \predis{} is better, with a 2.7\% margin. At depths $2$ and $3$, \dyndis{} is better, by a margin of 4.5\% for both depths. Regarding the 20 results in the Mann-Whitney $U$ table, \dnca{0} and \pnca{0} draw once, and win 10, and 9 times, respectively. Results at depth $1$ are never significant, but at depths $2$ and $3$, 6 out of 8 wins for \dyndis{}, and 4 out of 6 wins for \predis{}, are.
\paragraph{Conclusion}
With respect to classification targets, \dyndis{} is the better option overall, as it performed better at the higher depths. Only when considering depth $1$ exclusively did \predis{} perform better, though with small margins for the means, much smaller than those in the regression setting, and with $U$-scores that do not differ much (contrary to the more pronounced differences at higher depths).
For regression targets, \dyndis{} is also the preferred choice, as it performed clearly better in the \binaries{} contexts, and better, though less convincingly, in the \nominal{} context.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% interval type
% (3-dbca,7-dnca) / (4-dbcb,8-dncb) / (9-pbfa,15-pnca)
\subsubsection{Interval Type: Binaries versus Nominal}
\label{section:interval-type-binaries-versus-nominal}
In this section, dimension \dimension{interval type} is considered, with the two possible values \binaries{} and \nominal{}. The choice between these two settings is relevant in the following contexts:
\begin{itemize}
\item \dyndis{} with \coarse{} and \all{}. In this context, \dbca{0} is pitted against \dnca{0}.
\item \dyndis{} with \coarse{} and \best{}. This comes down to comparing \dbcb{0} with \dncb{0}.
\item \predis{} with \coarse{} and \all{}. Pitting \pbfa{0} against \pnca{0} (again, with a \coarse{} \pbfa{}).
\end{itemize}
\paragraph{Classification Target}
As Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} demonstrate, the \binaries{} setting consistently outperforms the \nominal{} setting in all relevant contexts. In terms of mean scores for the top-1 result at depth $1$, \binaries{} produces results that are 6.9\%, 6.0\%, and 9.0\% better for the three contexts, respectively. For depths $2$ and $3$, this margin is at least 21.3\%, 19.9\%, and 18.2\%, respectively. In terms of Mann-Whitney $U$-scores, \binaries{} is significantly better in 16 out of 17 results, 13 out of 14 results, and 16 out of 17 results, respectively.
%The Mann-Whitney $U$-score of \binaries{} is significantly better in 16 out of 17 results, 13 out of 14 results and 16 out of 17 results, respectively.
\paragraph{Regression Target}
For two out of three contexts, \binaries{} is the clearly preferred choice in all experiments (Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}). However, for the third context (\predis{}), mixed results have been obtained, and no clear preference can be discerned.
For the first two contexts, involving \dyndis{}, margins between 1.7\% and 13.3\% can be observed for the top-1 mean scores. Comparing the obtained top-10 rankings, \binaries{} is significantly better 18 out of 20, and 10 out of 12 times, respectively.
For the third context, \binaries{} is better by at most 7.4\% for the top-10 means, whereas \nominal{} is preferred for the top-1, by at most 17.1\%. In terms of top-10 rankings, \binaries{} wins 11 out of 20 times (9 significant), and \nominal{} 9 out of 20 (4 significant).
\paragraph{Conclusion}
In the context of \dyndis{}, \binaries{} clearly outperforms \nominal{} for both target types. In the context of \predis{}, results are mixed. Again, \binaries{} clearly outperforms \nominal{} for classification targets. But for regression targets, the best subgroup produced by \nominal{} (\pnca{0}) is clearly better than that of \binaries{} (\pbfa{0}), with the exception of the mean top-10 results. So only in fairly specific circumstances is a \nominal{} setting useful.
% NOTE that for classification targets, pnca is already worse at d1, but it becomes really bad at d2,d3; while at d1 there is no beam effect yet
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% granularity
% (1-dbfa,3-dbca) / (2-dbfb, 4-dbcb)
\subsubsection{Granularity: Fine versus Coarse}
\label{section:Granularity-fine-versus-coarse}
This section contrasts options \fine{} and \coarse{} of dimension \dimension{granularity}. The relevant contexts are:
\begin{itemize}
\item \dyndis{} with \binaries{} and \all{}. This selects strategies \dbfa{0} and \dbca{}.
\item \dyndis{} with \binaries{} and \best{}. Here, the relevant strategies are \dbfb{0} and \dbcb{}.
\end{itemize}
\paragraph{Classification Target}
Without exception, \fine{} is the better option. Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative} shows that margins for the top-1 mean scores are between 2.2\% and 4.8\%. Concerning the top-10 rankings in Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape}, 17 out of 18, and 10 out of 17, results are significant.
% for \dbfa{0} and \dbfb{0}, respectively.
\paragraph{Regression Target}
Again, without exception, \fine{} is the better option, as can be gleaned from Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}. Here, margins for the top-1 mean scores are between 1.1\% and 2.6\%, and 20 out of 21, and 11 out of 19, results for the top-10 rankings are significant.
\paragraph{Conclusion}
Invariably, \fine{} is better. Considering that \coarse{} is a heuristic, this might not seem remarkable.
% though in a beam setting, \fine{} is not guaranteed to perform better at higher search depths.
Notwithstanding, the quality of the top subgroups that \coarse{} produces is within a few percent of that of \fine{}. Unsurprisingly, differences are biggest at depth $1$, but at depth $3$ they are no more than 2.6\%, and 1.9\%, for the classification and regression target settings, respectively. In spite of the much smaller search space, and accepting that heuristics trade in quality for reduced exploration, \coarse{} fares pretty well.
% NOTE saturation is always an issue with (all/best) and (fine/coarse); top-10 saturation is not checked, might interesting to check/ remark
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% selection strategy
% (1-dbfa,2-dbfb) / (3-dbca,4-dbcb) / (7-dnca,8-dncb) / (9-pbfa,10-pbfb)
\subsubsection{Selection Strategies: All versus Best}
\label{section:selection-strategies-all-versus-best}
Lastly, options \all{} and \best{} of dimension \dimension{selection strategy} are compared. The relevant contexts are:
\begin{itemize}
\item \dyndis{} with \binaries{} and \fine{}. Comparing \dbfa{0} with \dbfb{0}.
\item \dyndis{} with \binaries{} and \coarse{}. Selecting \dbca{0} and \dbcb{0}.
\item \dyndis{} with \nominal{} and \coarse{}. Pitting \dnca{0} against \dncb{0}.
\item \predis{} with \binaries{} and \fine{}. Using both \pbfa{0} and \pbfb{0} with few values (\coarse{}).
\end{itemize}
\paragraph{Classification Target}
Clearly, \all{} is the better option. In every context, \all{} outperforms \best{} with respect to the mean scores in Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative}.
Margins for the top-1 are smaller in the \dyndis{} contexts (1.0\%, 1.9\%, 0.3\%) than in the \predis{} context (3.9\%). Also of note, the top-10 margins decline from between 23.3\% and 44.6\% at depth $1$, to at most $9.1\%$, and $5.5\%$, at depths $2$ and $3$, respectively, with some having virtually disappeared (0.8\% and 0.2\%).
Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} shows that in the \binaries{} contexts, \all{} is better for 49 out of 51 results, of which 39 are significant (the two losses occur for dataset \dataset{ionosphere}). In the only \nominal{} context, results for \all{} and \best{} are basically identical, and thus never significant.
\paragraph{Regression Target}
Almost universally, \all{} results are better for this target type. With respect to Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative}, the \binaries{} and \nominal{} contexts should be considered separately. In the \binaries{} contexts, \all{} is better, with margins of at most 1.3\% for the top-1. For the top-10, margins are between 15.5\% and 25.2\% at depth $1$, and no more than 3.6\%, and 3.1\%, at depths $2$ and $3$, respectively.
In the \nominal{} context, \all{} is better at depths $2$ and $3$, but not at depth $1$. At first, it might seem puzzling that \all{} and \best{} do not perform identically at depth $1$. However, the two strategies do not use the same number of bins, and in this case, this results in a better score for \best{}. Remarkably, margins for the top-10 \emph{increase} with depth, bucking the trend observed for both target types, and all contexts discussed in this section.
In contrast to the mean results, the results for the top-10 rankings in Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape} are uniform. Collectively, \all{} wins 66 out of 69 times, with 51 significant results.
\paragraph{Conclusion}
Alternative \all{} performs better than \best{}. This is no surprise; however, the more interesting observations relate to the performance of \best{}. Out of the 24 results for the two target types combined, the score for the top subgroup is within 1.3\% of the \all{} score 17 times. Of the larger deviations, some occur at depth $1$, where \all{} and \best{} pairs would have scored identically had they used the same number of bins; the most deviant results occur in the \nominal{} context for the regression target (\dnca{} versus \dncb{}). With respect to the top-10 mean scores, the sharp decline in margins is also noteworthy. The above suggests that, as a search heuristic, the \best{} selection strategy is a very capable counterpart of \all{} when considering result quality, often coming within 1\% of the \all{} result.
%\marvin{complicated complexity analyses goes here, ignore for now}
% NOTE saturation is always an issue with (all/best) and (fine/coarse); top-10 saturation is not checked, might interesting to check/ remark
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% special strategy 17-dnfb
% (17-dnfb versus all others)
\subsubsection{Special Strategy: Mampaey et al.\@ (17-dnfb) versus All Other Strategies}
\label{section:special-strategy}
So far, strategy \dnfb{0} was omitted from the analyses, but it will be the exclusive focus of the current section. Unlike the previous sections, this one will not revolve around contexts. Still, a separation along dimensions will be instrumental when analysing the results.
Most important is the differentiation between \binaries{} and \nominal{} strategies. The latter will be treated as a single group; the former is sometimes divided into smaller groups, should this provide additional insights.
Strategy \dnfb{0} was included in the experiments because it produces `optimal' results at depth $1$. Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative} confirms this, as it lists better top-1 results for this strategy than for all others.
%Incidentally, all top-10 scores are also better, though this is not guaranteed to hold universally.
In relation to \binaries{}, \dnfb{0} is better by margins of 7.9\% to 13.3\%; for \nominal{}, the margins are a bit over 20\%.
Predictably, \dnfb{0} is better at depth $1$, but the more interesting behaviour occurs at depths $2$ and $3$. First, consider the four strategies that combine \binaries{} with \dyndis{}. For the two that combine with \fine{}, \dbfa{0} and \dbfb{0}, all scores are now better than those of \dnfb{0}, although margins for the top-1 are no more than 1.3\%. For those that involve \coarse{}, \dbca{0} and \dbcb{0}, top-1 scores are now within 3\%. The latter holds true also for one of the \binaries{} strategies that uses \predis{} (\pbfa{0}). For the other, \pbfb{0}, margins are below 7\%.
% (6.7\% and 6.1\%).
Next, consider the \nominal{} strategies \dnca{0}, \dncb{0}, and \pnca{0}. Interestingly, while margins decrease for all \binaries{} strategies, they increase for all \nominal{} strategies. Margins for the top-1 grow in favour of \dnfb{0}, from some 20\% at depth $1$, to more than 25\% at depths $2$ and $3$.
With respect to the Mann-Whitney $U$ results in Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape}, the distinction along dimensions is informative again. Collectively, there are 117 results for \dnfb{0}; these include 83 wins (71\%), of which 71 are significant, for an 86\% significant-to-win ratio. Against strategies combining \binaries{} with \dyndis{}, there are 52 results, 25 wins (48\%), and 16 significant results (64\%), indicating that these strategies compare more favourably to \dnfb{0} than others. Against \binaries{} with \predis{}, there are 26 results, 19 wins (73\%), and again 16 significant results (84\%), which is about average. These numbers are even somewhat skewed in favour of \dnfb{0}, as \dbcb{0} and \pbfb{0} have 0 wins. Most strikingly though, there are 39 \nominal{} results, all of which \dnfb{0} wins significantly (that is, 100\% out of 100\%).
As for the mean scores, the strategies that combine \dyndis{} with \binaries{} and \fine{} outperform \dnfb{0} at depths $2$ and $3$. Where \dbfa{0} is better for all 12 results (8 significant), \dbfb{0} is better 8 out of 12 times, but only one result is significant. Finally, the \coarse{} strategy \dbca{0} wins 7 to 5; all other strategies, \binaries{} and \nominal{}, are worse for every, or most, results.
% NOTE treatment of these strategies is done such that they can be included in the conclusion
\paragraph{Conclusion}
The fact that \dnfb{0} performs best at depth $1$ is unsurprising. However, how the results of various (broad groups of) strategies evolve over increasing depths is noteworthy. Strategies involving \binaries{} fare much better than those using a \nominal{} approach. Nonetheless, only the two computationally most demanding strategies outperform \dnfb{0}, and just one heuristic comes close.
% (\dbca{0}).
As such, \dnfb{0} should be the method of choice when seeking high quality subgroups in a classification setting.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Ranking Subgroup Discovery Strategies}
\label{section:ranking-subgroup-discovery-strategies}
After the focused discussions of the previous sections, a more holistic approach is taken in this section.
First, Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative} shows that scores for \nominal{} strategies are markedly lower than those of \binaries{} for the classification targets. This was already observed in Section \ref{section:special-strategy}. Remarkably, for the regression targets, Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative} paints a different picture. The relative disadvantage of the \nominal{} strategies is much smaller, and now the relatively low scores of strategies combining \predis{} and \binaries{} stand out.
A first general conclusion is that strategies combining \dyndis{} and \binaries{} show stable performance over target types.
On the other hand, performance of \nominal{} strategies depends very much on target type, as was anticipated in Section \ref{section:interval-type}. For classification targets, they perform far worse than the \binaries{} strategies. Even compared to the heuristics that involve \coarse{}, many scores are lower by some 20\%. However, in the case of the regression targets, the relative setback of \nominal{} scores is much smaller. Granted, they still trail those of the dynamic \binaries{}, but differences are now more in the range of 5\% to 10\%.
In contrast, the \binaries{} strategies do not uniformly perform well for this target type. Apparently, the combination with \predis{} is a problematic one in this scenario, as scores decline sharply. Section \ref{section:best-number-of-bins} already discussed this combination, and determined that using a higher number of bins did not improve quality. Here it becomes clear that this combination performs badly, irrespective of the number of bins.
% 4
%\item \binaries{} is on par for \dyndis{} and \predis{} for classification targets,\\
% \binaries{} is higher for \dyndis{} than \predis{} for regression targets,
Finally, it is time to move towards a conclusion. For this, two tables are presented, based on the information in Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}. The left table of Table \ref{table:mwu-wins} lists all strategies, and results, for the two target types. For each depth, it lists how often a strategy was the better one of the contrasted pair, as indicated by the triangles under `wins', and columns $\Sigma_{C}$ and $\Sigma_{R}$ then sum these counts over the depths for the classification and regression setting, respectively. For the classification setting, column $\Sigma_{C}^{-17}$ was added. It lists the same information as column $\Sigma_{C}$, but its counts ignore all strategy pairs that include strategy \dnfb{0}, which is only available in this setting. The use of $\Sigma_{C}^{-17}$, instead of $\Sigma_{C}$, makes for comparable settings. The table to the right then creates the final strategy ranking, by summing the results from $\Sigma_{C}^{-17}$ and $\Sigma_{R}$, and sorting by this sum in descending order.
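As an illustration of how the two tables connect, consider the first row of the left-hand table of Table \ref{table:mwu-wins}: strategy \dbfa{0} is the better one of a compared pair 8, 9, and 9 times at depths $1$, $2$, and $3$, giving $\Sigma_{C} = 26$ out of a possible 27; leaving out the three comparisons against \dnfb{0} gives $\Sigma_{C}^{-17} = 24$ out of a possible 24, which, added to $\Sigma_{R} = 24$, yields the sum of 48 that places this strategy at rank 1 in the right-hand table.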
% NOTE every strategy has the same number of `wins' results, this has been described in section Comparison of Subgroup Discovery Strategy % NOTE two tabulars inside tabular, simpler for layout than using two minipages % NOTE outermost table, so it floats, and use a single caption and label \begin{table}[!hb] \centering \caption{These tables indicate how the various strategies compare to each other. The table on the left shows how often a strategy is better than the others. Results are based on the symbols in the `wins' columns, and count the number of left-pointing triangles (\lmix,\lall,\lasi) when a strategy is on the left, and the number of right-pointing triangles (\rmix,\rall,\rasi) when a strategy is on the right, of strategy pairs in Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}. Note that every strategy has the same number of results under `wins', which is the number of strategies for the involved target type times the number of distinct depths. The table on the right presents the final ranking of the tested \sd{} strategies by combining the results of the two target types listed in the table on the left. } \label{table:mwu-wins} \begin{tabular}{cc} %\centering %\begin{table} %\centering %\caption{Count how often strategy is better than others. %Counts the number of '\lmix,\lall,\lasi' when strategy is on the left, and '\rmix,\rall,\rasi' when strategy is on the right, of strategy pairs in Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}. %Note that every strategy has the same number of '$<,=,>$' results (number of strategies for target type $\times$ number of depths). %} %\label{table:mwu-wins-combined} \begin{tabular}{l|rrrrr|rrrr} strategy & \multicolumn{5}{c|}{classification} & \multicolumn{4}{c}{regression}\\ & \multicolumn{3}{c}{depth} & $\Sigma_{C}$ & $\Sigma_{C}^{-17}$ & \multicolumn{3}{c}{depth} & $\Sigma_{R}$\\ & 1 & 2 & 3 & (27) & (24) & 1 & 2 & 3 & (24)\\ \hline ~\:\dbfa{0} & 8 & 9 & 9 & 26 & 24 & 8 & 8 & 8 & 24\\ ~\:\dbfb{0} & 5 & 6 & 8 & 19 & 17 & 6 & 7 & 7 & 20\\ ~\:\dbca{0} & 6 & 8 & 6 & 20 & 19 & 7 & 6 & 6 & 19\\ ~\:\dbcb{0} & 3 & 4 & 4 & 11 & 11 & 5 & 5 & 5 & 15\\ ~\:\dnca{0} & 0 & 2 & 2 & 4 & 4 & 1 & 3 & 3 & 7\\ ~\:\dncb{0} & 0 & 1 & 1 & 2 & 2 & 0 & 0 & 0 & 0\\ ~\:\pbfa{0} & 7 & 6 & 5 & 18 & 17 & 2 & 4 & 4 & 10\\ \pbfb{0} & 4 & 3 & 3 & 10 & 10 & 0 & 1 & 2 & 3\\ \pnca{0} & 0 & 0 & 0 & 0 & 0 & 3 & 2 & 1 & 6\\ \dnfb{0} & 9 & 5 & 6 & 20 & & & & & \\ \end{tabular} %\end{table} & %\begin{table} %\centering %\caption{Ranks} %\label{table:mwu-wins-ranks} \begin{tabular}{l|rrrr} \multicolumn{5}{c}{} \\ % for alignment strategy & $\Sigma_{C}^{-17}$ & $\Sigma_{R}$ & sum & rank\\ \hline ~\:\dbfa{0} & 24 & 24 & 48 & 1\\ ~\:\dbca{0} & 19 & 19 & 38 & 2\\ ~\:\dbfb{0} & 17 & 20 & 37 & 3\\ ~\:\pbfa{0} & 17 & 10 & 27 & 4\\ ~\:\dbcb{0} & 11 & 15 & 26 & 5\\ \pbfb{0} & 10 & 3 & 13 & 6\\ ~\:\dnca{0} & 4 & 7 & 11 & 7\\ \pnca{0} & 0 & 6 & 6 & 8\\ ~\:\dncb{0} & 2 & 0 & 2 & 9\\ \end{tabular} %\end{table} \end{tabular} \end{table} Before discussing the table on the right, a note about the table on the left. Strategy \dnfb{0} is available only in the classification setting, and was discussed separately in Section \ref{section:special-strategy}. 
Referring to column $\Sigma_{C}$, it can be seen that this strategy scored 20 wins, out of the 27 possible. This ranks it somewhat behind the overall winner (\dbfa{0}), and equal to \dbca{0} (which wins 7 to 6 in the direct confrontations with \dnfb{0}). As seen before, \dnfb{0} is not the best strategy overall, even though at depth $1$ it is better than all others. Also, the performance of heuristic \dbca{0} relative to that of \dnfb{0} should not be left unmentioned. Still, now, as then, results indicate that \dnfb{0} is a very capable strategy that should be preferred in all but a few cases.
The most remarkable finding in the table on the right would probably be the fact that the heuristic \dbca{0} performs so well. Surely, \dbfb{0} is a heuristic also, but its computational complexity is nonetheless much higher. Another non-trivial result is the scale at which the \nominal{} strategies perform worse than \binaries{} strategies.
%For both target types, \nominal{} strategies rank at the bottom.
For both target types, \nominal{} strategies rank at the bottom of the list. Generally, the table reaffirms some of the general trends that were observed before.
To conclude the discussion of the experimental evaluation presented in this work, a concise list of findings is presented below.
\begin{enumerate}
\item \binaries{} performs better than \nominal{},
\item \dyndis{} triumphs over \predis{},
\item \fine{} outperforms \coarse{},
\item \all{} beats \best{}.
\end{enumerate}
% NOTE this is an excruciatingly unremarkable list
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions}
\label{section:conclusions}
In this paper, a host of \sd{} strategies was systematically examined. These strategies differ along a number of dimensions, and experiments were performed to gain insights into the effects of different options within these dimensions. Choices were not evaluated in isolation, but always in the context of other parameter settings, as this is required to gauge real-world performance.
Most of the findings are not unexpected, for reasons pointed out in the sections introducing each dimension. However, the fact that a single parameter choice would show markedly different behaviour in the classification and regression target type settings was unforeseen. Furthermore, it is especially the scale at which some strategies perform worse than others that is remarkable.
As a whole, this systematic evaluation both affirms some intuitions that, to the best of our knowledge, have never been rigorously tested, and garners new insights into existing strategies, as well as into how to improve future algorithm design. As such, it is of value for those seeking guidance towards an informed choice when analysing real-world data. Additionally, its findings can be of benefit to researchers and algorithm designers alike. The authors have already started incorporating some of the observations into work to be presented in the future.
%\clearpage
%references
\input{bib}
\clearpage
\pagebreak
\appendix
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{New tables, all strategies in a single table}
\label{appendix:tables}
The new setup puts results for all strategies in a single table, leading to two mean tables, and two MW-U tables. The MW-U tables each need to be placed on a page of their own, as they are too big.
However, the mean tables can be put on a single page together, and still leave about half a page empty. This space can be used for a more general text. Also, captions for the tables could be kept to a minimum, and relevant text would be placed here.
\import{./res/all_experiments/mean_tables/}{strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative-final.tex}
\import{./res/all_experiments/mean_tables/}{strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative-final.tex}
\begin{landscape}
\import{./res/all_experiments/mwu_tables/strategies-1-2-3-4-7-8-9-10-15-17-nominal/}{strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape.tex}
\import{./res/all_experiments/mwu_tables/strategies-1-2-3-4-7-8-9-10-15-numeric/}{strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape.tex}
\end{landscape}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Quality Increase}
\label{appendix:quality-increase}
\begin{figure}[!hb]
\centering
%\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.76\linewidth]{{./res/all_experiments/mean_plots/strategies-1-2-3-4-7-8-9-10-15-17-nominal-maxdepth4/strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-long-aggregate.dat}.eps}
% \label{figure:}
%\end{minipage}%
%HACKED IN MINIPAGE WITH MINIMAL TEXT TO SEPARATE TWO CAPTIONS
%\begin{minipage}{.02\textwidth}~\end{minipage}%
%\begin{minipage}{.5\textwidth}
% \centering
\includegraphics[width=0.76\linewidth]{{./res/all_experiments/mean_plots/strategies-1-2-3-4-7-8-9-10-15-17-nominal-maxdepth4/strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-long.dat.top-1}.eps}
% \label{figure:}
%\end{minipage}
\caption{
Plots showing that for most datasets and strategies there is hardly any quality increase above depth $3$, justifying the decision to present only results up to depth $3$. The upper plot shows an aggregate over all datasets. The second plot shows the quality of the top-1 result for each individual dataset. The latter plot shows that only for dataset $\ds{4}$ (\dataset{ionosphere}) is there still some increase, influencing the aggregate result.
\todo{Plots could be replaced by a single sentence like: `from depth $1$ to $2$, quality improved by x\%; from $2$ to $3$ this was only y\%, $3$ to $4$ was 0.0\%'. It would remove two figures, and one page, from the paper.
}
}
\label{figure:quality-increase-classification}
\end{figure}
\begin{figure}[!hb]
\centering
%\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.76\linewidth]{{./res/all_experiments/mean_plots/strategies-1-2-3-4-7-8-9-10-15-numeric-maxdepth4/strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-long-aggregate.dat}.eps}
% \label{figure:}
%\end{minipage}%
%HACKED IN MINIPAGE WITH MINIMAL TEXT TO SEPARATE TWO CAPTIONS
%\begin{minipage}{.02\textwidth}~\end{minipage}%
%\begin{minipage}{.5\textwidth}
% \centering
\includegraphics[width=0.76\linewidth]{{./res/all_experiments/mean_plots/strategies-1-2-3-4-7-8-9-10-15-numeric-maxdepth4/strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-long.dat.top-1}.eps}
% \label{figure:}
%\end{minipage}
\caption{
For regression targets there appears to be more improvement, but the aggregate result is severely skewed by dataset $\ds{2}$ (\dataset{adult}), for which the improvement is much larger than for the other six datasets. For the other datasets, there is basically no increase in result quality for the top subgroup.
}
\label{figure:quality-increase-regression}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% EXTRA MATERIALS - PART I. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\pagebreak
\hspace{0pt}
\vfill
\centerline{THE PAPER ENDS HERE}
\centerline{BELOW ARE ADDITIONAL MATERIALS AND TEMPORARY TABLES}
\textbf{Items to Discuss}
Topics that should be addressed.\\
Possible objections from reviewers that need to be countered preemptively.\\
Considerations that need to be described in the paper.
\textbf{Mean Tables}
Extended versions of Tables \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative}. The extended tables include the ratios of mean scores used in Section \ref{section:comparing-subgroup-discovery-strategies}, and could replace the currently used tables. The current tables do not include these ratios; they only include the \dbfa{0} index-based ratios. Including the extended tables would spare the reader from having to compute the ratios.
\textbf{MW-U win table alternatives}
The first two tables give information per target type. The left part of these tables includes the number of $U$-score `wins' per depth, and overall. The right half presents the same information, but orders the strategies from best to worst. This presentation of the tables does not really add anything. The third table is currently used in the paper. It combines the information of the two target type settings in a single subtable. The ranking is put in another, and uses comparable score metrics.
\textbf{Strategy rankings based on mean}
The paper presents a ranking of strategies based on top-10 MW-U scores. No such ranking is presented based on top-1 and top-10 mean scores. Tables \ref{table:mean-top-1-ranks} and \ref{table:mean-top-10-ranks} rank strategies based on top-1 and top-10, respectively. Table \ref{table:mean-mwu-ranks} presents all rankings in a single table.
\vfill
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% EXTRA MATERIALS - PART II. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\clearpage
\pagebreak
\section*{Items to Discuss}
\paragraph{Mampaey et al.\@ describe using regression targets with Richer Descriptions}
Why do we not use it, and why is it not available in Cortana?
\paragraph{Beam search in combination with 15-pnca}
The interpretation of \pnca{0} used in this paper is conceptually similar to Vikamine. However, Vikamine uses optimistic estimates, as do many other `\nominal{}'-style algorithms. These are typically exhaustive, whereas the paper uses a beam search. Objections could be made against evaluating \pnca{0} in such a setting, as it was not designed to be used this way. Essentially, the algorithm is designed to use \nominal{} precisely because that is what allows for exhaustive search. Thus, not using it in such a manner gives it an unfair disadvantage. Note, however, that results for depth $1$ are unaffected regardless.
\paragraph{Beam search}
The use of a beam search is a potential weakness in the experiments, and no good justification (experiments or otherwise) is given for the size of the beam.
Beam search has an effect on the search space, controlling which subgroups are generated, and therefore on result set quality and redundancy. The analyses in the paper focused on quality, so why not use exhaustive search?
Especially for a beam search, the difference in the end result for \fine{} and \coarse{} could be mediated by the fact that the abundance of poorly scoring subgroups of \fine{}-\all{} will be ignored anyway. (It is not; there is still a lot of saturation for the \dbfa{0} strategy.)
The use of a beam could limit the (\fine{})-\all{} strategies generally, and \dbfa{0} especially. Assume $P$ is the set of all possible single conjunct descriptions (subgroups) that can be generated for a dataset. At search depth $1$, all subgroups in $P$ will be tested, but only a limited number will end up in beam $B$. On the next level, only subgroups in $B$ will be refined. But, by the design of the algorithm, the set of subgroups created at depth $2$ is not the cross-product $B \times P$. Subgroups in $B$ combine only with those subgroups in $P$ with which they intersect (share at least one record, or member). Conversely, subgroups in $P$ that do not have a common member with any of the subgroups in $B$ will no longer be used in the search process, that is, not at the current search level, and not at any higher level. So, conjunctions that include any of these left-out subgroups will never be created. This is unfortunate, as conjunctions of these left-out subgroups with any of the subgroups in $P$ (that are not in $B$) could actually yield the very best subgroup possible. Note that this is true for all combinations of left-out subgroups with other subgroups that are not in $B$. For the latter, it does not matter whether they are also left out themselves, or whether they do intersect with a subgroup in $B$.
This effect could cause \best{} to perform better than \all{}, and \coarse{} better than \fine{}, at search levels higher than one. For \all{}, it could happen that the subgroups in the beam never form high quality conjunctions. For \best{}, the subgroups in the beam could be of lower quality when compared to those in the \all{} setting, but nonetheless produce better quality conjunctions. The reasoning for \fine{} and \coarse{} is similar.
\paragraph{Best number of bins per strategy}
Per strategy, the best number of bins is determined. Then, for each experimental setting, results are produced using this number. However, for some settings (datasets, depth, even top-k) this number might not produce the best result. So why not just use the best result for each dataset/depth, irrespective of the number of bins? Moreover, all results are already available anyway.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% EXTRA MATERIALS - PART III.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \pagebreak %\section{Temporary Results Section for mean and MW-U bullets} \begin{landscape} \import{./res/all_experiments/mean_tables/}{strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative-extended.tex} \import{./res/all_experiments/mean_tables/}{strategies-1-2-3-4-7-8-9-10-15-numeric-mean-table-tex-aggregate-relative-extended.tex} \import{./res/all_experiments/mean_tables/}{strategies-1-2-3-4-7-8-9-10-15-17-nominal-mean-table-tex-aggregate-relative-extended-dncb.tex} \end{landscape} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% EXTRA MATERIALS - PART IV. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \pagebreak %\section{MW-U wins tables, alternatives} \begin{table} \centering \caption{Classification target. (Note for 3-dbca versus 17-dnfb, wins are 7 versus 6, respectively, so 3-dbca ranks higher.)} \label{table:mwu-wins-nominal} \begin{tabular}{l|rrrr|rlr} strategy & \multicolumn{3}{c}{depth} & total & rank & strategy & total\\ & 1 & 2 & 3 & (27) & & &\\ \hline ~\:\dbfa{0} & 8 & 9 & 9 & 26 & 1 & ~\:\dbfa{0} & 26\\ ~\:\dbfb{0} & 5 & 6 & 8 & 19 & 2 & ~\:\dbca{0} & 20\\ % 3 versus 17 = 7:6 wins, so 3 ranks higher ~\:\dbca{0} & 6 & 8 & 6 & 20 & 3 & \dnfb{0} & 20\\ ~\:\dbcb{0} & 3 & 4 & 4 & 11 & 4 & ~\:\dbfb{0} & 19\\ ~\:\dnca{0} & 0 & 2 & 2 & 4 & 5 & ~\:\pbfa{0} & 18\\ ~\:\dncb{0} & 0 & 1 & 1 & 2 & 6 & ~\:\dbcb{0} & 11\\ ~\:\pbfa{0} & 7 & 6 & 5 & 18 & 7 & \pbfb{0} & 10\\ \pbfb{0} & 4 & 3 & 3 & 10 & 8 & ~\:\dnca{0} & 4\\ \pnca{0} & 0 & 0 & 0 & 0 & 9 & ~\:\dncb{0} & 2\\ \dnfb{0} & 9 & 5 & 6 & 20 & 10 & \pnca{0} & 0\\ \end{tabular} \end{table} \begin{table} \centering \caption{Regression target.} \label{table:mwu-wins-numeric} \begin{tabular}{l|rrrr|rlr} strategy & \multicolumn{3}{c}{depth} & total & rank & strategy & total\\ & 1 & 2 & 3 & (24) & & &\\ \hline ~\:\dbfa{0} & 8 & 8 & 8 & 24 & 1 & ~\:\dbfa{0} & 24\\ ~\:\dbfb{0} & 6 & 7 & 7 & 20 & 2 & ~\:\dbfb{0} & 20\\ ~\:\dbca{0} & 7 & 6 & 6 & 19 & 3 & ~\:\dbca{0} & 19\\ ~\:\dbcb{0} & 5 & 5 & 5 & 15 & 4 & ~\:\dbcb{0} & 15\\ ~\:\dnca{0} & 1 & 3 & 3 & 7 & 5 & ~\:\pbfa{0} & 10\\ ~\:\dncb{0} & 0 & 0 & 0 & 0 & 6 & ~\:\dnca{0} & 7\\ ~\:\pbfa{0} & 2 & 4 & 4 & 10 & 7 & \pnca{0} & 6\\ \pbfb{0} & 0 & 1 & 2 & 3 & 8 & \pbfb{0} & 3\\ \pnca{0} & 3 & 2 & 1 & 6 & 9 & ~\:\dncb{0} & 0\\ \end{tabular} \end{table} % NOTE two tabulars inside tabular, simpler for layout than using two minipages % NOTE outermost table, so it floats, and use a single caption and label \begin{table} \centering \caption{\marvin{ I like this better, single table for both target types, side-by-side with rank table. Separate tables per target type do not really add anything. A single remark would be enough to state that 17-dnfb ranks second for classification targets. Column $\Sigma_{C}^{-17}$ is added, showing results when completely ignoring 17-dnfb results. This makes the two settings comparable. Conveniently, this also results in a complete ranking, whereas this would not be true when combining scores, or ranks, from the two separate tables. } \todo{ Actually, the left table does not really add much. For each strategy, results over various depths are pretty uniform, so results per depth are not discussed. 
Without depth information, the two tables are basically the same, with the left only including the extra column $\Sigma_{C}$, and extra row for 17-dnfb } } \label{table:mwu-wins-EXAMPLE} \begin{tabular}{cc} %\centering %\begin{table} %\centering %\caption{Count how often strategy is better than others. %Counts the number of '\lmix,\lall,\lasi' when strategy is on the left, and '\rmix,\rall,\rasi' when strategy is on the right, of strategy pairs in Table \ref{table:strategies-1-2-3-4-7-8-9-10-15-17-nominal-MW-U-U-top-10-final-landscape} and \ref{table:strategies-1-2-3-4-7-8-9-10-15-numeric-MW-U-U-top-10-final-landscape}. %Note that every strategy has the same number of '$<,=,>$' results (number of strategies for target type $\times$ number of depths). %} %\label{table:mwu-wins-combined} \begin{tabular}{l|rrrrr|rrrr} strategy & \multicolumn{5}{c|}{classification} & \multicolumn{4}{c}{regression}\\ & \multicolumn{3}{c}{depth} & $\Sigma_{C}$ & $\Sigma_{C}^{-17}$ & \multicolumn{3}{c}{depth} & $\Sigma_{R}$\\ & 1 & 2 & 3 & (27) & (24) & 1 & 2 & 3 & (24)\\ \hline ~\:\dbfa{0} & 8 & 9 & 9 & 26 & 24 & 8 & 8 & 8 & 24\\ ~\:\dbfb{0} & 5 & 6 & 8 & 19 & 17 & 6 & 7 & 7 & 20\\ ~\:\dbca{0} & 6 & 8 & 6 & 20 & 19 & 7 & 6 & 6 & 19\\ ~\:\dbcb{0} & 3 & 4 & 4 & 11 & 11 & 5 & 5 & 5 & 15\\ ~\:\dnca{0} & 0 & 2 & 2 & 4 & 4 & 1 & 3 & 3 & 7\\ ~\:\dncb{0} & 0 & 1 & 1 & 2 & 2 & 0 & 0 & 0 & 0\\ ~\:\pbfa{0} & 7 & 6 & 5 & 18 & 17 & 2 & 4 & 4 & 10\\ \pbfb{0} & 4 & 3 & 3 & 10 & 10 & 0 & 1 & 2 & 3\\ \pnca{0} & 0 & 0 & 0 & 0 & 0 & 3 & 2 & 1 & 6\\ \dnfb{0} & 9 & 5 & 6 & 20 & & & & & \\ \end{tabular} %\end{table} & %\begin{table} %\centering %\caption{Ranks} %\label{table:mwu-wins-ranks} \begin{tabular}{l|rrrr} \multicolumn{5}{c}{} \\ % for alignment strategy & $\Sigma_{C}^{-17}$ & $\Sigma_{R}$ & sum & rank\\ \hline ~\:\dbfa{0} & 24 & 24 & 48 & 1\\ ~\:\dbca{0} & 19 & 19 & 38 & 2\\ ~\:\dbfb{0} & 17 & 20 & 37 & 3\\ ~\:\pbfa{0} & 17 & 10 & 27 & 4\\ ~\:\dbcb{0} & 11 & 15 & 26 & 5\\ \pbfb{0} & 10 & 3 & 13 & 6\\ ~\:\dnca{0} & 4 & 7 & 11 & 7\\ \pnca{0} & 0 & 6 & 6 & 8\\ ~\:\dncb{0} & 2 & 0 & 2 & 9\\ \end{tabular} %\end{table} \end{tabular} \end{table} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% EXTRA MATERIALS - PART III.2 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \pagebreak \begin{table} \centering \caption{Based on top-1 mean scores in the mean tables, strategies are ranked for both target types, per depth. The sum of the ranks per depth is in the $\Sigma$ column, and $r$ shows the rank for each strategy based on the results in $\Sigma$. Then, $\Sigma_{C+R}^{17}$ sums the two $\Sigma$ columns, and `rank' is the rank for each strategy based on that. (Note summing the $r$ for each target type gives the same ordering of the result, with one tie for 7/15, instead of 15 before 7.) Finally, $\mu_1$ and $U_{10}$ show the ranking of all strategies based on the top-1 mean and the top-10 $U$-scores. \todo{NOTE ranks per depth just order the strategies based on mean score. An alternative would be an a-better-than-b setup, like that for to MW-U. It would count how often a strategy is better that the others. This might give a slightly different result, as it deals differently with ties. But overall the result is probably the same. Also, the information needed for that is not presented in the paper. 
} } \label{table:mean-top-1-ranks} \begin{tabular}{l|rrr|rr||rrr|rr||rr||ll} strategy & \multicolumn{5}{c||}{classification} & \multicolumn{5}{c||}{regression} & \multicolumn{2}{c||}{combined} & \multicolumn{2}{c}{mean and MW-U}\\ & 1 & 2 & 3 & $\Sigma_{C}^{-17}$ & $r_{C}^{-17}$ & 1 & 2 & 3 & $\Sigma_{R}$ & $r_{R}$ & $\Sigma_{C+R}^{-17}$ & rank & \multicolumn{1}{c}{$\mu_1$} & \multicolumn{1}{c}{$U_{10}$}\\ \hline ~\:\dbfa{0} & 1.5 & 1 & 1 & 3.5 & 1 & 1 & 1 & 1 & 3 & 1 & 6.5 & 1 & ~\:\dbfa{0} & ~\:\dbfa{0}\\ ~\:\dbfb{0} & 1.5 & 2 & 2 & 5.5 & 2 & 2 & 2 & 2 & 6 & 2 & 11.5 & 2 & ~\:\dbfb{0} & ~\:\dbca{0}\\ ~\:\dbca{0} & 5 & 3 & 3 & 11 & 3 & 3.5 & 3 & 3 & 9.5 & 3 & 21.5 & 3 & ~\:\dbca{0} & ~\:\dbfb{0}\\ ~\:\dbcb{0} & 6 & 5 & 4 & 15 & 5 & 3.5 & 4 & 4 & 11.5 & 4 & 26.5 & 4 & ~\:\dbcb{0} & ~\:\pbfa{0}\\ ~\:\dnca{0} & 8 & 8 & 8 & 24 & 8 & 7 & 5 & 5 & 17 & 5 & 41 & 7 & ~\:\pbfa{0} & ~\:\dbcb{0}\\ ~\:\dncb{0} & 8 & 9 & 9 & 26 & 9 & 5.5 & 7 & 9 & 21.5 & 7 & 47.5 & 9 & \pnca{0} & \pbfb{0}\\ ~\:\pbfa{0} & 3 & 4 & 5 & 12 & 4 & 8 & 8.5 & 7 & 23.5 & 8 & 35.5 & 5 & ~\:\dnca{0} & ~\:\dnca{0}\\ \pbfb{0} & 4 & 6 & 6 & 16 & 6 & 9 & 8.5 & 8 & 25.5 & 9 & 41.5 & 8 & \pbfb{0} & \pnca{0}\\ \pnca{0} & 8 & 7 & 7 & 22 & 7 & 5.5 & 6 & 6 & 17.5 & 6 & 39.5 & 6 & ~\:\dncb{0} & ~\:\dncb{0}\\ \end{tabular} \end{table} \begin{table} \centering \caption{Based on top-10 mean scores. Note that the two rankings are exactly equal. Further note that top-10 $U$ can be used for significance testing, whereas this is not possible for using top-10 mean scores. } \label{table:mean-top-10-ranks} \begin{tabular}{l|rrr|rr||rrr|rr||rr||ll} strategy & \multicolumn{5}{c||}{classification} & \multicolumn{5}{c||}{regression} & \multicolumn{2}{c||}{combined} & \multicolumn{2}{c}{mean and MW-U}\\ & 1 & 2 & 3 & $\Sigma_{C}^{-17}$ & $r_{C}^{-17}$ & 1 & 2 & 3 & $\Sigma_{R}$ & $r_{R}$ & \multicolumn{1}{c}{$\Sigma_{C+R}^{-17}$} & rank & \multicolumn{1}{c}{$\mu_{10}$} & \multicolumn{1}{c}{$U_{10}$}\\ \hline ~\:\dbfa{0} & 1 & 1 & 1 & 3 & 1 & 1 & 1 & 1 & 3 & 1 & 6 & 1 & ~\:\dbfa{0} & ~\:\dbfa{0}\\ ~\:\dbfb{0} & 4 & 3 & 2 & 9 & 3 & 3 & 3 & 2 & 8 & 3 & 17 & 3 & ~\:\dbca{0} & ~\:\dbca{0}\\ ~\:\dbca{0} & 3 & 2 & 3 & 8 & 2 & 2 & 2 & 3 & 7 & 2 & 15 & 2 & ~\:\dbfb{0} & ~\:\dbfb{0}\\ ~\:\dbcb{0} & 6 & 5 & 5 & 16 & 5 & 5 & 4 & 4 & 13 & 4 & 29 & 5 & ~\:\pbfa{0} & ~\:\pbfa{0}\\ ~\:\dnca{0} & 7.5 & 7 & 8 & 22.5 & 7 & 8 & 5 & 6 & 19 & 6 & 41.5 & 7 & ~\:\dbcb{0} & ~\:\dbcb{0}\\ ~\:\dncb{0} & 9 & 8 & 9 & 26 & 9 & 4 & 9 & 9 & 22 & 7 & 48 & 9 & \pbfb{0} & \pbfb{0}\\ ~\:\pbfa{0} & 2 & 4 & 4 & 10 & 4 & 6 & 6 & 5 & 17 & 5 & 27 & 4 & ~\:\dnca{0} & ~\:\dnca{0}\\ \pbfb{0} & 5 & 6 & 6 & 17 & 6 & 9 & 7 & 7 & 23 & 8.5 & 40 & 6 & \pnca{0} & \pnca{0}\\ \pnca{0} & 7.5 & 9 & 7 & 23.5 & 8 & 7 & 8 & 8 & 23 & 8.5 & 46.5 & 8 & ~\:\dncb{0} & ~\:\dncb{0}\\ \end{tabular} \end{table} \begin{table} \centering \caption{Table showing the ranking of strategies based on scores for top-1 mean, top-10 mean, and top-10 $U$, respectively.} \label{table:mean-mwu-ranks} \begin{tabular}{ccc} $\mu_{1}$ & $\mu_{10}$ & $U_{10}$\\ \hline ~\:\dbfa{0} & ~\:\dbfa{0} & ~\:\dbfa{0}\\ ~\:\dbfb{0} & ~\:\dbca{0} & ~\:\dbca{0}\\ ~\:\dbca{0} & ~\:\dbfb{0} & ~\:\dbfb{0}\\ ~\:\dbcb{0} & ~\:\pbfa{0} & ~\:\pbfa{0}\\ ~\:\pbfa{0} & ~\:\dbcb{0} & ~\:\dbcb{0}\\ \pnca{0} & \pbfb{0} & \pbfb{0}\\ ~\:\dnca{0} & ~\:\dnca{0} & ~\:\dnca{0}\\ \pbfb{0} & \pnca{0} & \pnca{0}\\ ~\:\dncb{0} & ~\:\dncb{0} & ~\:\dncb{0}\\ \end{tabular} \end{table} \begin{comment} 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% EXTRA MATERIALS - PART IV. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % The highlighted text below give a number of consideration that should be discussed in the paper. % \begin{landscape} \paragraph{Bins} \marvin{ Note, $B$ is used as a maximum. So, when it is not possible to create $B$ for an attribute, less are returned. This would not show in the results, as number $B$ is reported, even if no attribute has that many values. Also, $B$ is a global setting, used for every numeric attribute. So, when a result benefits from using a high number $B$ for a single attribute, that $B$ will be reported. Even when no other attribute has that many values. } \marvin{ For both the top-10 and top-100, the binary settings often list 10, the maximum number of bins. This could be a potential weakness in the paper, as for top-1, the numbers listed are often lower. It could give a (sense of) false impression. Using these lower numbers later on in the paper could be considered misleading, as they occur only for top-1. Probably the cause of this is result set redundancy and for higher depths greater than 1, beam redundancy also. That is, the top-10 or top-100 would include many variations of the same attribute. If a certain attribute produces high quality subgroups, having many minor variations of its description results in many high scoring subgroups. Inspection of the result sets can confirm or invalidate this suspicion (this is not done yet). But probably, the paper will only present the top-1 results. } \pagebreak % bin tables that-use 2-10 bins - only top-1 fits on portrait page, others need landscape \import{./res/all_experiments/bins_tables/}{strategies-max10-bins-nominal-bins-table-top-1.tex} \import{./res/all_experiments/bins_tables/}{strategies-max10-bins-nominal-bins-table-top-10.tex} \import{./res/all_experiments/bins_tables/}{strategies-max10-bins-nominal-bins-table-top-100.tex} \import{./res/all_experiments/bins_tables/}{strategies-max10-bins-numeric-bins-table-top-1.tex} \import{./res/all_experiments/bins_tables/}{strategies-max10-bins-numeric-bins-table-top-100.tex} \end{landscape} \end{comment} \end{document}
{ "alphanum_fraction": 0.7230167811, "avg_line_length": 74.1201413428, "ext": "tex", "hexsha": "b5870e20efaa72289ee43b38e460cf67650ec917", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d6fb8d3584dbf42d68d5d0c85659753809731146", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "SubDisc/subdisc", "max_forks_repo_path": "manual/complexity-numeric-strategies/paper.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "d6fb8d3584dbf42d68d5d0c85659753809731146", "max_issues_repo_issues_event_max_datetime": "2021-11-18T13:56:52.000Z", "max_issues_repo_issues_event_min_datetime": "2021-11-17T10:56:53.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "SubDisc/subdisc", "max_issues_repo_path": "manual/complexity-numeric-strategies/paper.tex", "max_line_length": 503, "max_stars_count": null, "max_stars_repo_head_hexsha": "d6fb8d3584dbf42d68d5d0c85659753809731146", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "SubDisc/subdisc", "max_stars_repo_path": "manual/complexity-numeric-strategies/paper.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 38536, "size": 146832 }
%----------------------------------------------------------------------------- % % Template for sigplanconf LaTeX Class % % Name: sigplanconf-template.tex % % Purpose: A template for sigplanconf.cls, which is a LaTeX 2e class % file for SIGPLAN conference proceedings. % % Guide: Refer to "Author's Guide to the ACM SIGPLAN Class," % sigplanconf-guide.pdf % % Author: Paul C. Anagnostopoulos % Windfall Software % 978 371-2316 % [email protected] % % Created: 15 February 2005 % %----------------------------------------------------------------------------- \documentclass[]{sigplanconf} % The following \documentclass options may be useful: % preprint Remove this option only once the paper is in final form. % 10pt To set in 10-point type instead of 9-point. % 11pt To set in 11-point type instead of 9-point. % authoryear To obtain author/year citation style instead of numeric. \usepackage{amsmath} %\usepackage{silence} \usepackage{pgf} \usepackage{tikz} \usetikzlibrary{arrows,automata} \usepackage{url} \usepackage[hidelinks]{hyperref} \usepackage{fancyvrb} \usepackage[numbers]{natbib} \usepackage{microtype} %\WarningsOff \newcommand{\useverbtb}[1]{\begin{tiny}\BUseVerbatim{#1}\end{tiny}} \newcommand{\useverb}[1]{\begin{small}\BUseVerbatim{#1}\end{small}} \newcommand{\useverbb}[1]{\begin{small}\BUseVerbatim{#1}\end{small}} \begin{document} \fvset{fontsize=\small} \newcommand{\idris}{\textsc{Idris}} \newcommand{\idata}{\textsf{iData}} \newcommand{\itasks}{\textsf{iTasks}} \newcommand{\Idris}{\textsc{Idris}} \special{papersize=8.5in,11in} \setlength{\pdfpageheight}{\paperheight} \setlength{\pdfpagewidth}{\paperwidth} \exclusivelicense \conferenceinfo{IFL '13}{August 28--31, 2013, Nijmegen, The Netherlands} \copyrightyear{2013} \copyrightdata{978-1-nnnn-nnnn-n/yy/mm} \doi{nnnnnnn.nnnnnnn} % Uncomment one of the following two, if you are not going for the % traditional copyright transfer agreement. %\exclusivelicense % ACM gets exclusive license to publish, % you retain copyright %\permissiontopublish % ACM gets nonexclusive license to publish % (paid open-access papers, % short abstracts) %\authorversion \titlebanner{} % These are ignored unless \preprintfooter{} % 'preprint' option specified. \title{Dependent Types for Safe and Secure Web Programming} \authorinfo{Simon Fowler \and Edwin Brady} {School of Computer Science, University of St Andrews, St Andrews, Scotland} {Email: \{sf37, ecb10\}@st-andrews.ac.uk} \maketitle \begin{abstract} Dependently-typed languages allow precise types to be used during development, facilitating static reasoning about program behaviour. However, with the use of more specific types comes the disadvantage that it becomes increasingly difficult to write programs that are accepted by a type checker, meaning additional proofs may have to be specified manually. Embedded domain-specific languages (EDSLs) can help address this problem by introducing a layer of abstraction over more precise underlying types, allowing domain-specific code to be written in a verified high-level language without imposing additional proof obligations on an application developer. In this paper, we apply this technique to web programming. Using the dependently typed programming language \Idris{}, we show how to use EDSLs to enforce resource usage protocols associated with common web operations such as CGI, database access and session handling. 
We also introduce an EDSL which uses dependent types to facilitate the creation and handling of web forms, reducing the scope for programmer error and possible security implications. \end{abstract} \category{D.3.2}{Programming Languages}{Language Classifications---Applicative (functional) Languages} % general terms are not compulsory anymore, % you may leave them out %\terms %term1, term2 \keywords Dependent Types, Web Applications, Verification %keyword1, keyword2 \input{introduction} \input{effects} \input{protocols} \input{forms} \input{messageboard} \input{conclusion} %\appendix %\section{Appendix Title} %This is the text of the appendix, if you need one. %----------------------------- %----------------------------- \acks This work has been supported by the Scottish Informatics and Computer Science Alliance (SICSA) and the EPSRC. We would like to thank the contributors to the \idris{} language, especially the authors of the original \texttt{Network.Cgi} and \texttt{SQLite} libraries. We are very grateful to Peter Thiemann and the anonymous reviewers for their insightful and constructive comments and suggestions. % SICSA / EPSRC (grant number? % #idris % Idris contributors, in particular Melissa for the SQLite bindings and whoever wrote Network.Cgi % We recommend abbrvnat bibliography style. \bibliographystyle{plainnat} % The bibliography should be embedded for final submission. \bibliography{refs} %\begin{thebibliography}{} %\softraggedright % %\bibitem[Smith et~al.(2009)Smith, Jones]{smith02} %P. Q. Smith, and X. Y. Jones. ...reference text... % %\end{thebibliography} \end{document} % Revision History % -------- ------- % Date Person Ver. Change % ---- ------ ---- ------ % 2013.06.29 TU 0.1--4 comments on permission/copyright notices
{ "alphanum_fraction": 0.6895132427, "avg_line_length": 35.3670886076, "ext": "tex", "hexsha": "4798abcacf6f7c7211b47a87977955795133f90b", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2019-12-13T00:33:52.000Z", "max_forks_repo_forks_event_min_datetime": "2016-05-21T12:02:25.000Z", "max_forks_repo_head_hexsha": "735d0fc15cd0e95d51ce93bcbbaa1757dca2e8c1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "idris-hackers/IdrisWeb", "max_forks_repo_path": "paper/main.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "735d0fc15cd0e95d51ce93bcbbaa1757dca2e8c1", "max_issues_repo_issues_event_max_datetime": "2016-05-18T02:02:14.000Z", "max_issues_repo_issues_event_min_datetime": "2015-12-05T19:43:15.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "idris-hackers/IdrisWeb", "max_issues_repo_path": "paper/main.tex", "max_line_length": 381, "max_stars_count": 80, "max_stars_repo_head_hexsha": "735d0fc15cd0e95d51ce93bcbbaa1757dca2e8c1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "idris-hackers/IdrisWeb", "max_stars_repo_path": "paper/main.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-07T02:16:48.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-09T16:44:34.000Z", "num_tokens": 1359, "size": 5588 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % LaTeX Template: Curriculum Vitae % % Source: http://www.howtotex.com/ % Feel free to distribute this template, but please keep the % referal to HowToTeX.com. % Date: July 2011 % % Modified by Nicholas Wilde, 0x08b7d7a3 % Source: https://github.com/nicholaswilde/curriculum-vitae % Date: March 2021 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[paper=letter,fontsize=11pt]{scrartcl} % KOMA-article class \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage[protrusion=true,expansion=true]{microtype} \usepackage{amsmath,amsfonts,amsthm} % Math packages \usepackage{graphicx} % Enable pdflatex \usepackage[svgnames]{xcolor} % Colors by their 'svgnames' \usepackage{geometry} % Saving trees \headheight=-20px \headsep=0px \marginparwidth=0px \textheight=760px \textwidth=480px \hoffset=-33px \usepackage{url} \usepackage[hidelinks]{hyperref} \usepackage{tgtermes} \usepackage{tgbonum} \frenchspacing % Better looking spacings after periods \pagestyle{empty} % No pagenumbers/headers/footers %%% Custom sectioning (sectsty package) %%% ------------------------------------------------------------ \usepackage{sectsty} %\sectionfont{ % \usefont{OT1}{phv}{b}{n}% % bch-b-n: CharterBT-Bold font % \sectionrule{0pt}{0pt}{-5pt}{3.5pt}} \sectionfont{% % Change font of \section command \usefont{OT1}{qbk}{bx}{n}% % qbk-b-n: TeX Gyre Bonum Bold \sectionrule{0pt}{0pt}{-8pt}{3.5pt}} %%% Macros %%% ------------------------------------------------------------ \newlength\spacebox \settowidth\spacebox{8888888888} % Box to align text \newcommand\sepspace{\vspace*{1em}} % Vertical space macro \definecolor{dark-grey}{gray}{0.15} % TeX Gyre Bonum bold \newcommand\BonumBold[1]{\usefont{OT1}{qbk}{bx}{n} #1} % TeX Gyre Bonum \newcommand\Bonum[1]{\usefont{OT1}{qbk}{m}{n} #1} % TeX Gyre Termes \newcommand\Termes[1]{\usefont{OT1}{qtm}{m}{n} #1} \newcommand\MyName[1]{ % Name \Huge \BonumBold{\hfill #1} \par \normalsize \normalfont} \newcommand\MySlogan[1]{ % Slogan (optional) \large \Termes{\hfill \textit{#1}} \par \normalsize \normalfont} \newcommand\NewPart[1]{\section*{\lowercase{#1}}} %\newcommand{\DeobfsAddr}[6]{{#1}{#5}{#4}{#3}{#2}{#6}} \newcommand\PersonalEntry[2]{ \noindent\hangindent=2em\hangafter=0 % Indentation \parbox\spacebox{ % Box to align text \textit{#1}} % Entry name (address, email, etc.) 
\hspace{2em}{\color{dark-grey}\footnotesize #2 }\par} % Entry value \newcommand\SkillsEntry[2]{ % Same as \PersonalEntry \noindent\hangindent=2em\hangafter=0 % Indentation \parbox\spacebox{ % Box to align text \textit{#1}} % Entry name \parbox[t][2.5em]{12.5cm}{% \noindent\hangindent=30px\hangafter=0{% \footnotesize #2}}% %Entry value \normalsize \par} \newcommand\EducationEntry[4]{ \noindent \textbf{#1} \hfill % Study \colorbox{Black}{% \makebox(100,10){% \color{White}\textbf{#2}}} \par % Duration \noindent \textit{#3} \par % School \noindent\hangindent=2em\hangafter=0 {% \color{dark-grey}% \Bonum{\footnotesize #4}}% % Description \normalsize \par} \newcommand\WorkEntry[4]{ % Same as \EducationEntry \noindent \textbf{#1} \hfill % Jobname \colorbox{Black}{% \makebox(100,10){% \color{White}\textbf{#2}}} \par % Duration \noindent \textit{#3} \par % Company \noindent\hangindent=2em\hangafter=0{% \color{dark-grey}% \Bonum{\footnotesize #4}}% % Description \normalsize \par} %%% Begin Document %%% ------------------------------------------------------------ \begin{document} % you can upload a photo and include it here... %\begin{wrapfigure}{l}{0.5\textwidth} % \vspace*{-2em} % \includegraphics[width=0.15\textwidth]{photo} %\end{wrapfigure} \MyName{Nicholas Wilde} \MySlogan{r\'esum\'e} %%% Personal details %%% ------------------------------------------------------------ \NewPart{Personal details}{} \PersonalEntry{Profile} {\href{https://nicholaswilde.io}{https://nicholaswilde.io}} \PersonalEntry{LinkedIn} {\href{http://www.linkedin.com/in/nicholaswilde}{http://www.linkedin.com/in/nicholaswilde}} \PersonalEntry{Email} {\href{mailto:[email protected]}{[email protected]}} %%% Work experience %%% ------------------------------------------------------------ \NewPart{Professional experience highlights}{} \hspace{0.6cm} \textit{Please request \href{https://github.com/nicholaswilde/curriculum-vitae}{my full curriculum vitae} for more expansive details.} \WorkEntry{Director, Automation Design}{2017 -- present} {Applied Medical - Rancho Santa Margarita, CA}{Leading a team of 26 mechanical engineers to design and troubleshoot automated equipment that range from laser cutters to inspection machines to assembly machines.} \sepspace \WorkEntry{Manager, Automation Design}{2011 -- 2017} {Applied Medical - Rancho Santa Margarita, CA}{Lead a team of 6 mechanical engineers to design and troubleshoot automated equipment.} \sepspace \WorkEntry{Design Engineer}{2010 -- 2011} {Applied Medical - Rancho Santa Margarita, CA}{Designed and troubleshot automated equipment.} \sepspace \WorkEntry{Technical Officer}{2009 -- 2010}{CRANN - Dublin, Ireland}{Worked with Principal Investigators to help support laboratories by maintaining equipment and managing inventory.} \sepspace \WorkEntry{Mechanical Engineer, MTS-I}{2007 -- 2008}{Panasonic Avionics Corp. - Lake Forest, CA}{Designed sheet metal enclosures for in-flight entertainment systems that had to meet FAA vibratory and thermal regulations.} %%% Education %%% ------------------------------------------------------------ \NewPart{Education}{} \EducationEntry{BSc. 
Mechanical Engineering}{2002 -- 2006}{California Polytechnic State University - San Luis Obispo}{Received a Bachelor of Science from the learn by doing school, Cal Poly SLO, with a general concentration.} \sepspace \EducationEntry{Drivetrain Team}{2004 -- 2006}{SAE Supermileage - Cal Poly SLO}{Helped revive the club from a 10 year hiatus and achieved first place at the 2007 Shell Eco-marathon Americas.} %%% Skills %%% ------------------------------------------------------------ %\NewPart{Skills}{} %% In order of how willing I am to write them: %\SkillsEntry{Languages}{\textsc{Rust}, \textsc{C/C++}, % \textsc{Python}, \textsc{Swift}, \textsc{Julia}, % \textsc{Go}, \textsc{x86/ARM/MIPS ASMs}, \textsc{Common Lisp}, \textsc{Bash}, \textsc{Lua}, % \textsc{Javascript}, HTML5, CSS3, and \textsc{Java}} %% In no particular order and clearly incomplete because it says nothing about %% bicycle mechanics, asciiart, or bytebytes: %\SkillsEntry{Software}{Low- to High-Level Cryptographic Design, Cryptographic Engineering, % Network Programming, Asynchronous Programming, Distributed Systems Design, % Misuse-Resistant API Design, Security Best Practices} %% %% x=''if(t%2)else'';python3 -c''[print(t>>15&(t>>(2$x 4))%(3+(t>>(8$x 11))%4)\ %% +(t>>10)|42&t>>7&t<<9,end='')for t in range(2**20)]''|aplay -c2 -r4 %% %% ↑↑ CLEARLY MY GREATEST SKILL ↑↑ %%% References %%% ------------------------------------------------------------ %\NewPart{References}{} %\hspace{0.6cm} \textit{Available upon request} \end{document}
{ "alphanum_fraction": 0.6186221186, "avg_line_length": 39.8041237113, "ext": "tex", "hexsha": "1199203998327a52b4300b52e37f548734d97181", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a1903e88ce8103b60bb530f7f0ea6a14eeb2447c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nicholaswilde/curriculum-vitae", "max_forks_repo_path": "resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a1903e88ce8103b60bb530f7f0ea6a14eeb2447c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nicholaswilde/curriculum-vitae", "max_issues_repo_path": "resume.tex", "max_line_length": 225, "max_stars_count": null, "max_stars_repo_head_hexsha": "a1903e88ce8103b60bb530f7f0ea6a14eeb2447c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nicholaswilde/curriculum-vitae", "max_stars_repo_path": "resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2214, "size": 7722 }
\chapter{Low-Energy Structures and Near-Zero Energy Structures} \label{chap:LES-NZES} Over the past four chapters we have built up a semiclassical theory of tunnel ionization, based on the Analytical $R$-Matrix framework, and we have shown how to tackle several difficulties that come up within it. In this chapter we turn to one of the crucial concepts that emerged as a harsh test of our integration-path toolset -- soft recollisions, where multiple sets of branch cuts and closest-approach times converge and interact in ways that required additional tooling -- and we relate them to specific features in experimental photoelectron spectra known as Low Energy Structures (LES) and (Near-)Zero Energy Structures (NZES). After reviewing in section~\ref{sec:LES-review} the known experimental features of these structures, and the existing theoretical explanations for them, we will show in section~\ref{sec:ARM-soft-recollisions} that the soft recollisions we met in chapter~\ref{chap:quantum-orbits} give rise to photoelectron peaks that correspond to the LES, and which have a dynamically equivalent analogue at much lower energy that is consistent with the NZES. We then show, in section~\ref{sec:classical-soft-recollisions}, that these trajectories also admit a simple classical description, whose scaling can be analysed easily to suggest that the NZES should become easier to probe using target species with higher ionization potential. The material in this chapter has appeared previously in references \begin{enumerate} \item[{\hypersetup{citecolor=black}\citealp{Pisanty_slalom_2016}}.] \textsc{E.~Pisanty and M.~Ivanov}. \newblock Slalom in complex time:\ emergence of low-energy structures in tunnel ionization via complex time contours. \newblock \href{http://dx.doi.org/10.1103/PhysRevA.93.043408}{ \emph{Phys. Rev. A} \textbf{93} no.~4, p.~043\,408 (2016)}. \newblock \href{http://arxiv.org/abs/1507.00011}{{arXiv}:1507.00011}. \item[{\hypersetup{citecolor=black}\citealp{Pisanty_kinematic_2016}}.] \textsc{E.~Pisanty and M.~Ivanov}. \newblock Kinematic origin for near-zero energy structures in mid-{IR} strong field ionization. \newblock \href{http://dx.doi.org/10.1088/0953-4075/49/10/105601}{ \emph{J. Phys. B: At. Mol. Opt. Phys.} \textbf{49} no.~10, p.~105\,601 (2016)}. \end{enumerate} \section{Low-Energy Structures in tunnel ionization} \label{sec:LES-review} As we saw in the Introduction, the basics of ionization in strong, long-wavelength fields were mostly worked out in the 1960s and 1970s by Keldysh, Faisal and Reiss, and then further refined by Popov, Perelomov and Terent'ev. Collectively, these theories describe ionization in regimes of high intensity, with the Keldysh adiabaticity parameter $\gamma=\kappa \omega /F = \sqrt{I_p/2U_p}$ distinguishing what is known as the multiphoton regime at $\gamma \gg 1$ from the tunnelling regime at $\gamma \ll 1$. Most importantly, the expectation from this background is that at longer wavelengths and stronger fields, as $\gamma$ becomes smaller, the tunnelling picture becomes more and more appropriate, and its predictions become more and more accurate. In terms of the photoelectron spectrum, this describes a smooth gaussian envelope, modulated by discrete rings at energies $E_n=n\omega -I_p - U_p$ coming from absorption of discrete numbers of photons or, alternatively, from the interference of wavepackets emitted at different cycles of the laser pulse.
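For later reference, the equivalence of the two forms quoted above for the Keldysh parameter follows directly from the standard atomic-unit definitions of the characteristic bound-state momentum, $\kappa=\sqrt{2I_p}$, and of the ponderomotive (quiver) energy, $U_p=F^2/4\omega^2$, with $F$ the field amplitude and $\omega$ the laser frequency:
\[
\gamma \;=\; \frac{\kappa\omega}{F} \;=\; \frac{\sqrt{2I_p}\,\omega}{F}
\;=\; \sqrt{\frac{4\omega^2}{F^2}\,\frac{I_p}{2}}
\;=\; \sqrt{\frac{I_p}{2U_p}}.
\]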
In most long-wavelength experiments, though, the spacing $\omega$ between the different rings becomes smaller, and eventually they wash out: each atom emits electrons redshifted by the ponderomotive potential $U_p$ coming from a Stark shift in the continuum~\cite{muller_ponderomotive-shift_1983}, and this intensity-dependent shift can vary across the laser focus, averaging out the rings and leaving a smooth distribution that follows the SFA envelope. Given this expectation of progressively smoother electron distributions at longer wavelengths, it came as a surprise when, in 2009, experiments at $\SI{1}{\micro\meter}$ and longer wavelengths observed a large spike in photoelectrons at very low energies~\cite{blaga_original_LES, faisal_ionization_surprise}, as shown in \reffig{f6-blaga-original-figure}. Quickly christened Low-Energy Structures (LES), these spikes form in energy regions much smaller than the usual scales considered in such experiments: in the conditions of \reffig{f6-blaga-original-figure}, the direct electrons have a typical energy scale of $2U_p\approx\SI{110}{\electronvolt}$ and the rescattered electrons are typically at the order of $10U_p\approx\SI{550}{\electronvolt}$, whereas the LES has a higher edge $E_\mathrm{H}$ at about $\SI{5}{\electronvolt}$. \begin{figure}[thb] \centering \includegraphics[scale=0.6]{6-LES/Figures/figure6A.jpg} \hspace{2mm} \caption[ Initial observation of Low-Energy Structures by C.I. Blaga et al. ]{ Detection of low-energy structures by Blaga and coworkers, showing a large spike at low electron energies that is not predicted by the Keldysh-Faisal-Reiss (KFR) strong-field approximation treatment. The results are shown for atomic argon and molecular nitrogen and hydrogen, under a $\SI{2}{\micro\meter}$ field of intensity $\SI{1.5e14}{W/cm^2}$, having a Keldysh parameter of approximately $\gamma \approx 0.36$. Figure excerpted from \citer{blaga_original_LES}. } \label{f6-blaga-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-blaga-original-figure} reprinted by permission from Macmillan Publishers Ltd: % {\hypersetup{urlcolor=black}% \href{http://www.nature.com/nphys}{% \emph{Nature Phys.} \textbf{5}, p. 335 © 2009}. }} %% As per NPG T&Cs Moreover, this upper edge was tested from the beginning to scale, roughly, as $\frac{1}{10}U_p$, which points to a dynamical origin for the structures~\cite{ faisal_ionization_surprise, agostini_ionization-review_2012}, a fact that gets completely missed by the SFA treatment. On the other hand, numerical simulations by Blaga and coworkers~\cite{blaga_original_LES,catoire_angular-distributions_2009} also showed that the structure can be reproduced within numerical time-dependent Schrödinger equation (TDSE) simulations in the single-active-electron approximation, so the problem becomes one of finding a suitable mechanism behind the structures. The discovery of the LESs sparked a significant effort on the part of both theory and experiment, to better characterize the observed features of the structures and to produce a solid understanding of the mechanisms behind them. Experimentally, the LES was quickly joined by a wealth of intricate structures at that energy range and below, known as Very Low Energy Structures~(VLES), and subsequently yet another peak known as the \mbox{(Near-)Zero} Energy Structure~(NZES). 
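To fix orders of magnitude for the discussion that follows, it is worth spelling out how the energy scales quoted above arise from the laser parameters of \reffig{f6-blaga-original-figure}. Using the standard conversion $U_p[\mathrm{eV}]\approx 9.33\, I[10^{14}\,\mathrm{W/cm^2}]\, \lambda^2[\mu\mathrm{m}^2]$, a $\SI{2}{\micro\meter}$ field at $\SI{1.5e14}{W/cm^2}$ has $U_p\approx\SI{56}{\electronvolt}$, which puts $2U_p$ and $10U_p$ at the ${\approx}\SI{110}{\electronvolt}$ and ${\approx}\SI{550}{\electronvolt}$ scales mentioned above, while for argon ($I_p=\SI{15.76}{\electronvolt}$) the Keldysh parameter comes out as $\gamma=\sqrt{I_p/2U_p}\approx 0.37$, in line with the value quoted in \reffig{f6-blaga-original-figure}. The LES, by contrast, sits at energies of only a few electronvolts, well below all of these scales.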
On the theory side, there is a strong consensus that most structures in this range are directly caused by the effect of the ionic Coulomb field, especially when acting on the soft recollisions we explored in chapter~\ref{chap:quantum-orbits} -- trajectories with a turning point close to the ionic core. In this chapter we will begin by exploring, in section \ref{sec:LES-experiment}, the known experimental features of the LES and associated structures, before moving on in section \ref{sec:LES-theory} to review the current models for their origin, as well as the proposed explanation for the NZES in section~\ref{sec:NZES-theory}. We will then move on, in section \ref{sec:ARM-soft-recollisions}, to the role of soft recollisions in the ARM theory we developed in the previous chapters, and how they impact the ARM predictions on photoelectron spectra; specifically, we will show how the LES peak arises from soft recollisions within ARM theory, and that it is mirrored by a second peak that is consistent with the NZES. Finally, in section \ref{sec:classical-soft-recollisions}, we will distil the ARM results into two paired sets of classical trajectories, at the energy ranges of the LES and NZES, with radically different scaling properties, which then suggests avenues for further testing of the connection. \subsection{Experimental observations of low-energy structures} \label{sec:LES-experiment} On the experimental side, the discovery of the LES was rather quickly followed by the observation of more structures at even lower energy, which were soon dubbed Very Low Energy Structures (VLES)~\cite{VLES_initial, VLES_characterization}. These detections, shown in Figs.~\ref{f6-quan-original-figure} and \ref{f6-wu-original-figure}, revealed a second set of peaks below the original LES, showing that there was still more dynamics to be uncovered in the low-energy region of ionization by mid-IR pulses. The initial observations were relatively noisy, but in addition to the spike found by Blaga et al., which corresponds to the gentle hump at ${\sim}\SI{3}{\electronvolt}$ for the black curve at $\SI{2}{\micro\meter}$ in \reffig{f6-quan-original-figure-a}, for example, there was clear evidence of a second structure at lower energy, perhaps even more marked than the original LES peak in some cases. Similarly, later measurements \cite{VLES_characterization} examining the structures found them to be universal features, appearing in multiple different noble gases and at a range of intensities and wavelengths, generally in the tunnelling regime of low $\gamma$ (${\sim}0.65$ for neon, ${\sim}0.8$ for krypton, and ${\sim}0.85$ for xenon). Unfortunately, however, the initial experiments had relatively little resolving power on these structures, due to the small volume of data they were able to accumulate. The regimes at $\gamma \sim 1$ and higher have been explored quite thoroughly, and the low-energy set of structures appears in the tunnelling regime where $\gamma = \omega \kappa / F$ is low. This in turn requires a high intensity (which is bounded above by saturation of the sample) or a long wavelength, which is the regime that the LES/VLES experiments explored. \begin{figure}[t!]
\centering \subfigure{\label{f6-quan-original-figure-a}} \subfigure{\label{f6-quan-original-figure-b}} \subfigure{\label{f6-quan-original-figure-c}} \subfigure{\label{f6-quan-original-figure-d}} \subfigure{\label{f6-quan-original-figure-e}} \includegraphics[scale=0.8]{6-LES/Figures/figure6B.png} \caption[ Experimental observation of Very Low Energy Structures by W. Quan et al. ]{ Experimental observation of Very Low Energy Structures~\cite{VLES_initial}, showing in \protect\subref{f6-quan-original-figure-a} the rise of a spike in low-energy electrons in the ionization of xenon at $\SI{8e13}{\watt/\centi\meter^2}$ and wavelengths between $\SI{800}{nm}$ and $\SI{2}{\micro\meter}$. For the longer wavelengths at $\SI{2}{\micro\meter}$ and $\SI{1.5}{\micro\meter}$, shown in \protect\subref{f6-quan-original-figure-c} and \protect\subref{f6-quan-original-figure-e} respectively for two different intensities, two distinct humps (the LES and the VLES) are visible, marked by the dashed lines. Parts \protect\subref{f6-quan-original-figure-b} and \protect\subref{f6-quan-original-figure-d} show classical Monte Carlo simulations for the parameters of \protect\subref{f6-quan-original-figure-a} and \protect\subref{f6-quan-original-figure-c}. Figure excerpted from \citer{VLES_initial}. } \label{f6-quan-original-figure} %\end{figure} % % % % %\begin{figure}[htb] \vspace{2mm} % \centering \subfigure{\label{f6-wu-original-figure-a}} \subfigure{\label{f6-wu-original-figure-b}} \subfigure{\label{f6-wu-original-figure-c}} \subfigure{\label{f6-wu-original-figure-d}} \subfigure{\label{f6-wu-original-figure-e}} \subfigure{\label{f6-wu-original-figure-f}} \includegraphics[scale=0.6]{6-LES/Figures/figure6C.png} \caption[ Characterization of Very Low Energy Structures by C.Y. Wu et al. ]{ Photoelectron energy spectra and longitudinal momentum distributions for the ionization of neon (\protect\subref{f6-wu-original-figure-a},\,\protect\subref{f6-wu-original-figure-d}), krypton (\protect\subref{f6-wu-original-figure-b},\,\protect\subref{f6-wu-original-figure-d}) and xenon (\protect\subref{f6-wu-original-figure-c},\,\protect\subref{f6-wu-original-figure-f}) at the wavelengths and intensities shown~\cite{VLES_characterization}. The VLES appear as the peaks marked with solid arrows, while the LES are marked with dashed arrows. Figure excerpted from \citer{VLES_characterization}. } \label{f6-wu-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-quan-original-figure} reprinted with permission from W. Quan et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevLett.103.093001}{% \emph{Phys. Rev. Lett.} \textbf{103}, 093001 (2009)}. % ©~2009 by the American Physical Society. } } %%% As per APS T&Cs \copyrightfootnote{ \reffig{f6-wu-original-figure} reprinted with permission from C.Y. Wu et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevLett.109.043001}{% \emph{Phys. Rev. Lett.} \textbf{109}, 043001 (2012)}. % ©~2012 by the American Physical Society. } } %%% As per APS T&Cs However, producing intense laser pulses away from the comfort zone around $\SI{800}{\nano\meter}$ afforded by titanium-sapphire laser systems is rather challenging, because to reach the required intensities it is generally necessary to have a very short pulse, and in turn this requires a very broad bandwidth. Generally speaking, there are few laser systems with a bandwidth as broad as titanium-sapphire amplifiers that can produce the required power. 
To reach longer wavelengths, then, most experiments turn to systems that use optical parametric chirped-pulse amplification (OPCPA), where a strong laser pump is used to amplify a lower-frequency pulse by difference-frequency generation. Unfortunately, though, OPCPAs are generally challenged when compared with ti\-ta\-nium-sapphire systems in terms of the repetition rate they are able to produce, and this means that the initial experiments could only collect a limited amount of data which was insufficient for doubly-differential measurements (like angle- and energy-resolved photoelectron spectra) that would help better discern the origin of the structures. This is only a technological problem and not a fundamental limitation, and it was solved over the span of a few years, but it continues to be one of the limitations on what sorts of measurements can be performed in this regime. Once the repetition-rate limitation was overcome, it became possible to obtain multi-dimensional views on the photoelectron momentum distribution~\cite{ dura_ionization_2013}, which began to exhibit evidence of angular variation in the VLES structures, with a hint of a V-shaped structure; this was then confirmed when kinematically complete measurements of the photoelectron momentum distribution were performed~\cite{pullen_kinematically_2014}, taking full advantage of improvements in detector technology in the form of the Cold Target Recoil Ion Momentum Spectroscopy (COLTRIMS) technique implemented in reaction microscope~(ReMi) experiments~\cite{moshammer_ReMi_2003,reaction_microscope}. More specifically, the VLES energy range is associated with a V-shaped structure with its cusp near the zero of momentum, as shown in \reffig{f6-pullen-original-full-spectrum}, which is the dominating feature of the low-energy photoelectron momentum distributions, along with yet another peak at even lower energy. \begin{figure}[htb] \centering \includegraphics[scale=1]{6-LES/Figures/figure6D.jpg} \caption[ Observation of (Near-)Zero Energy Structures by Pullen et al. ]{ Low-energy momentum distribution (in linear and log scale for the longitudinal and transverse momentum, respectively) for unaligned molecular nitrogen ionized by a $\SI{3.1}{\micro\meter}$ field at $\SI{e14}{\watt/\centi\meter^2}$~\cite{ pullen_kinematically_2014}, showing peaks and structures at the LES and VLES ranges, highlighted in the side inset, and an additional peak at even lower photoelectron energy. Figure excerpted from \citer{pullen_kinematically_2014}. } \label{f6-pullen-original-full-spectrum} \end{figure} \copyrightfootnote{ \reffig{f6-pullen-original-full-spectrum} © IOP Publishing. Reproduced with permission. All rights reserved. } %% As per terms in IOP email Upon its discovery, this peak was dubbed a Zero-Energy Structure (ZES), since the centre of the peak is consistent with zero to within the available experimental precision both at the time of its discovery~\cite{pullen_kinematically_2014} and to date. However, as we shall see below, there is reason to suspect that the centre may not be at zero but only close to it, so a much better name for the structure is Near-Zero-Energy Structure (NZES), which we will use throughout and as a synonym for ZES, and which better reflects the fact that in physics it is rather rare to have values exactly at zero instead of merely consistent with it. 
The angle-resolved photoelectron spectra of Refs.~\cite{dura_ionization_2013} and \cite{pullen_kinematically_2014}, as well as later publications, have several interesting features worth emphasizing. The first is the relatively trivial observation that, because of the volume element of the cylindrical coordinates being employed, it is naturally harder for electrons to fall exactly on-axis (at $\pt=0$) and on the volume element around it, which explains the low detection probability on the lower part of \reffig{f6-pullen-original-full-spectrum}. This effect is also responsible for making the NZES form as a distinct spot separate from the axis, even though the structure is consistent with having its centre at the origin of the momentum plane; it also makes the high detection counts at the NZES spot, and above it, all the more remarkable, particularly when compared to those at similar~$\pt$ but~higher $\pp$. The second important feature is that, in experiments performed in the Reaction Microscope detector configuration, the VLES peak which would be expected at the $\SI{100}{\milli\electronvolt}$ range essentially vanishes. This is due to the fact that the previous observations were performed using time-of-flight (TOF) electron spectrometers~\cite{VLES_initial, VLES_characterization}, which have a very narrow acceptance cone of about $\SI{6}{\degree}$ centred on the laser polarization, and this leaves out a large fraction of the produced photoelectrons and the features in their distribution. This acceptance cone is shown as a dashed white line in \reffig{f6-pullen-original-full-spectrum}, and the electrons shown in Figs.~\ref{f6-quan-original-figure} and \ref{f6-wu-original-figure} all originate \textit{below} the dashed line. On the other hand, if the full three-dimensional data is post-selected to only the electrons within that acceptance angle, the VLES peaks reappear~\cite[p.~5]{pullen_kinematically_2014}. This means, then, that the VLES as originally reported are not quite an experimental artefact, but the initial detections are certainly only a very partial look at much richer structures. Finally, it is important to remark that the upper limit of the electron spectrum at around $\pt\approx\SI{0.3}{\au}$ is an artefact of the detection apparatus, which is configured for low-energy electrons at high resolution, and therefore leaves out higher momenta. The first of these features also points to an important aspect of the photoelectron spectra in the low-energy region, which is the fact that the transverse momentum distribution will vary wildly for different longitudinal momenta. Indeed, the experimental transverse distributions, shown in \reffig{f6-pullen-original-transverse-spectrum}, show large differences between photoelectron spectra taken over broad $\pp$ ranges and the slice at $|\pp|<\SI{0.02}{\au}$, where the NZES congregates. In this view, the NZES is clearly visible as a spike of width $\Delta\pt\approx\SI{0.05}{\au}$ (though this also includes some electrons from the V-like structure of the VLES). Since the original detection of the NZES, several improvements in the measurement stability and resolution have enabled better characterizations of the photoelectron distribution structures over momentum space~\cite{ZES_paper}.
This includes a series of additional LES peaks, each with a distinct structure at relatively constant transverse momentum, and with progressively smaller longitudinal momentum, as shown in \reffig{f6-wolter-original-figure} and subsequently refined~\cite{Wolter_PRX}, as shown in \reffig{f6-wolter-prx-original-figure}. These features were indeed expected, as we shall show below, coming from different members of a family of trajectories. \begin{figure}[htb] \centering \includegraphics[scale=1.2]{6-LES/Figures/figure6E.jpg} \caption[ Experimental transverse photoelectron momentum spectra at different longitudinal momenta, observed by Pullen et al.]{ Transverse photoelectron momentum distributions at different longitudinal momenta for the data displayed in \reffig{f6-pullen-original-full-spectrum}. Integrating over a broad $\pp$ range yields a cusp with a smooth drop-off, whereas a smaller range around zero longitudinal momentum brings out a sharp peak at the origin coming out of a gaussian background. Figure excerpted from \citer{pullen_kinematically_2014}. } \label{f6-pullen-original-transverse-spectrum} \end{figure} \copyrightfootnote{ \reffig{f6-pullen-original-transverse-spectrum} © IOP Publishing. Reproduced with permission. All rights reserved. } \begin{figure}[h!t] \vspace{2mm} \centering \subfigure{\label{f6-wolter-original-figure-a}} \subfigure{\label{f6-wolter-original-figure-b}} \subfigure{\label{f6-wolter-original-figure-c}} \subfigure{\label{f6-wolter-original-figure-d}} \includegraphics[scale=0.7]{6-LES/Figures/figure6F.png} \caption[ Measured and CTMC high-resolution photoelectron momentum maps showing LES, VLES and ZES structures, observed by Wolter et al. ]{ Measured momentum maps for the ionization of argon in a $\SI{3.1}{\micro\meter}$ 6.5-cycle pulse at $\SI{9e13}{\watt/\centi\meter^2}$ \cite{ZES_paper}, showing the V-shaped VLES and the NZES peak, as well as two distinct LES structures, in both linear (left) and logarithmic (right) transverse momentum scales. Most of the features are reasonably well reproduced by a classical trajectory Monte Carlo simulation (bottom row). Figure excerpted from \citer{ZES_paper}. } \label{f6-wolter-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-wolter-original-figure} reprinted with permission from B. Wolter et al., {%
\hypersetup{urlcolor=black}%
\href{http://dx.doi.org/10.1103/PhysRevA.90.063424}{%
\emph{Phys.\ Rev.\ A} \textbf{90}, 063424 (2014)}. %
© 2014 by the American Physical Society. } } %% As per APS T&Cs
As of this writing, the measurements in \citer{Wolter_PRX}, as showcased for example in \reffig{f6-wolter-prx-original-figure}, essentially represent the state of the art in the experimental observations of the low-energy region of above-threshold ionization in mid-IR fields. In particular, there is clear evidence of multiple LES features, well-resolved V-shape VLES structures, and strong NZES peaks, though the information on the latter is mostly limited to its presence. \begin{figure}[htb] \centering \subfigure{\label{f6-wolter-prx-original-figure-a}} \subfigure{\label{f6-wolter-prx-original-figure-b}} \includegraphics[width=\textwidth]{6-LES/Figures/figure6G.png} \caption[ Measured photoelectron momentum map showing multiple members of the LES series, observed by Wolter et al.
]{ Momentum map \protect\subref{f6-wolter-prx-original-figure-a} for the ionization of xenon in a $\SI{3.1}{\micro\meter}$ pulse of intensity $\SI{4e13}{\watt/\centi\meter^2}$ \cite{Wolter_PRX}, showing a clear LES but a slightly muddled VLES V-shape and NZES peak. Line-outs at several different transverse momenta $\pt$ produce longitudinal profiles with distinct LES peaks shown inside dashed circles in \protect\subref{f6-wolter-prx-original-figure-b}. The gray region in \protect\subref{f6-wolter-prx-original-figure-b} is a rough approximation of the features excluded by the TOF acceptance cone, shown as the white dashed line in \protect\subref{f6-wolter-prx-original-figure-a}, as in \reffig{f6-pullen-original-full-spectrum}. The data have been symmetrized about $\pp=0$. Figure excerpted from \citer{Wolter_PRX}. } \label{f6-wolter-prx-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-wolter-prx-original-figure} reused under its {%
\hypersetup{urlcolor=black}%
\href{https://creativecommons.org/licenses/by/3.0/}{CC BY licence}, from B. Wolter et al., \href{http://dx.doi.org/10.1103/PhysRevX.5.021034}{%
\emph{Phys.\ Rev.\ X} \textbf{5}, 021034 (2015)}. %
} }
\subsection{Theoretical explanations for low-energy structures} \label{sec:LES-theory} In terms of the available theoretical explanations for the structures in the low-energy region of mid-{IR} strong-field ionization, the field is rather more varied and offers a less linear story. There is a general consensus that the LES is caused by the Coulomb potential acting on the mostly classical motion of the electron, with a central role played by soft recollisions. On the other hand, there are several alternative mechanisms to go from this class of trajectories to peaks in the photoelectron spectrum.
%%% TDSE
As we saw earlier, from its initial detection the LES was reproducible from within TDSE simulations~\cite{blaga_original_LES,catoire_angular-distributions_2009}, and this has been carried forward with TDSE calculations showing the VLES, in their original sense as a single peak under a constrained acceptance angle~\cite{VLES_characterization}. In addition to this there have been some attempts at further exploration of the low-energy region within the TDSE~\cite{telnov_TDSE_with_and_without_Coulomb, lemell_classicalquantum_2013}, but generally the consensus is that those features, and most markedly the LES, are well explained by the TDSE and therefore captured completely by the single-atom Schrödinger equation. Unfortunately, this yields relatively little insight into the origin of the structures, and most of the effort has been directed at building simplified models that explain them.
%%% Overview
These efforts largely fall along three lines of inquiry. On the classical side, one can study the global properties of the classical propagation map, and one can also use statistical Monte Carlo methods to predict photoelectron spectra. On a more explicitly quantum side, there is the Coulomb-Corrected SFA (CCSFA), which we discussed in the Introduction and in chapter~\ref{chap:quantum-orbits}, and which embeds the classical trajectory dynamics directly within the quantum SFA framework. Finally, a class of methods known as the Improved SFA (ISFA) includes a single term of interaction with the core in a Born series and then performs an expanded SFA treatment.
This varied set of methods generally agrees on the causes for the LES, in terms of the classes trajectories involved, but they provide multiple interpretations for how those trajectories translate into peaks in photoelectron spectra. %%% CTMC One prominent feature of this set of explanations is that much of the structure that is present can be explained rather well using only classical trajectories, building on electron populations that are built up throughout the laser cycle as ionization bursts given by the quasi-static ADK tunnelling probability. This is known as the Classical Trajectory Monte Carlo (CTMC) method: a large number of electron trajectories are randomly generated, weighed by the ADK rate, and they are then propagated until the end of the pulse using the newtonian equation of motion in the laser field plus the (effective) ionic potential. In essence, the Schrödinger dynamics of the photoelectron are replaced by Liouvillian statistical mechanics with an ADK source term. This approach is able to reproduce the LES and VLES peaks \cite{CTMC1, CTMC2, zhi_Coulomb-LES_2014, lemell_lowenergy_2012} and, moreover, it is able to dissect those structures by selecting the electrons that do end up inside the relevant structures and exploring their characteristics in terms of ionization time~\cite{VLES_characterization, zhi_Coulomb-LES_2014}, angular momentum~\cite{lemell_lowenergy_2012, lemell_classicalquantum_2013}, and overall shape~\cite{lemell_classicalquantum_2013, xia_near-zero-energy_2015}, a level of access into the internal details of the components of a simulation that is denied to TDSE calculations. \newlength{\figuresixHheight} \setlength{\figuresixHheight}{5.5cm} \begin{figure}[hb] \centering \subfigure{\label{f6-xia-original-figure-a}} \subfigure{\label{f6-xia-original-figure-b}} \subfigure{\label{f6-xia-original-figure-c}} \subfigure{\label{f6-xia-original-figure-d}} \subfigure{\label{f6-xia-original-figure-e}} \subfigure{\label{f6-xia-original-figure-f}} \subfigure{\label{f6-xia-original-figure-g}} \subfigure{\label{f6-xia-original-figure-h}} \subfigure{\label{f6-xia-original-figure-i}} \subfigure{\label{f6-xia-original-figure-j}} \subfigure{\label{f6-xia-original-figure-k}} \subfigure{\label{f6-xia-original-figure-l}} \begin{tabular}{ccc} \includegraphics[height=\figuresixHheight]{6-LES/Figures/figure6Ha.png} & \hspace{0mm} & \includegraphics[height=\figuresixHheight]{6-LES/Figures/figure6Hb.png} \end{tabular} \caption[ CTMC simulations of VLES V-shaped structure and NZES-like peak, performed by Q.Z. Xia et al. ]{ Photoelectron momenta for argon ionized by a $\SI{e14}{\watt/\centi\meter^2}$ laser at $\SI{3.1}{\micro\meter}$, as obtained via CTMC simulations using \protect\subref{f6-xia-original-figure-a} no Coulomb potential, \protect\subref{f6-xia-original-figure-b} the full Coulomb interaction, and \protect\subref{f6-xia-original-figure-c} Coulomb interactions with a trajectory interference term, as compared to the experimental data from \citer{dura_ionization_2013} shown in \protect\subref{f6-xia-original-figure-d}. The results can be divided into zones and explored with the Coulomb interaction turned on and off, as shown for the different zones of \protect\subref{f6-xia-original-figure-d} over longitudinal momentum (\hyperref[f6-xia-original-figure-e]{e}-\hyperref[f6-xia-original-figure-h]{h}) and kinetic energy (\hyperref[f6-xia-original-figure-i]{i}-\hyperref[f6-xia-original-figure-l]{l}). Figure excerpted from \citer{xia_near-zero-energy_2015}. 
} \label{f6-xia-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-xia-original-figure} adapted (labels (\hyperref[f6-xia-original-figure-e]{e}-\hyperref[f6-xia-original-figure-l]{l}) shifted for clarity), under its {% \hypersetup{urlcolor=black}% \href{https://creativecommons.org/licenses/by/3.0/}{CC BY licence}, from Q.Z. Xia et al., \href{http://dx.doi.org/10.1038/srep11473}{% \emph{Sci.\ Rep.}~\textbf{5}, 11473 (2015)}. % } } Further, CTMC simulations can also reproduce much of the V-shaped VLES and a NZES-like peak in the photoelectron momentum spectrum~\cite{xia_near-zero-energy_2015}, shown in \reffig{f6-xia-original-figure-b}, and remarkably close to the equivalent experimental result from \citer{dura_ionization_2013} shown in \reffig{f6-xia-original-figure-d}. In addition to this, CTMC results have conclusively shown that the Coulomb field of the remaining ion is essential to the emergence of LES and related structures~\cite{zhi_Coulomb-LES_2014, xia_near-zero-energy_2015}, as shown for example in \reffig{f6-xia-original-figure-a}: here the ionic potential is completely ignored after the ADK tunnelling stage, completely eliminating the features of \reffig{f6-xia-original-figure-b}. (Similar differences are exhibited in \reffig{f6-xia-original-figure-e}-\subref{f6-xia-original-figure-l}.) In addition to the statistical look provided by the CTMC method, the classical mechanics of post-tunnelling electrons can also provide deeper, structural looks at what causes the LES peaks, by examining the dynamical maps of the newtonian evolution, from the conditions after ionization to the electron momenta after several laser periods~\cite{Rost_PRL, Rost_JPhysB}. Under this lens, the LES peak is caused by dynamical focusing: the bunching together of electrons that come from a wide array of initial conditions into a relatively small interval, as shown in \reffig{f6-kastner-dynamical-focusing}. \begin{figure}[htbp] \centering \subfigure{\label{f6-kastner-original-figure-a}} \subfigure{\label{f6-kastner-original-figure-b}} \subfigure{\label{f6-kastner-original-figure-c}} \subfigure{\label{f6-kastner-original-figure-d}} \includegraphics[width=\textwidth]{6-LES/Figures/figure6I.png} \caption[ Dynamical maps for classical trajectories showing `finger'-like structures and the associated photoelectron bunching, as calculated by A. Kästner~et~al. ]{ Dynamical maps for a classical electron released into a monochromatic field $A'=A_0\sin(\omega t)$ under the influence of a Coulomb potential~\cite{Rost_PRL}. The electron is released with ADK rates at a time $t'$, indexed by the vector potential $A'$ at ionization, with initial transverse momentum $p'_\rho$ and zero transverse velocity. The colour scale shows the electron longitudinal momentum $p_z$ as a function of the initial conditions at one \protect\subref{f6-kastner-original-figure-a}, two \protect\subref{f6-kastner-original-figure-b} and three \protect\subref{f6-kastner-original-figure-c} laser periods after ionization. The `fingers' in \protect\subref{f6-kastner-original-figure-b} and \protect\subref{f6-kastner-original-figure-c} represent depletion caused by a soft recollision, and this is accompanied by peaks in the spectrum \protect\subref{f6-kastner-original-figure-d} caused by dynamical focusing: the crossed, looping contours around the saddle points of $p_z(p_\rho',A')$, where the electrons congregate. Figure excerpted from \citer{Rost_PRL}. 
} \label{f6-kastner-dynamical-focusing} \end{figure} \copyrightfootnote{ \reffig{f6-kastner-dynamical-focusing} reprinted with permission from A. Kästner et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevLett.108.033201}{% \emph{Phys.\ Rev.\ Lett.} \textbf{108}, 033201 (2012)}. % ©~2012 by the American Physical Society. } } %% As per APS T&Cs This occurs at momenta close to (but not exactly at) the soft-recollision momenta, for which the electron returns to the core, $\vbr(t_r) \approx 0$, for a close interaction with the ion, and moreover it does so with a very low velocity, $\vbv(t_r)\approx 0$, as shown in \reffig{f6-rost-soft-recollisions}. For the full classical trajectories, the soft recollision itself is often `burned' out of the spectrum and sent to radically different momenta, shown as the `fingers' of Figs.~\ref{f6-kastner-original-figure-b} and \subref{f6-kastner-original-figure-c}, but the strong effect on the momentum-momentum mapping causes spots nearby to fold a flat initial distribution into a peak at the zeroes of the derivative of the mapping. Moreover, once this cause is recognized, it is easy to see that the soft recollisions of \reffig{f6-rost-soft-recollisions} come in multiple types. The principal trajectory type, which causes the main LES peak, has a soft recollision at one and a half periods after it is ionized: it swings past the ion once (at a velocity too high to be meaningfully deflected) and then has a soft recollision on the turning point of its next backwards swing, as shown by the green curve~of~\reffig{f6-rost-soft-recollisions}. It is possible, however, for trajectories to have a soft recollision later on in the cycle, like the purple, dashed curve, which spends two periods oscillating at relatively large distances from the origin (and, again, passing the ion too quickly to get too deflected), and having a soft recollision on its second backwards turning point. This then generates a series of trajectories starting with the LES peak and going to lower and lower momentum (which we will explore in more depth in section~\ref{sec:classical-soft-recollisions}), causing the series of peaks seen in \reffig{f6-kastner-original-figure-d}; these predicted peaks do appear in experimental spectra~\cite{ZES_paper, Wolter_PRX}, as discussed above, and they are visible in the experimental results shown in Figs.~\ref{f6-wolter-original-figure} and \ref{f6-wolter-prx-original-figure}. \begin{figure}[t] \centering \includegraphics[scale=1]{6-LES/Figures/figure6J.png} \caption[ Soft recollisions as originally presented by A. Kästner et al. ]{ Soft recollisions are trajectories where the electron returns to the vicinity of its parent ion with very small velocity, near a turning point. This can be after a single pass (green curve) or after two (purple curve) or more passes, forming a family of trajectories at different final momenta. Figure excerpted from \citer{Rost_JPhysB}. } \label{f6-rost-soft-recollisions} \end{figure} \copyrightfootnote{ \reffig{f6-rost-soft-recollisions} © IOP Publishing. Reproduced with permission. All rights reserved. } %% As per terms in IOP email Going some way beyond this analysis, the dynamical map of the full Coulomb-plus-laser trajectories, i.e. 
the mapping from the momentum at ionization to the momentum after one laser period and beyond, is a rather complicated quantity~\cite[cf.][Fig.~7]{Becker_rescattering}, but if examined in detail it can reveal some very interesting regularities, shown in~\reffig{f6-kelvich-dynamical-map}. \begin{figure}[htb] \centering \subfigure{\label{f6-kelvich-original-figure-a}} \subfigure{\label{f6-kelvich-original-figure-b}} \includegraphics[width=392pt]{6-LES/Figures/figure6K.png} \caption[ High-resolution dynamical map of full classical photoelectron trajectories, showing LES bunching as the result of a caustic in the dynamical map, as calculated by S.A. Kelvich et al. ]{ Dynamical map of electrons ionized from argon by a $\SI{1.5e14}{\watt/\centi\meter^2}$ field at $\SI{2}{\micro\meter}$, taking a regular grid in momentum space to a complicated shape~\cite{kelvich_coulomb-focusing_2015}. The decrease in width compared to the non-Coulomb case (black rectangle) showcases the known Coulomb focusing, but near the soft recollision at $p_x\approx\SI{0.61}{\au}$ the trajectories are sent to an opposite transverse momentum $p_y$, forming the caustic shown in \protect\subref{f6-kelvich-original-figure-b}, which then accumulates electrons to form an LES peak in the photoelectron spectrum. Figure excerpted from \citer{kelvich_coulomb-focusing_2015}. } \label{f6-kelvich-dynamical-map} \end{figure} \copyrightfootnote{ \reffig{f6-kelvich-dynamical-map} reprinted with permission from S.\ A. Kelvich et al., {%
\hypersetup{urlcolor=black}%
\href{http://dx.doi.org/10.1103/PhysRevA.93.033411}{\emph{Phys.\ Rev.\ A} \textbf{93}, 033411 (2016)}. %
©~2016 by the American Physical Society. } } %% As per APS T&Cs
It is quite clear, from the detailed mapping, that some regions exhibit chaotic dynamics~\cite{chaotic_dynamics} (which show up as the burned-out hole near the origin, where the electrons are scattered away), but there are also large regions of regularity. For instance, the main LES is clearly visible as the result of a caustic induced by the Coulomb potential, shown in \reffig{f6-kelvich-original-figure-b}, which then causes the electron bunching at those energies.
%%% CCSFA
These results, then, show that classical trajectories capture much of the dynamics of the LES, but in the end the ionization process is quantum mechanical, and it is worthwhile to look for methods that explicitly include the quantum mechanical aspects of the problem. This is, essentially, the Coulomb-corrected SFA (CCSFA) approach which we discussed in section~\ref{sec:emergence-of-complex-trajectories}, and which was developed in Refs.~\citealp{CCSFA_initial_short} and~\citealp{CCSFA_initial_full} for use in problems like sub-barrier Coulomb effects in tunnel ionization~\cite{TCSFA_sub_barrier}, which it can treat quite successfully within the conceptual constraints we detailed in section~\ref{sec:emergence-of-complex-trajectories}. When applied to the LES, the CCSFA method produces angular distributions somewhat different to the ones obtained by classical-trajectory CTMC methods~\cite{yan_TCSFA_caustics}, but it does provide a good match to the TDSE distributions, as shown in \reffig{f6-yan-original-figure}, which in principle describes the microscopic response better, prior to the washing out of interference fringes by focal averaging effects. As such, the CCSFA results form a useful bridge, connecting the full Schrödinger equation on one side with the classical understanding in terms of soft recollisions on the other.
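Since the soft recollision is the common kinematic thread running through all of these pictures, it is worth making the bare condition completely explicit. The short numerical sketch below (an illustrative stand-alone calculation, not code taken from any of the works cited above) uses the simple-man's model for a monochromatic field and demands that the laser-driven trajectory return to the origin exactly at one of its turning points, $z(t_r)=0$ with $v(t_r)=0$; solving for the ionization phase then gives drift energies of roughly $0.09\,U_p$, $0.03\,U_p$ and $0.02\,U_p$ for the first three members of the family, consistent with the ${\approx}\,U_p/10$ scaling of the main LES and with the series of progressively slower peaks seen in the experiments.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def soft_recollision_energy(k):
    """Drift energy (in units of Up) of the k-th soft recollision, simple-man's model.

    Field F(t) = F0*cos(w*t); the electron is born at rest at the origin at phase
    phi0 = w*t0, so that, in units of the quiver velocity A0 = F0/w,
        v(phi) = sin(phi0) - sin(phi),
        z(phi) = [(phi - phi0)*sin(phi0) + cos(phi) - cos(phi0)] / w .
    The velocity vanishes at the backward turning points phi_r = (2k+1)*pi - phi0;
    a soft recollision additionally requires z(phi_r) = 0, which fixes phi0.
    """
    def z_at_turning_point(phi0):
        phi_r = (2 * k + 1) * np.pi - phi0      # turning point, where v = 0
        return (phi_r - phi0) * np.sin(phi0) + np.cos(phi_r) - np.cos(phi0)

    phi0 = brentq(z_at_turning_point, 1e-4, np.pi / 2 - 1e-4)
    # drift momentum p = A0*sin(phi0), so the drift energy is p^2/2 = 2*Up*sin(phi0)^2
    return 2 * np.sin(phi0) ** 2

for k in (1, 2, 3):
    print("soft recollision %d: E = %.3f Up" % (k, soft_recollision_energy(k)))
# -> roughly 0.094 Up, 0.033 Up and 0.017 Up: the ~Up/10 LES series
\end{verbatim}
As the works reviewed above make clear, this laser-only kinematics fixes where the electrons bunch, but it is the Coulomb interaction at these slow returns that turns the bunching into actual peaks in the spectrum.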
\begin{figure}[htb] \centering \subfigure{\label{f6-yan-original-figure-a}} \subfigure{\label{f6-yan-original-figure-b}} \subfigure{\label{f6-yan-original-figure-c}} \subfigure{\label{f6-yan-original-figure-d}} \subfigure{\label{f6-yan-original-figure-e}} \includegraphics[width=\textwidth]{6-LES/Figures/figure6L.png} \caption[ CCSFA analysis of the Low-Energy Structures, as performed by T.-M. Yan~et~al. ]{ Photoelectron momentum distributions from argon ionized by a $\SI{2}{\micro\meter}$ field at $\SI{e12}{\watt/\centi\meter^2}$~\cite{ yan_TCSFA_caustics}, via standard SFA \protect\subref{f6-yan-original-figure-a}, a full TDSE simulation~\protect\subref{f6-yan-original-figure-c}, and a CCSFA~(here named TC-SFA) calculation~\protect\subref{f6-yan-original-figure-b}. The CCSFA result provides a good match to the full TDSE, while still providing an intuitive trajectory picture. Specifically, the `cut' at the LES range in \protect\subref{f6-yan-original-figure-b} can be directly associated with a caustic, coming from trajectory type III as per \protect\subref{f6-yan-original-figure-e}, where the electron approaches the ion at low speed, changing the sign of its transverse momentum. Figure excerpted from \citer{yan_TCSFA_caustics}. } \label{f6-yan-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-yan-original-figure} reprinted with permission from T.-M. Yan et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevLett.105.253002}{% \emph{Phys.\ Rev.\ Lett.} \textbf{105}, 253002 (2010)}. % ©~2010 by the American Physical Society. Labels (d,\,e) shifted for clarity. } } %% As per APS T&Cs %%% ISFA In addition to this, it is also possible to do an even deeper quantum approach, by augmenting the normal SFA with a single formal quantum scattering on the ionic potential. This approach, known as the Improved SFA (ISFA), builds a Born series in the Coulomb potential in much the same way that we performed a perturbative expansion with respect to the electron correlation interaction potential $V_{ee}^m$ in chapter~\ref{chap:R-matrix}. It was originally developed to deal with a large plateau of high-energy electrons (between $2U_p$ and $10U_p$) in above-threshold ionization~\cite{goreslavskii_ISFA-standard_1998, milosevic_ISFA-standard_2007}, and it has been very successful there, but it can also be applied to forward scattering at low velocities. In the LES context, then, the ISFA method can describe the LES peaks~\cite{ Becker_rescattering, Becker_Milosevic_quantum_orbits, Milosevic_scattering_large, Milosevic_reexamination, becker_milosevic_unified_2016}, which appear as a result of forward scattering with the Coulomb core at low energies. This had originally been neglected, because the direct pathway (without rescattering) was deemed dominant at low energies, but the large Coulomb scattering cross section in the forward direction makes up for the difference. Moreover, the forward scattering within ISFA can also be used to explain the V-shaped VLES~\cite{Becker_Milosevic_quantum_orbits, becker_ATI-low-energy_2015}, where it shows as the confluence of the locus of several types of forward-scattered quantum orbits, as shown in \reffig{f6-becker-circles-original-figure}, with the predicted spectrum providing a good match to the experimental observations of \reffig{f6-wolter-original-figure}. 
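Written schematically, and in a generic notation rather than that of the preceding chapters (with gauge and overall phase conventions left unspecified), the ISFA construction just described amounts to supplementing the direct SFA amplitude with the first term of a Born series in the ionic potential $\hat{V}_\mathrm{ion}$:
\[
M_{\mathbf{p}} \;\approx\;
 -i\!\int_{-\infty}^{\infty}\!\!\mathrm{d}t\,
   \big\langle \mathbf{p}^{\mathrm{V}}(t)\big|\hat{V}_{\mathrm{L}}(t)\big|g(t)\big\rangle
 \;-\;
 \int_{-\infty}^{\infty}\!\!\mathrm{d}t\!\int_{-\infty}^{t}\!\!\mathrm{d}t'\,
   \big\langle \mathbf{p}^{\mathrm{V}}(t)\big|
     \hat{V}_{\mathrm{ion}}\,
     \hat{U}_{\mathrm{V}}(t,t')\,
     \hat{V}_{\mathrm{L}}(t')
   \big|g(t')\big\rangle,
\]
where $|g(t)\rangle$ is the field-free initial state, $\hat{V}_{\mathrm{L}}(t)$ is the laser interaction, $|\mathbf{p}^{\mathrm{V}}(t)\rangle$ is the Volkov state with asymptotic momentum $\mathbf{p}$, and $\hat{U}_{\mathrm{V}}(t,t')$ is the laser-only (Volkov) propagator: the electron is ionized at $t'$, driven by the laser alone from $t'$ to $t$, and scatters once off the ionic potential at $t$, a process which at low final energies is dominated by the forward direction.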
\newlength{\figuresixMheight} \setlength{\figuresixMheight}{6cm} \begin{figure}[htb] \centering \subfigure{\label{f6-becker-circles-original-figure-a}} \subfigure{\label{f6-becker-circles-original-figure-b}} \begin{tabular}{ccc} \includegraphics[height=\figuresixMheight]{6-LES/Figures/figure6Ma.jpg} & & \includegraphics[height=\figuresixMheight]{6-LES/Figures/figure6Mb.jpg} \end{tabular} \caption[ Emergence of the VLES V-shape from forward-scattered quantum orbits within the ISFA formalism, as presented by W. Becker et al. ]{ Emergence of the VLES V-shape from forward-scattered quantum orbits within the ISFA formalism~\cite{becker_ATI-low-energy_2015}. Here multiple quantum orbits (with different starting times, indexed by $\beta$, $\mu$ and $m$) form contributions with different loci, which are essentially universal up to the field momentum scale $A_0=F/\omega$. The intersection of the circular loci at the origin then gives rise to the V shape, as shown in the predicted spectrum \protect\subref{f6-becker-circles-original-figure-b} for argon in a $\SI{3.1}{\micro\meter}$ field at $\SI{9e13}{\watt/\centi\meter^2}$ as in \citer{ZES_paper}. Figure excerpted from~\citer{ becker_ATI-low-energy_2015}. } \label{f6-becker-circles-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-becker-circles-original-figure} © IOP Publishing. Reproduced with permission. All rights reserved. } %% As per terms in IOP email %%% sundries Finally, in complement to the above methods there is also a smattering of alternative views on the generation of the LES and VLES, most of which are variations on the augmented SFA idea~\cite{Titi_Drake_S_Matrix, Milosevic_LFA, murnane_TCSFA_tunnel_exit}, but generally they add mostly supplementary insights to the ones discussed above. %%% Pure SMM On the other hand, the ISFA analysis does also point to an awkward feature: since it is based only on pure laser-driven trajectories, most of its features can be boiled down to just classical trajectories that completely ignore the Coulomb field. Because of this, it is in fact possible to model the LES and the VLES V shape using only the so-called simple-man's model, augmented with only a single act of rescattering on a point nucleus~\cite{off_axis_LES}, and this sends the electrons on curves essentially identical to those shown in \reffig{f6-becker-circles-original-figure-a}, with rather similar predictions for experimental spectra. %%% Summing up Ultimately, though, the rough consensus emanating from these approaches is that the LES and VLES are essentially already present at the level of the simple-man's model -- the dynamics of a tunnel-ionized electron driven only by the laser field -- but that they do require the action of the Coulomb field of the ion to appear in a significant way~\cite{Becker_rescattering}. However, the mechanism of this action -- trajectory bunching in CTMC analyses, forward-scattering amplitudes within ISFA, trajectory interference at caustics inside CCSFA -- is still susceptible to multiple interpretations. %%% Scaling As a final note on the theoretical understanding of the LES, it is important to mention one of the main tools used to track it, identify it, and diagnose its origin: the structure's scaling with respect to the laser's wavelength and intensity and the ionization potential of the target species. 
This is usually measured in terms of the high-energy edge of the feature $E_\mathsf{H}$ (as defined in \reffig{f6-blaga-original-figure}); scaling measurements were reported in the original detection by Blaga et al.~\cite{blaga_original_LES}, as shown in \reffig{f6-blaga-scaling-original-figure}, as well as in later calculations~\cite{CTMC1, lemell_classicalquantum_2013, murnane_TCSFA_tunnel_exit, LES_Scaling}. \begin{figure}[thb] \centering \includegraphics[scale=1]{6-LES/Figures/figure6N.png} \caption[ Scaling of the upper edge energy of the LES, as measured by C.I. Blaga~et~al. ]{ Scaling of the upper edge energy $E_\mathsf{H}$ of the LES structure as measured by Blaga et al.~\cite{blaga_original_LES} for multiple target species and wavelengths, over varying intensities. The scaling is essentially universal, varying with the Keldysh parameter as $E_\mathsf{H}\propto\gamma^{-2}$. Figure excerpted from \citer{blaga_original_LES}. } \label{f6-blaga-scaling-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-blaga-scaling-original-figure} reprinted by permission from Macmillan Publishers Ltd: %
{\hypersetup{urlcolor=black}%
\href{http://www.nature.com/nphys}{%
\emph{Nature Phys.} \textbf{5}, p. 335 © 2009}. }} %% As per NPG T&Cs
Generally speaking, the LES edge is quite reliably found to scale with the Keldysh parameter as $E_\mathsf{H}\propto\gamma^{-2}$, though this can mostly be refined further to $E_\mathsf{H}\propto U_p$, with a proportionality constant close to $1/10$; for a fixed target species the two statements coincide, since $\gamma^{-2}=2U_p/I_p$. This scaling essentially arises from the classical dynamics of the soft recollision within the simple man's model, and we will return to it in section \ref{sec:classical-soft-recollisions}. \subsection{Theoretical explanations for near-zero energy structures} \label{sec:NZES-theory} As we have seen, the theoretical aspects of the LES and the VLES are relatively well understood, with a strong consensus on the fundamental roles of soft recollisions and the Coulomb field in their generation. On the other hand, the NZES is rather more recent and there is less work on the mechanisms behind it. Below, in sections~\ref{sec:ARM-soft-recollisions} and \ref{sec:classical-soft-recollisions}, we will propose a mechanism for the NZES based on an extension of the soft-recollision arguments above. At present, however, the only explanation that has been advanced relates to the role of the constant electric extraction field of the reaction microscope acting on highly excited states left over from the laser pulse~\cite{ZES_paper, Rost_latest}. It has been known for some time that if an atom is ionized by a strong laser pulse in the tunnelling regime, some fraction of the electron population taken out of the ground state is left in high-lying Rydberg states~\cite{nubbemeyer_rydberg-creation_2008, landsman_Rydberg-creation_2015, larimian_rydberg-detection_conference_2015}, a process known as frustrated tunnelling, though relatively little is known about these states and their energy, angular momentum, and coherence characteristics. In the experiments where the NZES was observed~\cite{dura_ionization_2013, ZES_paper}, these leftover Rydberg states were also subject to the action of the macroscopic electric and magnetic fields, on the order of ${\sim}\SI{1}{\volt/\centi\meter}$ and ${\sim}\SI{e-4}{\tesla}$, used by the reaction microscope to guide the electrons to the detector~\cite{moshammer_ReMi_2003}.
In principle, then, the electric extraction field, weak as it is on the atomic scale (i.e.~$\SI{1}{\volt/\centi\meter}\approx \SI{2e-10}{\au}$), can still liberate these electrons either by over-the-barrier ionization, for whatever population is left above the shallow barrier caused by the extraction field, or through tunnel ionization for states just below that. Indeed, there is some evidence that some electrons can be liberated in this way, from TDSE and CTMC simulations performed in the presence of such an extraction field~\cite{ZES_paper, Rost_latest}, though these are challenging due to the long length scales involved, and -- in the case of TDSE simulations~-- only a limited number of Rydberg states can be taken into account. Nevertheless, the structure does appear in CTMC simulations with the extraction field, as shown in \reffig{f6-wolter-nzes-original-figure}, and it shows some agreement with experiment. Moreover, CTMC calculations agree with experiment on the width of the VLES V shape as the length of the laser pulse changes, with the scaling shown in \reffig{f6-wolter-scaling-original-figure}. In addition to this, the extraction-field mechanism requires that the characteristics of the NZES peak change as the strength of the extraction field changes. This is a hard prediction to explore experimentally, because the extraction field is also a crucial variable both in how many electrons are detected as well as in fixing the final resolution of the detector (i.e. the level of zoom into the photoelectron momentum distribution), so variations there have a strong effect on the rest of the experiment, but there are indeed some hints of variation in the width of the structure in experiment, as shown in \reffig{f6-diesen-scaling-original-figure}. Moreover, there is also some agreement between the observed experimental variations and a simplified semiclassical theory which considers classical electrons, uniformly distributed in Rydberg states just below the ionization threshold, in a single rescaled Coulomb potential plus the extraction field~\cite{Rost_latest}. On the other hand, the extraction-field mechanism would also require the yield of the structure -- the number of electrons in the NZES -- to change with the extraction field, since if the pulse parameters don't vary then the Rydberg population will not change appreciably, and a stronger extraction field will lower the shallow barrier and thereby liberate a larger population of electrons. (Even further, the Rydberg population is expected to be roughly uniformly distributed over energy just below threshold, so the NZES yield should scale roughly linearly with the extraction field over its $\sim$tenfold variation in \citer{Rost_latest}.) However, this is rather difficult to test experimentally, since changing the extraction field has a strong effect on the entire detection, and there is at present no evidence for this dependence, which represents the clearest way forward in validating this mechanism as a contributor to the NZES. \section[Soft recollisions in Analytical R-Matrix theory]{Soft recollisions in Analytical $R$-Matrix theory} \label{sec:ARM-soft-recollisions} Having seen the current understanding of the LES in the literature, we now turn to what our ARM theory of photoionization can tell us about soft recollisions and their role in photoelectron spectra. \begin{figure}[b!] 
\centering \includegraphics[width=\textwidth]{6-LES/Figures/figure6Q.png} \caption[ Photoelectron momentum maps from measurements and CTMC simulations, showing a narrowing of the VLES V shape for longer pulses, together with a NZES-like structure, as observed by B. Wolter et al. ]{ Ionization of argon in a $\SI{3.1}{\micro\meter}$ pulse at $\SI{9e13}{\watt/\centi\meter^2}$ at varying pulse lengths~\cite{ZES_paper}, providing a zoom to the low-energy region of \reffig{f6-wolter-original-figure-b}, and its comparison with an equivalent CTMC simulation with the extraction field accounted for. The NZES structure also appears in the CTMC simulation, which agrees with experiment on the narrowing of the VLES V shape. Figure excerpted from \citer{ZES_paper}. } \label{f6-wolter-nzes-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-wolter-nzes-original-figure} reprinted with permission from B. Wolter et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevA.90.063424}{% \emph{Phys.\ Rev.\ A} \textbf{90}, 063424 (2016)}. % ©~2016 by the American Physical Society. } } %% As per APS T&Cs As we have seen, the ARM formalism works with a different set of trajectories to the ones mentioned above, since it does not use the full Coulomb-laser trajectory used by full classical CTMC theories and by the semiclassical CCSFA, nor does it use the real-valued simple-man's trajectories with only the laser driving as in \citer{ off_axis_LES}. In those real-time theories, the soft recollision shows up topologically as a topological change in the trajectory, as shown in \reffig{f5-classical-tca-on-axis}, and the number of extrema of $\rcl(t)^2$ along the path: from a single inwards turning point, to an outwards turning point flanked by two closest-approach times, as shown in \reffig{f5-sample-trajectories-a}. The ARM trajectories, on the other hand, are different, because the trajectory path through the complex time plane is no longer constrained to lie on the real axis, which means that the closest-approach solutions are not lost -- they simply go off into the complex plane, where they can still be reached by the integration path over the complex time plane if there is a strong enough reason (such as avoiding Coulomb branch cuts) to do so. The last time we considered the soft recollisions in this context, then, was as a complex interaction between pairs of branch cuts, depicted in \reffig{f5-branch-cut-topology-change}, at which two pairs meet and recombine, changing the branch cut topology that the integration path needs to navigate. \begin{figure}[!t] \centering \includegraphics[scale=1]{6-LES/Figures/figure6P.png} \caption[ Variation of the width of the VLES V shape with respect to the pulse length, as observed and CTMC-simulated by B. Wolter et al. ]{ Variation of the width of the VLES V shape in \reffig{f6-wolter-nzes-original-figure} with respect to variations in the pulse length, showing good agreement between CTMC simulations and experiment. Figure excerpted from \citer{ZES_paper}. } \label{f6-wolter-scaling-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-wolter-scaling-original-figure} reprinted with permission from B. Wolter et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevA.90.063424}{% \emph{Phys.\ Rev.\ A} \textbf{90}, 063424 (2016)}. % ©~2016 by the American Physical Society. 
} } %% As per APS T&Cs \begin{figure}[b] \centering \includegraphics[scale=1.15]{6-LES/Figures/figure6O.png} \caption[ Measured momentum width of the NZES, compared with predictions from extraction-field theory, as observed by E. Diesen et al. ]{ Momentum width $\Pi^*$ of the NZES in ionization of $\mathrm{N_2}$ in a $\SI{780}{\nano\meter}$ pulse, subsequently ionized by extraction fields of different strengths, showing some variation in the width of the feature with the extraction field strength. The red curve shows the predicted width of extraction from a rescaled Coulomb potential. Figure excerpted from~\citer{Rost_latest}. } \label{f6-diesen-scaling-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-diesen-scaling-original-figure} reprinted with permission from E. Diesen et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevLett.116.143006}{% \emph{Phys.\ Rev.\ Lett.} \textbf{116}, 143006 (2016)}. % ©~2016 by the American Physical Society. } } %% As per APS T&Cs Moreover, although we showed in \reffig{f5-branch-cut-topology-change} a single example at a return time of $\omega t\approx 2\pi$, this behaviour reoccurs every half period thereafter, as is clear from the quantum $\tca$ surface we saw in \reffig{f5-quantum-tca-surface}. Here, for clarity, we revisit \reffig{f5-branch-cut-topology-change}, showing in \reffig{f6-branch-topology-revisited} the change in the branch cut topology of $\sqrt{\rl(t)^2}$ for the first soft recollision, at $\omega t\approx 2\pi$ and a very low momentum, and the second one at $\omega t\approx 3\pi$ and a slightly higher momentum. \input{6-LES/Figures/Figure6-2AParameters.tex} \begin{figure}[htb] \centering \begin{tabular}{cc} $\qquad p_z=\figurefiveKppl{}F/\omega$ & $p_z=\figurefiveKpph{}F/\omega$ \hspace{15mm} \\ \hline \vspace{-2mm} \\ \subfigure{ \includegraphics[scale=\figurefiveKscale]{6-LES/Figures/figure6-2Aa.pdf} \label{f6-branch-cut-topology-open-one} } & \hspace{-6mm} \subfigure{ \includegraphics[scale=\figurefiveKscale]{6-LES/Figures/figure6-2Ab.pdf} \label{f6-branch-cut-topology-closed-one} } \\[10mm] $\qquad p_z=\figuresixtwoAppltwo{}F/\omega$ & $p_z=\figuresixtwoApphtwo{}F/\omega$ \hspace{15mm} \\ \hline \vspace{-2mm} \\ \subfigure{ \includegraphics[scale=\figurefiveKscale]{6-LES/Figures/figure6-2Ac.pdf} \label{f6-branch-cut-topology-open-two} } & \hspace{-6mm} \subfigure{ \includegraphics[scale=\figurefiveKscale]{6-LES/Figures/figure6-2Ad.pdf} \label{f6-branch-cut-topology-closed-two} } \end{tabular} \caption[ Change in the branch cut topology for the first two soft recollisions ]{ Change in the branch cut topology, as in \reffig{f5-branch-cut-topology-change}, for the first two soft recollisions. Note that the order of the transition (open to closed, and vice versa) is reversed with respect to increasing $p_z$. } \label{f6-branch-topology-revisited} \end{figure} As we saw in chapter~\ref{chap:quantum-orbits}, these soft recollisions are the hardest point for the branch cut navigation, since they provide the closest gates with a very sensitive dependence on the problem's parameters. This is emphasized by the very small momentum changes between the left and right columns of \reffig{f6-branch-topology-revisited}, which mark the change in topology, and therefore the switch in the choice of closest-approach times the integration path needs to go through. 
In addition to this, however, the soft recollisions also have a strong effect on the ionization amplitude, because these drastic changes in the integrand occur precisely when it is at its largest. Thus, choosing the wrong contour in this region accounts for the largest contributions to the integrand, with a correspondingly large effect on the integral, so correctly navigating the cuts here is even more crucial. More surprisingly, however, once the contour is forced to pass through the `gate' $\tca$s, for $p_z$ just on the `closed-gate' topology side of $\pzsr$, the contributions of those saddles have the effect of suppressing the ionization amplitude there. To see how this comes about, consider the integral $\int U(\rcl(t))\d t$ for the configuration of \reffig{f6-branch-cut-topology-open-one}. Here $\sqrt{\rcl(t)^2}$ has a minimum at the central saddle point, $\tcasup{\,(2)}$, and this translates into a maximum of $1/\sqrt{\rcl(t)^2}$ which dominates the integral. At this point, the approach distance \begin{equation} r_\ast=\sqrt{\rcl(\tcasup{\,(2)})^2} \end{equation} is dominated by a modest and positive imaginary part. This means that \begin{equation} U_\ast=-1/r_\ast \end{equation} is large and (positive) imaginary, and therefore the correction factor $e^{-i\int U\d t}$ has a large amplitude. On the other hand, in the configuration of \reffig{f6-branch-cut-topology-closed-one} the integral is dominated by the `gate' closest-approach times, for which \begin{equation} r_\ast'=\sqrt{\rcl(\tcasup{\,(1)})^2}\approx\sqrt{\rcl(\tcasup{\,(1)})^2} \end{equation} is mostly real and much smaller than $r_\ast$. The corresponding potential $U_\ast'=-1/r_\ast'$ is then large, real and negative, and $-iU_\ast'$ is along $+i$. However, here the line element $\d t$ must slope upwards with a positive imaginary part to emphasize the contribution of the saddle point, and this then gives $-i\int U(\rcl(t))\d t$ a large and negative real part. This, in turn, suppresses the amplitude of the correction factor $e^{-i\int U\d t}$. This effect is then visible in the photoelectron spectrum as a large peak just below the soft recollision, followed by a deep, narrow dip, which we show in \reffig{f6-po-pp-spectrum}. In an experimental setting, the dip will almost certainly get washed out by nearby contributions unless specific steps are taken to prevent this, but the peak will remain. (In addition, this effect mirrors the redistribution of population seen in classical-trajectory-based approaches, where the peaks caused by dynamical focusing represent trajectories taken from other asymptotic momenta, whose amplitude is therefore reduced.) \input{6-LES/Figures/Figure6-2BParameters.tex} \begin{figure}[htb] \centering \begin{tabular}{c} \includegraphics[width=0.7\columnwidth]{6-LES/Figures/figure6-2B.pdf} \end{tabular} \caption[ ARM photoelectron spectra showing Near-Zero Energy Structures ]{ Emergence of the Near-Zero Energy Structures within the Analytical $R$-Matrix theory: incoherent addition of the sub-cycle ionization yields for two adjacent half-cycles, $\tfrac12\big(\left|a(p_x,0,p_z)\right|^2+\left|a(p_x,0,-p_z)\right|^2\big)$, as predicted by the ARM amplitude. 
We ignore the shape factor $R(\vbp)$, and consider for the ionization of unaligned molecular nitrogen by a $\SI{\figuresixtwoBwavelength}{\micro\meter}$ field at $\SI{e14}{\watt/\centi\meter^2}$, with $\gamma=\figuresixtwoBgamma$, as per the parameters of \citer{pullen_kinematically_2014} whose experimental data is shown in Figs.~\ref{f6-pullen-original-full-spectrum} and \ref{f6-pullen-original-transverse-spectrum}. The Coulomb-correction has been integrated over 2.75 laser periods. } \label{f6-po-pp-spectrum} \end{figure} Here the peak in \reffig{f6-po-pp-spectrum} should be compared with the experimental transverse spectra we showed earlier in \reffig{f6-pullen-original-transverse-spectrum}, which also displayed a sharp spike rising out of a gaussian background for very small longitudinal momentum. Here the spike is not as sharp (it is shown in linear scale instead of logarithmic scale) but given the approximations in ARM theory it is only expected to produce qualitative agreement, which is indeed very striking. For this specific case, the momentum scales involved are really very low: here there are two closely spaced transitions at $\pzsr{}= \SI{ \figuresixtwoBfirstpztransition }{\au}$ and $\pzsr{}=\SI{\figuresixtwoBthirdpztransition}{\au}$ (with some interplay between them showing up), and this is roughly at the state of the art of momentum resolution, $\Delta p\sim\SI{0.02}{\au}$, claimed by recent experiments~\cite{ pullen_kinematically_2014, Wolter_PRX}; in terms of energy, it corresponds to a photoelectron energy of about $\SI{8}{\milli\electronvolt}$. Thus, the ARM peak has a finite width, but this is too small to be resolved at present and structures at this range would show up simply as consistent with zero. The peak in \reffig{f6-po-pp-spectrum} is clearly similar to the NZES structure, as experimentally observed, so it calls for further exploration. Since it is directly associated with the soft recollisions of the previous chapter, we have a clear indication of the possible mechanism for the structure, and we will examine the connection further in the next section. Finally, it is worth noting that a more recent CCSFA analysis of ATI~\cite{ keil_branch-cuts_2016}, using a complex tunnel exit as in ARM theory (and thereby restricted to only a laser-driven trajectory), and relying on the branch-cut navigation algorithm we developed in chapter~\ref{chap:LES-NZES}, confirms our findings of LES peaks in this energy region. \section{Classical soft-recollision trajectories} \label{sec:classical-soft-recollisions} We have seen, then, that our ARM theory of photoionization predicts a sharp peak at very low electron energies, and we know the class of trajectories -- soft recollisions -- that underpin it. Moreover, we have been forced by our integration-path selection algorithm to grapple with soft recollisions every half cycle, with the complex topological changes depicted in \reffig{f6-branch-topology-revisited} occurring at $\omega t\approx 2\pi,3\pi,4\pi,\ldots$, giving a distinct series of trajectories and therefore a distinct series of LES structures, some of which we have already covered. However, these trajectories come in two distinct flavours, as depicted in \reffig{f6-trajectories-at-transitions}: one class (shown in dashed red) with the soft recollision on a `backwards' turning point, at $\omega t\approx 3\pi, 5\pi, 7\pi, \ldots$, and a second class with the soft recollision on the `forwards' turning points, at $\omega t\approx 2\pi, 4\pi, 6\pi, \ldots$, shown in solid blue. 
\input{6-LES/Figures/Figure6-2DParameters.tex} \begin{figure}[ht] \centering \includegraphics[width=0.6\columnwidth]{6-LES/Figures/figure6-2D.pdf} \caption[ Trajectories with soft recollisions, both on the backwards swing and the forwards turning point ]{ Trajectories with soft recollisions after tunnel ionization, for a Keldysh parameter of $\gamma=\figuresixtwoDgamma$.} \label{f6-trajectories-at-transitions} \end{figure} The first class we have already met, in \reffig{f6-rost-soft-recollisions} and first described in Refs.~\citealp{Rost_PRL} and \citealp{Rost_JPhysB}, but the second class has received very little attention in the literature, essentially because most analyses of the soft recollisions as a marker for the LES peaks have used models where the electron trajectory starts at the origin~\cite{Becker_rescattering}. At first blush, ignoring the tunnel exit $\zexit\sim I_p/F$ is relatively safe, since at the wavelengths of interest the quiver amplitude $\zquiv=F/\omega^2$ is much larger, as shown in \reffig{f6-trajectories-at-transitions}. However, for the second class of trajectories, if one ignores the tunnel exit then the whole series collapses into a single trajectory at zero momentum, thereby washing out all the dynamics. However, any reasonable theory of optical tunnelling should place the electrons at the tunnel exit, and doing this unfolds the second series into distinct trajectories. Moreover, it is quite possible to describe the trajectories shown in \reffig{f6-rost-soft-recollisions} within the quasi-classical formalism that simply looks for real-valued trajectories on real times, and this will help us better understand their characteristics. We retake, then the classical trajectories \begin{equation} \rcl(t) = \Re\left(\int_{\ts}^{t} \left[ \vbp+\vba(\tau) \right] \: \d\tau\right) \backtag{e5-classical-trajectory} \end{equation} from section~\ref{sec:classical-tcas}, as our classical trajectories. Within these trajectories, we define the soft recollisions as those real times $\tr$ for which both the velocity and the real part of the trajectory vanish, so that \begin{subequations} \label{e6-symbolic-system} \begin{empheq}[left=\empheqlbrace]{align} \zcl(\tr)&=\Re\left[ \int_\ts^{\tr} \left(p_z+A(\tau)\right)\d\tau\right]=0 \\ v_z(\tr)&=p_z+A(\tr)=0. \end{empheq} \end{subequations} Putting in explicit values for the vector potential and its integral, this can then be re-expressed as \begin{subequations} \label{e6-spelled-out-system} \begin{empheq}[left=\empheqlbrace]{align} \zexit+p_z(\tr-\tn)+\frac{F}{\omega^2}\left(\cos(\omega\tr)-\cos(\omega\tn)\right) &=0 \\ p_z-\frac F\omega \sin(\omega\tr) &=0, \end{empheq} \end{subequations} where \begin{align} \zexit &= \Re\left[ \int_\ts^\tn \left(p+A(\tau)\right)\d\tau\right] %\nonumber\\ & = \frac{F}{\omega^2}\cos(\omega\tn) \left(1- \cosh(\omega\tauT)\right) \end{align} models the tunnel exit, and reduces to the standard $\zexit\approx -I_p/F$ in the tunnelling limit where $\gamma\ll 1$. This system of equations, \eqref{e6-spelled-out-system}, can be solved numerically rather easily, but it is more instructive to consider its linearized version with respect to $p_z$, since all the soft recollisions happen at small energies with respect to $U_p$. 
To do this, we express the starting time as \begin{align} \tn+i\tauT=\ts & = \frac1\omega \arcsin\left(\frac{\omega}{F}(p_z+i\kappa)\right) %\nonumber\\& \approx \frac{ p_z}{F}\frac{1}{\sqrt{1+\gamma^2}} + \frac{i}{\omega}\arcsinh\left(\gamma\right), \end{align} where $\gamma=\omega\kappa/F$ is the Keldysh parameter as usual, so that $\zexit \approx - \frac{F}{\omega^2}\left(\sqrt{1+\gamma^2}-1\right)$. The linearized system now reads \begin{subequations} \begin{empheq}[left=\empheqlbrace]{align} p_z\tr+\frac{F}{\omega^2}\left(\cos(\omega\tr)-\sqrt{1+\gamma^2}\right) &=0 \\ p_z-\frac F\omega \sin(\omega\tr) &=0, \label{e6-pz-to-tr-eqn} \end{empheq} \end{subequations} and to obtain a solution we must linearize $\tr$ with respect to $p_z$. It is clear from \reffig{f6-trajectories-at-transitions}, and from the numerical solutions of \eqref{e6-spelled-out-system}, that the solutions occur close to each $(n+1)\pi$ for $n=1,2,3,\ldots$, so it is justified to write \begin{equation} \omega \tr= (n+1)\pi+\omega \,\delta\tr, \end{equation} where we expect $\delta\tr$ to be small. Putting this in we obtain from \eqref{e6-pz-to-tr-eqn} that $\delta\tr\approx(-1)^{n+1}p_z/F$ and $\cos(\omega\tr) \approx(-1)^{n+1}$, and this gives in turn the drift momentum of the successive soft-recolliding trajectories as \begin{equation} \pzsr \approx \frac F\omega \frac{\sqrt{1+\gamma^2}+(-1)^n}{(n+1)\pi}. \label{e6-linearized-momenta} \end{equation} These are shown in \reffig{f6-soft-recollision-scaling} , and they are generally a good approximation to the exact solutions of \eqref{e6-spelled-out-system}, shown dotted. \begin{figure}[htb] \centering \includegraphics[scale=1]{6-LES/Figures/figure6-2E.pdf} \caption[ Scaling of the soft-recollision momentum as a function of the Keldysh parameter ]{ Scaling of the normalized soft-recollision momentum $\omega p_z/F$ as a function of the Keldysh parameter $\gamma$ for the first six soft-recollision trajectories. Dots show the exact solutions of \eqref{e6-symbolic-system} and lines show the linearized result \eqref{e6-linearized-momenta}. } \label{f6-soft-recollision-scaling} \end{figure} Here it is quite clear that the trajectories with even $n$ -- our new series, shown solid blue in \reffig{f6-trajectories-at-transitions} -- will scale very differently than the previously known series, which has odd $n$ and was shown in dotted red in \reffig{f6-trajectories-at-transitions}. This scaling, in fact, holds the key to the physical differences between the two classes of trajectories, so it is worth spending some time teasing out its roots and implications. \begin{itemize} \item Starting with the even-$n$ trajectories, we can further simplify the momentum to \begin{equation} \pzsr \approx \frac F\omega \frac{\sqrt{1+\gamma^2}+1}{(n+1)\pi}, \end{equation} which for low $\gamma$ simplifies to \begin{equation} \pzsr \approx \frac F\omega \frac{2+\frac12 \gamma^2}{(n+1)\pi} =\frac F\omega \frac{2}{(n+1)\pi} \approx 2\,\frac{F}{\omega^2} \frac{\omega}{(n+1)\pi}. \label{e6-even-n-scaling} \end{equation} This last form holds most of the physical content for the scaling, because it separates into twice the quiver radius, $2\zquiv=2F/\omega^2$, split over the time between the ionization and the recollision, $(n+1)\pi/\omega$, and this is clearly the distance that the oscillation centroid of the even-$n$ trajectories needs to cover in \reffig{f6-trajectories-at-transitions} for the backwards turning points to pass through the origin. 
Moreover, this scaling now gives us a direct line on the behaviour of the LES, because it can be directly translated into an estimate of the signature kinetic energy of the structure, \begin{equation} \frac12 \left(\pzsr\right)^2 \approx \frac{F^2}{\omega^2}\frac{2}{(n+1)^2\pi^2} =\frac{8}{(n+1)^2\pi^2} U_p, \end{equation} where the $\pzsr\sim F/\omega$ momentum scaling translates directly into an energy that scales directly with the ponderomotive potential $U_p$. Further, the numerical constant evaluates to roughly $\frac{1}{10}$. The linearity with respect to $U_p$, together with the numerical constant, matches the known scaling of the LES, as discussed at the end of section~\ref{sec:LES-theory}. \item Turning to the odd-$n$ trajectories, the $\cos(\omega\tr)\approx (-1)^n$ factor is now odd -- we are on a forwards swing of the orbit -- and this means that the momentum will behave differently, since now \begin{equation} \pzsr \approx \frac F\omega \frac{\sqrt{1+\gamma^2}-1}{(n+1)\pi} \approx \frac F\omega \frac{\gamma^2}{2(n+1)\pi} \approx \frac{\gamma\kappa}{2(n+1)\pi} \approx \frac{\omega}{F} \frac{\kappa^2/2}{(n+1)\pi}. \label{e6-pzsr-odd-n-full} \end{equation} This scaling is somewhat more complicated, and it is in fact one of the central results of this work; simple as it is, it seems to have avoided description so far. To begin with, the form $\pzsr\sim \gamma \kappa$, obtained by trading in one factor of $\gamma=\kappa\omega/F$, implies that the high-energy edge of this NZES structure will scale as $\gamma^2$ for a fixed target species, and this marks a straight departure from the LES scaling, which goes~as~$\gamma^{-2}$. Finally, the last form of the soft-recollision momentum $\pzsr$ in \eqref{e6-pzsr-odd-n-full} tells the rest of the tale, since it can be cleanly reorganized as \begin{equation} \pzsr \approx \frac{\zexit}{\Delta t} = \frac{I_p/F}{(n+1)\pi/\omega} \propto \frac{I_p\omega}{F}, \label{e6-pzsr-odd-n-summary} \end{equation} giving the distance to be covered -- the tunnel width, $\zexit\approx I_p/F$, over the time $\Delta t=(n+1)\pi/\omega$ between ionization and recollision. \end{itemize} This last form also marks in a clean way the real difference in scalings between the usual even-$n$ trajectories and our odd-$n$ ones, because when translated into energy it reads \begin{equation} \frac12\left(\pzsr\right)^2 \sim \frac{I_p^2}{U_p} \sim I_p\gamma^2, \label{e6-odd-n-energy-scaling} \end{equation} that is, for a fixed target species the high-energy edge of this structure should be expected to scale \textit{inversely} with respect to the ponderomotive potential $U_p$, which is completely opposite to the usual behaviour of the LES series. In addition to this, the energy scaling in \eqref{e6-odd-n-energy-scaling} is also immensely valuable in that it directly suggests the experimental avenues that will help resolve the contribution of our odd-$n$ trajectory series to the observed NZES experimental feature. As we argued earlier, the NZES has so far only been observed to be at energies consistent with zero to the experimental accuracy, and any tools that can help lift this feature to higher energies where the detectors -- already at their state-of-the-art resolution -- can resolve them better will be a valuable avenue for exploration. 
In particular, for the tunnelling mechanism to hold well we require that the Keldysh parameter $\gamma$ be small, which therefore means that if we want the energy in \eqref{e6-odd-n-energy-scaling} to be large this can only be done by going to harder targets with a higher ionization potential; this would ideally be helium, or if possible the helium ion $\mathrm{He^+}$, either as an ionic beam or prepared locally via sequential ionization or a separate pre-ionizing pulse. (In any case, the requirement of a high $I_p$ is consistent with the weak NZES structure observed in xenon~\cite{Wolter_PRX}, which we reproduced in \reffig{f6-wolter-prx-original-figure}.) The scaling in \eqref{e6-odd-n-energy-scaling} is certainly unfavourable, but it points the way to experiments which should be able to resolve whether this mechanism contributes or not. As a separate observation, it is interesting to note that the $\gamma^2$ term that is crucial to the scaling of the odd-$n$ trajectories is in fact also present for the even-$n$ scaling, which can be refined to the form \begin{equation} \pzsr \approx \frac F\omega \frac{2+\frac12 \gamma^2}{(n+1)\pi} = \left(2\frac{F}{\omega^2}+\frac{I_p}{F}\right) \frac{\omega}{(n+1)\pi} = \frac{2\zquiv + \zexit}{(n+1)\pi/\omega}, \label{e6-even-n-scaling-with-zexit} \end{equation} which cleanly expresses the fact, shown in \reffig{f6-trajectories-at-transitions}, that the odd-$n$ trajectories also need to traverse the tunnel exit to make their soft-recollision date with the ion; here the $\zexit$ contribution is small but it is still present. In fact, this difference in the scaling properties of the LES energy has already been observed~\cite{murnane_TCSFA_tunnel_exit}, and we reproduce the result in \reffig{f6-hickstein-scaling-original-figure}. For the odd-$n$ series, adding in the tunnel exit represents a small correction to the main result driven by the quiver radius, and this correction is mirrored by similar corrections for the cutoff position in high-harmonic generation \cite{LewensteinHHG} and in high-order above-threshold ionization \cite{ HATI_quantum_correction, HATI_quantum_correction_2, HATI_quantum_correction_3}, so this comes about as yet another example of fairly standard tunnelling theory. For our even-$n$ series of trajectories, on the other hand, this correction is applied on top of a zero result, so it becomes the driving term for the scaling dynamics of this series of trajectories. In addition to the scaling dynamics, if the odd-$n$ do get lifted from consistent-with-zero by experiments with enough resolution, there is also a specific signature in the ratio of the momenta of the different structures within each series, which is relatively universal, coming from the fact that each series scales with $n$ as $1/(n+1)$, but with even $n$ for one and odd $n$ for the other. Thus, the momentum ratios between successive peaks of the LES series are expected to go down with $n$ as $3/5,5/7,7/9,\ldots$~\cite{Rost_PRL, Rost_JPhysB}, whereas the odd-$n$ series should scale down as $1/2,2/3,3/4,\ldots$. The way things stand, however, it will be hard enough to lift even the first peak out of the experimental zero of energy. \begin{figure}[t!] \centering \includegraphics[scale=1.2]{6-LES/Figures/figure6R.png} \caption[ Experimental scaling of the LES width, as observed by D.D. 
Hickstein et al., showing departures from the naive theory caused by the tunnel width ]{ Scaling of the LES width for ionization of argon and xenon in $\SI{1.3}{\micro\meter}$ and $\SI{2}{\micro\meter}$ fields at varying intensity, with respect to the ponderomotive energy of the field. The naive scaling as per~\eqref{e6-even-n-scaling} is shown dashed, while the solid lines denote a semiclassical tunnelling theory with the tunnel exit included, as in~\eqref{e6-even-n-scaling-with-zexit}. Figure excerpted from \citer{murnane_TCSFA_tunnel_exit}. } \label{f6-hickstein-scaling-original-figure} \end{figure} \copyrightfootnote{ \reffig{f6-hickstein-scaling-original-figure} reprinted with permission from D.D. Hickstein et al., {% \hypersetup{urlcolor=black}% \href{http://dx.doi.org/10.1103/PhysRevLett.109.073004}{% \emph{Phys.\ Rev.\ Lett.} \textbf{109}, 073004 (2012)}. % ©~2012 by the American Physical Society. } } %% As per APS T&Cs \input{6-LES/Figures/Figure6-2CParameters.tex} \begin{figure}[hbt] \centering \includegraphics[scale=1]{6-LES/Figures/figure6-2C.pdf} \caption[ Scaling of the ARM spectrum, including LES and NZES peaks, as a function of $\gamma$ and the wavelength ]{ Variation of the on-axis ionization amplitude $|a(\vbp)|^2$, in an arbitrary logarithmic scale, as a function of the wavelength and the corresponding Keldysh parameter~$\gamma$. The sudden drops in amplitude of \reffig{f6-po-pp-spectrum} shift along the momentum axis with a scaling that closely matches the classical soft-recollision trajectories, shown as red dots. Here the transverse momentum $\pp$ has been chosen so that the transverse coordinate of the classical trajectory has a small but positive value, $x = 1.07\tfrac{1}{\kappa}$, at the first soft recollision at $\omega t\approx 2\pi$, to avoid the hard singularity of the Coulomb kernel. Here we take $F=\SI{\figuresixtwoCfield}{\au}$ $\kappa=\SI{\figuresixtwoCkappa}{\au}$, scaling $\gamma$ as a function of $\omega$ only. } \label{f6-spectrum-scaling} \end{figure} Coming back to the ARM results, the near-zero energy peaks shown in \reffig{f6-po-pp-spectrum}, along with similar peaks associated with the LES regime, scale exactly as they need to, which we show in \reffig{f6-spectrum-scaling}: the sharp changes in the spectrum, caused by the soft recollisions' topological transition, closely track the classical soft recollision scaling of \reffig{f6-soft-recollision-scaling}, underscoring the fundamental link between the two. In this connection, it is worth remarking here that the soft recollisions, a crucial concept for our (seemingly abstract) branch-cut navigation algorithm of chapter~\ref{chap:quantum-orbits}, are brought directly to experimental life in the form of the Low-Energy Structures. The navigation algorithm is completely dependent on the resolution of the soft recollisions, but more importantly it requires the solution of both the even-$n$ and the odd-$n$ families to allow for fully functional ARM spectra in linear fields. Thus, it is the abstract branch-cut navigation that makes the discovery of the odd-$n$ soft recollision series inescapable (in contrast, for example to other approaches, where the odd-$n$ series is still present, but it is by nature much easier to miss); our account of the NZES therefore underscores the importance of the branch-cut navigation formalism. Other recent applications, in a higher-energy scenario, also underscore this~\cite{keil_branch-cuts_2016}. 
Finally, it is also important to point out that, irrespective of the precise mechanism which translates the soft-recolliding trajectories into peaks in the photoelectron spectrum -- which can be the ARM method of tracking imaginary phases over laser-driven trajectories, but also the CCSFA method with semiclassical calculations on top of full trajectories, the ISFA interpretation in terms of single Born scattering terms, or the Monte Carlo focusing mechanism -- it is quite clear that the even-$n$ trajectories shown in \reffig{f6-trajectories-at-transitions} translate into photoelectron energy peaks, and the same should apply for the odd-$n$ trajectories, which are dynamically very similar. This can be seen, for example, in \reffig{f6-kelvich-dynamical-map}, where the first odd-$n$ recollision causes a caustic similar to the one behind the standard LES, but at NZES energies, but a closer investigation is required, on all of the mechanisms, to establish the exact nature of the connection and the contribution of this mechanism to the NZES.
{ "alphanum_fraction": 0.778335005, "avg_line_length": 81.4450704225, "ext": "tex", "hexsha": "0d4fea7f11581db2c58297d4421a2d9d498f3295", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-12-26T11:08:08.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-17T06:06:55.000Z", "max_forks_repo_head_hexsha": "3347dfb59c11db5572a4139ee3b784ad56260e76", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "episanty/PhD-Thesis", "max_forks_repo_path": "6-LES/LowEnergyStructures.tex", "max_issues_count": 21, "max_issues_repo_head_hexsha": "3347dfb59c11db5572a4139ee3b784ad56260e76", "max_issues_repo_issues_event_max_datetime": "2021-08-31T16:38:58.000Z", "max_issues_repo_issues_event_min_datetime": "2017-02-22T19:26:54.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "episanty/PhD-Thesis", "max_issues_repo_path": "6-LES/LowEnergyStructures.tex", "max_line_length": 1204, "max_stars_count": 5, "max_stars_repo_head_hexsha": "3347dfb59c11db5572a4139ee3b784ad56260e76", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "episanty/PhD-Thesis", "max_stars_repo_path": "6-LES/LowEnergyStructures.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-19T12:32:56.000Z", "max_stars_repo_stars_event_min_datetime": "2016-11-16T19:28:08.000Z", "num_tokens": 22522, "size": 86739 }
% Will house all of the introduction components % Each chapter gets it's own file \part{Introduction} \input{Introduction/WhatIsDabo} \input{Introduction/HistoryOfDabo} \input{Introduction/DaboInstallation} % \section{Summary} % Dabo is a framework built on Python that provides a clean API for developers to %build data-aware business applications that are cross-platform. In addition to this %underlying framework, Dabo also provides some power tools, such as a visual UI %designer based on wxGlade, for designing and laying out your forms, menus, and % other UI elements, and wizards and demo applications for getting started. These % power tools are discussed elsewhere in this book.
{ "alphanum_fraction": 0.7766853933, "avg_line_length": 41.8823529412, "ext": "tex", "hexsha": "be35d851ef7cfb366a4ce4de9eb2aafd34d3a2c2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8b384018f2283fe9f37fd1fdd44fc807fbb5808c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dabodev/dabodoc", "max_forks_repo_path": "book/Introduction/Introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8b384018f2283fe9f37fd1fdd44fc807fbb5808c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dabodev/dabodoc", "max_issues_repo_path": "book/Introduction/Introduction.tex", "max_line_length": 86, "max_stars_count": null, "max_stars_repo_head_hexsha": "8b384018f2283fe9f37fd1fdd44fc807fbb5808c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dabodev/dabodoc", "max_stars_repo_path": "book/Introduction/Introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 157, "size": 712 }
\documentclass{beamer} \usepackage[utf8]{inputenc} \usepackage{hyperref} \usetheme{Madrid} \setbeamertemplate{headline}{} \setbeamertemplate{items}[circle] \beamertemplatenavigationsymbolsempty \title{Open Source} \subtitle{Principles of Open Source} \author{Florian Weingartshofer} \institute{FH Hagenberg} \date{\today} \subject{English} %\logo{\includegraphics[scale=0.01]{./img/tux.png}} \begin{document} \titlepage \begin{frame} \frametitle{Outline} \tableofcontents \end{frame} \section{Definition} \section{Usage} \section{Tutorial} \section{Free Software} \begin{frame} \frametitle{What's Open Source?} \begin{columns} \begin{column}{0.5\textwidth} % A way of developing Software \begin{center} \Large Transparency Collaboration Release early and often Meritocracy Community \end{center} \end{column} \begin{column}{0.5\textwidth} %%<--- here \begin{center} \begin{figure} \includegraphics[scale=0.03]{./img/open.jpg} \caption{unsplash.com/@stairhopper} \end{figure} \end{center} \end{column} \end{columns} \end{frame} \begin{frame} \frametitle{Who's Using It?} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \begin{figure} \includegraphics[scale=0.06]{./img/tux.png} \caption{Tux} \end{figure} \end{center} \end{column} \begin{column}{0.4\textwidth} \begin{block}{Everybody Is Using OS!} Web-Dev Programming Languages Protocols \end{block} \begin{exampleblock}{Many are Contributing!} \begin{enumerate} \item Microsoft \item Google \item Red Hat \item IBM \end{enumerate} \end{exampleblock} \end{column} \end{columns} \end{frame} \begin{frame} \frametitle{Open Source Crash Course} \begin{columns} \begin{column}{0.4\textwidth} \Large Source Hosting Add a License: \href{https://choosealicense.com/} {choosealicense.com} Upload Files Thats it! \end{column} \begin{column}{0.3\textwidth} \begin{block}{Source Hosting} GitHub.com GitLab.com BitBucket.org \end{block} \begin{block}{Licenses} MIT GPLv2.0 or GPLv3.0 BSD CC-BY-4.0 \end{block} \end{column} \end{columns} \begin{figure} \includegraphics[scale=0.3]{./img/git.png} \caption{Git} \end{figure} \end{frame} \begin{frame} \frametitle{Free Software} \begin{columns} \begin{column}{0.6\textwidth} \begin{figure} \includegraphics[scale=0.3]{./img/gnu_penguin.png} \caption {GNU and Penguin; Free Arts License} \end{figure} \end{column} \begin{column}{0.4\textwidth} \Large Political Hard Liner Stricter Criteria \end{column} \end{columns} \end{frame} \begin{frame} \frametitle{Questions?} \begin{center} \begin{figure} \includegraphics[scale=0.3]{./img/tux_questions.png} \end{figure} The Presentation is obviously Open Source at \underline{ \href{https://github.com/flohero/opensource-presentation} {github.com/flohero/opensource-presentation} } \end{center} \end{frame} \begin{frame} \frametitle{Sources} \begin{center} \Large \href{https://opensource.com/open-source-way}{opensource.com} \href{https://twitter.com/filmaj}{twitter.com/filmaj} \href{https://en.wikipedia.org}{wikipedia.org} \href{https://github.com}{github.com} \href{https://www.gnu.org/philosophy/open-source-misses-the-point}{gnu.org} \end{center} \end{frame} \end{document}
{ "alphanum_fraction": 0.620292887, "avg_line_length": 20.6702702703, "ext": "tex", "hexsha": "2fa5f1c30af186de877289b902a8db970d213ecb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "106fcd5eeabfe6fc6e00b338f65067962e48a283", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "flohero/opensource", "max_forks_repo_path": "opensource.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "106fcd5eeabfe6fc6e00b338f65067962e48a283", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "flohero/opensource", "max_issues_repo_path": "opensource.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "106fcd5eeabfe6fc6e00b338f65067962e48a283", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "flohero/opensource", "max_stars_repo_path": "opensource.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1136, "size": 3824 }
\section{Usage} \subsection{Workflow} Creating a spectrum generator with \flexisusy consists of the following steps: % \begin{enumerate} \setcounter{enumi}{-1} \item Creating a \emph{SARAH model file}. The SARAH model file must be loadable via \code{Start["<model-name>"]}. Note, that SARAH already ships a lot of pre-defined model files in \code{SARAH/Models/} that can be used. \item Creating a \emph{\flexisusy model file} in \code{models/<model-name>/}. The \code{createmodel} script will help you creating one, see Section~\ref{sec:createmodel}. For more details on how to write a \flexisusy model file see Section~\ref{sec:model-files}. \item \emph{Configure} \flexisusy with your desired model, see Section~\ref{sec:configure}. \item Run \code{make}. This will create the C++ code for your spectrum generator and compile it, see Section~\ref{sec:make}. \end{enumerate} % The steps 1--3 are illustrated in \figref{fig:workflow}. \begin{figure}[tbh] \centering \begin{tikzpicture}[node distance = 3.5cm, auto] \node[block, text width=14em] (createmodel) {% \code{./createmodel --name=NMSSM}}; \node[block, text width=14em, below of=createmodel] (configure) {% \code{./configure --with-models=NMSSM}}; \node[block, text width=14em, below of=configure] (make) {% \code{make}}; \node[cloud, text width=22em, right=1.6cm of createmodel] (createmodel-output) {% Creates the files:\\% \code{./models/NMSSM/module.mk}\\% \code{./models/NMSSM/start.m}\\% \code{./models/NMSSM/FlexibleSUSY.m}}; \node[cloud, text width=22em, right=1.6cm of configure] (configure-output) {% Creates \code{./Makefile} which includes\\% \code{./models/NMSSM/module.mk}}; \node[cloud, text width=22em, right=1.6cm of make] (make-output) {% \begin{enumerate}\itemsep0em \item Runs the meta code \code{math -run "Get[\"./models/NMSSM/start.m\"];"} which creates the C++ source files. \item The source files are then compiled to \code{./models/NMSSM/libNMSSM.a} and linked against the executable \code{./models/NMSSM/run\_NMSSM.x}. \end{enumerate}}; \path[arrow] (createmodel) -- (configure); \path[arrow] (configure) -- (make); \path[draw, dashed] (createmodel) -- (createmodel-output); \path[draw, dashed] (configure) -- (configure-output); \path[draw, dashed] (make) -- (make-output); \end{tikzpicture} \caption{\flexisusy workflow} \label{fig:workflow} \end{figure} \subsection{Basic commands} Explain: \begin{itemize} \item ./createmodel \item ./configure \item make \item Mathematica interface (see models/<model>/start.m) \end{itemize} \subsubsection{\code{createmodel}} \label{sec:createmodel} The \code{createmodel} script sets up a \flexisusy\ model. 
This involves % \begin{itemize} \item a model directory \code{models/<flexiblesusy-model>/} \item a makefile module \code{models/<flexiblesusy-model>/module.mk} \item a \flexisusy\ model file \code{models/<flexiblesusy-model>/FlexibleSUSY.m} \item a Mathematica start script \code{models/<flexiblesusy-model>/start.m} \end{itemize} % Usage: \begin{lstlisting}[language=bash] ./createmodel --name=<flexiblesusy-model> --sarah-model=<sarah-model> \end{lstlisting} Here \code{<flexiblesusy-model>} is the name of \flexisusy\ model to be created and \code{<sarah-model>} is the name of the \sarah\ model file which defines the Lagrangian and the particles.\\ Example: \begin{lstlisting}[language=bash] ./createmodel --name=MyCMSSM --sarah-model=MSSM \end{lstlisting} % % If a certain \sarah\ sub-model should be used, the name of the sub-model has to be appended with at preceeding slash.\\ Example: use the \code{CKM} sub-model of the \code{MSSM} \begin{lstlisting}[language=bash] ./createmodel --name=MyCMSSM --sarah-model=MSSM/CKM \end{lstlisting} % For further information and options see \begin{lstlisting}[language=bash] ./createmodel --help \end{lstlisting} \subsubsection{\code{configure}} \label{sec:configure} The \code{configure} script checks for the installed compilers, libraries and Mathematica. If all of these exists in a sufficent version, the \code{Makefile} is created, which contains the information on how to compile the code. The user has to specify the models which should be included in the build via the \code{--with-models=} option, e.g. % \begin{lstlisting}[language=bash] ./configure --with-models=<flexibsusy-model> \end{lstlisting} % Here \code{<flexibsusy-model>} is either \code{all} or a comma separated list of \flexisusy models. Furthermore, the user can select which RG solver algorithms to use via the \code{--with-algorithms=} option.\\ Example: % \begin{lstlisting}[language=bash] ./configure --with-algorithms=two_scale \end{lstlisting} The \code{configure} script further allows to select the C++ and Fortran compilers, the Mathematica kernel command as well as paths to libraries and header files. See \code{configure --help} for all available options.\\ Example: % \begin{lstlisting}[language=bash] ./configure --with-models=MSSM --with-cxx=clang++ --with-boost-incdir=/usr/include/ --with-boost-libdir=/usr/lib/ \end{lstlisting} \subsubsection{\code{make}} \label{sec:make} Running \code{make} will create the C++ source code for your spectrum generator and compile it. These two processes are controled in the makefile module \code{models/<model-name>/module.mk}. See, Section~\ref{sec:makefile-modules} for details how to create your own makefile module. \subsection{Model files} \label{sec:model-files} Explain how to write a \flexisusy\ model file (starting from a \sarah\ model file). A \flexisusy\ model file is a Mathematica file, where the following pieces are defined: \paragraph{General model information} \subparagraph{FSModelName} Name of the \flexisusy\ model. Example (MSSM): \begin{lstlisting}[language=Mathematica] FSModelName = "MSSM"; \end{lstlisting} \subparagraph{OnlyLowEnergyFlexibleSUSY} If set to True, creates a spectrum generator without high-scale constraint, i.e.\ only a low-scale and susy-scale constraint will be created (default: False). In this case all model parameters, except for gauge and Yukawa couplings are input at the susy scale. This option is similar to \code{OnlyLowEnergySPheno} in SARAH/SPheno. 
\\ Example: \begin{lstlisting}[language=Mathematica] OnlyLowEnergyFlexibleSUSY = False; (* default *) \end{lstlisting} \paragraph{Input and output parameters} \subparagraph{MINPAR} This list of two-component lists defines model input parameters and their SLHA keys. These parameters will be read from the MINPAR block in the SLHA file. \\ Example (MSSM): \begin{lstlisting}[language=Mathematica] MINPAR = { {1, m0}, {2, m12}, {3, TanBeta}, {4, Sign[\[Mu]]}, {5, Azero} }; \end{lstlisting} \subparagraph{EXPAR} This list of two-component lists defines further model input parameters and their SLHA keys. These parameters will be read from the EXTPAR block in the SLHA file. \\ Example (E6SSM): \begin{lstlisting}[language=Mathematica] EXTPAR = { {61, LambdaInput}, {62, KappaInput}, {63, muPrimeInput}, {64, BmuPrimeInput}, {65, vSInput} }; \end{lstlisting} \paragraph{Constraints} Currently three constraints are supported: low-scale, susy-scale and high-scale constraints. In \flexisusy\ they are named as \code{LowScale}, \code{SUSYScale} and \code{HighScale}. For each constraint there is a scale definition (named after the constraint), an initial guess for the scale (concatenation of the constraint name and \code{FirstGuess}) and a list settings to be applied at the constraint (concatenation of the constraint name and \code{Input}). \\ Example (MSSM): \begin{lstlisting}[language=Mathematica] (* susy-scale constraint *) SUSYScale = Sqrt[M[Su[1]]*M[Su[6]]]; (* scale definition *) SUSYScaleFirstGuess = Sqrt[m0^2 + 4 m12^2]; (* first scale guess *) SUSYScaleInput = {}; (* nothing is set here *) (* high-scale constraint *) HighScale = g1 == g2; (* scale definition *) HighScaleFirstGuess = 2.0 10^16; (* first scale guess *) HighScaleInput = { {T[Ye], Azero*Ye}, {T[Yd], Azero*Yd}, {T[Yu], Azero*Yu}, {mHd2, m0^2}, {mHu2, m0^2}, {mq2, UNITMATRIX[3] m0^2}, {ml2, UNITMATRIX[3] m0^2}, {md2, UNITMATRIX[3] m0^2}, {mu2, UNITMATRIX[3] m0^2}, {me2, UNITMATRIX[3] m0^2}, {MassB, m12}, {MassWB,m12}, {MassG, m12} }; LowScale = LowEnergyConstant[MZ]; (* scale definition *) LowScaleFirstGuess = LowEnergyConstant[MZ]; (* first scale guess *) LowScaleInput = { {vd, 2 MZDRbar / Sqrt[GUTNormalization[g1]^2 g1^2 + g2^2] Cos[ArcTan[TanBeta]]}, {vu, 2 MZDRbar / Sqrt[GUTNormalization[g1]^2 g1^2 + g2^2] Sin[ArcTan[TanBeta]]} }; \end{lstlisting} \paragraph{Initial parameter guesses} At the \code{LowScale} and \code{HighScale} it is recommended to make an initial guess for the model parameters. This can be done via \begin{lstlisting}[language=Mathematica] InitialGuessAtLowScale = { {vd, LowEnergyConstant[vev] Cos[ArcTan[TanBeta]]}, {vu, LowEnergyConstant[vev] Sin[ArcTan[TanBeta]]} }; InitialGuessAtHighScale = { {\[Mu] , 1.0}, {B[\[Mu]], 0.0} }; \end{lstlisting} \paragraph{Setting the tree-level EWSB eqs.\ solution by hand} In case the meta code cannot find an analytic solution to the tree-level EWSB eqs., one can set solutions by hand in the model file using the variable \code{TreeLevelEWSBSolution}. \\ Example (MSSM): \begin{lstlisting}[language=Mathematica] TreeLevelEWSBSolution = { {\[Mu] , ... }, {B[\[Mu]], ... 
} }; \end{lstlisting} \begin{sidewaystable}[tb] \centering \begin{tabularx}{\textwidth}{>{\ttfamily}l>{\ttfamily}lX} \toprule variable & default value & description \\ \midrule FSModelName & Model`Name & Name of the \flexisusy\ model \\ OnlyLowEnergyFlexibleSUSY & False & low-energy model \\ MINPAR & \{\} & list of input parameters in SLHA MINPAR block \\ EXTPAR & \{\} & list of input parameters in SLHA EXTPAR block \\ LowScale & LowEnergyConstant[MZ] & Standard Model matching scale (in GeV) \\ LowScaleFirstGuess & LowEnergyConstant[MZ] & first guess for the Standard Model matching scale (in GeV) \\ LowScaleInput & \{\} & settings applied at the low scale \\ SUSYScale & 1000 & scale of supersymmetric particle masses (in GeV) \\ SUSYScaleFirstGuess & 1000 & first guess for the susy scale (in GeV) \\ SUSYScaleInput & \{\} & settings applied at the susy scale \\ HighScale & SARAH`hyperchargeCoupling == SARAH`leftCoupling & unification scale (in GeV) \\ HighScaleFirstGuess & $2\cdot 10^{16}$ & first guess for the unification scale (in GeV) \\ HighScaleInput & \{\} & settings applied at the unification scale \\ \bottomrule \end{tabularx} \caption{\flexisusy\ model file variables} \label{tab:model-file-variables} \end{sidewaystable} \subsection{Makefile modules} \label{sec:makefile-modules} Explain how to write a custom makefile module (module.mk) for a \flexisusy\ model.
{ "alphanum_fraction": 0.7065071856, "avg_line_length": 35.1191222571, "ext": "tex", "hexsha": "9509ebad33e9fad3c4f9d469ba3c0a073d4a5001", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a38bd6fc10d781e71f2adafd401c76e1e3476b05", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "aaronvincent/gambit_aaron", "max_forks_repo_path": "contrib/MassSpectra/flexiblesusy/doc/chapters/usage.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a38bd6fc10d781e71f2adafd401c76e1e3476b05", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "aaronvincent/gambit_aaron", "max_issues_repo_path": "contrib/MassSpectra/flexiblesusy/doc/chapters/usage.tex", "max_line_length": 110, "max_stars_count": null, "max_stars_repo_head_hexsha": "a38bd6fc10d781e71f2adafd401c76e1e3476b05", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "aaronvincent/gambit_aaron", "max_stars_repo_path": "contrib/MassSpectra/flexiblesusy/doc/chapters/usage.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3372, "size": 11203 }
\documentclass[a4paper,twocolumn]{article} \usepackage{enumitem} \usepackage{tikz} \usetikzlibrary{arrows,automata} \usepackage{graphicx} \usepackage{mathtools} \usepackage{marvosym} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[a4paper, top=0.8in, bottom=1.0in, left=0.8in, right=0.8in]{geometry} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \usepackage{verbatim} \usepackage{minted} \setcounter{secnumdepth}{0} \begin{document} \title{A Introduction to an Algebra of labelled Graphs} \author{Anton Lorenzen} \date{March 2018} \maketitle The algebraic-graphs package, or just alga, is a library for constructing graphs in Haskell using a functional interface. This is a ground up introduction to alga. You should definitely read it from the beginning on though, even if you have already read the original functional pearl\footnote{https://github.com/snowleopard/alga-paper} since some definitions are different. \section{A Introduction to Algebraic Graphs} Think of any (finite) graph. As you probably know a graph can be represented as a matrix. Let's say we have the vertices $V = (a, b, c, d, e)$: \begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.0cm, semithick] \node[state] (C) {$c$}; \node[state] (B) [below left of=C] {$b$}; \node[state] (D) [below right of=C] {$d$}; \node[state] (A) [below right of=B] {$a$}; \node[state] (E) [right of=D] {$e$}; \path (A) edge node {} (B) edge node {} (C) (B) edge [loop above] node {} (B) edge node {} (C) (C) edge node {} (D) edge [bend left] node {} (E) (D) edge node {} (A) (E) edge [bend left] node {} (A); \end{tikzpicture} \end{center} This is equivalent to the following matrix: \[ A= \begin{bmatrix} 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{bmatrix} \] We can decompose $A$ into 25 matrices each containing one of $A$'s elements and zeros everywhere else: \[ A= \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} + \dots \] TODO: (also note that $A + A = A$, since $1$ is the maximum element in each cell). We can write each of these matrices as $FromVertex\xrightarrow{}ToVertex$ in the case of a $1$ and $\varepsilon$ in the case of a zero. Therefore we have: \[ A = \varepsilon + a\xrightarrow{}b + \dots \] We can therefore represent every matrix using the following data type: \begin{minted}{haskell} newtype Vertex a = Vertex a data Graph a = Empty | Connect Vertex Vertex | Overlay (Graph a) (Graph a) \end{minted} Here \texttt{Connect} corresponds to $\xrightarrow{}$, \texttt{Empty} to $\varepsilon$ and \texttt{Overlay} to $+$. We require that \texttt{Overlay} is associative, commutative, has \texttt{Empty} as an identity element and is idempotent ($a\xrightarrow{}b + a\xrightarrow{}b = a\xrightarrow{}b$) just like our matrix addition above. Now, this isn't too revolutionary since we are basically just building an unbalanced binary tree of edges. But it gets more interesting if we allow \texttt{Connect} to connect more than simple vertices. 
For example, maybe we could define the following:
\begin{minted}[escapeinside=~~]{haskell}
data Graph a = ~$\varepsilon$~
             | Vertex a
             | (Graph a) ~$\xrightarrow{}$~ (Graph a)
             | (Graph a) ~$+$~ (Graph a)

[x,y,z] = map Vertex ["x", "y", "z"]
\end{minted}
With the laws:
\begin{minted}[escapeinside=~~]{haskell}
x ~$\xrightarrow{}$~ (y ~$+$~ z) == (x ~$\xrightarrow{}$~ y) ~$+$~ (x ~$\xrightarrow{}$~ z)
(x ~$+$~ y) ~$\xrightarrow{}$~ z == (x ~$\xrightarrow{}$~ z) ~$+$~ (y ~$\xrightarrow{}$~ z)
\end{minted}
That opens up the question of what
\begin{minted}[escapeinside=~~]{haskell}
x ~$\xrightarrow{}$~ y == x ~$\xrightarrow{}$~ (y ~$+$~ ~$\varepsilon$~) == (x ~$\xrightarrow{}$~ y) ~$+$~ (x ~$\xrightarrow{}$~ ~$\varepsilon$~)
\end{minted}
means for the $x \xrightarrow{} \varepsilon$ part. ``Obviously, it needs to be $\varepsilon$,'' you might say, and you would then end up with a $\xrightarrow{}$ that looks pretty much like matrix multiplication. But alga took a different route and added another law, which gives it a different spin:
\begin{minted}[escapeinside=~~]{haskell}
x ~$\xrightarrow{}$~ ~$\varepsilon$~ == ~$\varepsilon$~ ~$\xrightarrow{}$~ x == x
x ~$\xrightarrow{}$~ (y ~$\xrightarrow{}$~ z) == (x ~$\xrightarrow{}$~ (y ~$+$~ z)) ~$+$~ (y ~$\xrightarrow{}$~ z)
(x ~$\xrightarrow{}$~ y) ~$\xrightarrow{}$~ z == ((x ~$+$~ y) ~$\xrightarrow{}$~ z) ~$+$~ (x ~$\xrightarrow{}$~ y)
\end{minted}
This law is what makes alga different from existing approaches, because it does away with the difference between vertices and edges: the $\xrightarrow{}$ constructor of the graph preserves the vertices contained in it. We can even derive this property from the laws:
\begin{align*}
x \xrightarrow{} y &= (x \xrightarrow{} y) \xrightarrow{} \varepsilon \\
&= ((x + y) \xrightarrow{} \varepsilon) + (x \xrightarrow{} y) \\
&= x + y + (x \xrightarrow{} y)
\end{align*}
Convince yourself that we can see $x$ and $y$ as arbitrary graphs from now on and not just as vertices. We will also assume that $\xrightarrow{}$ binds more tightly than $+$.

So, what is a good intuition for $\xrightarrow{}$? To come back to our matrix $A$ above, we can write
\[ (a + b) \xrightarrow{} (d + e) =
\begin{bmatrix}
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
\]
Exercises:
\begin{itemize}
\item Show that the laws imply that $\xrightarrow{}$ is associative.
\item Show that the idempotence of $+$ and $\varepsilon$ being the identity of $+$ follow from the other laws.
\item Show that $x \xrightarrow{} x \xrightarrow{} x = x \xrightarrow{} x$.
\item Show that we can derive the decomposition law from the original paper:
\[ a \xrightarrow{} b \xrightarrow{} c = a \xrightarrow{} b + a \xrightarrow{} c + b \xrightarrow{} c \]
\end{itemize}

\subsection{About intuition}
Now that you have worked through the exercises, you will probably agree that we can characterize a graph by the following laws:
\begin{itemize}
\item Addition: $+$ is associative and commutative.
\item Identity: $\varepsilon$ is the identity of $\xrightarrow{}$.
\item Distribution: \begin{align*} x \xrightarrow{} (y + z) &= (x \xrightarrow{} y) + (x \xrightarrow{} z) \\ (x + y) \xrightarrow{} z &= (x \xrightarrow{} z) + (y \xrightarrow{} z) \end{align*} \item Extraction: \begin{align*} x \xrightarrow{} (y \xrightarrow{} z) &= (x \xrightarrow{} (y + z)) + (y \xrightarrow{} z) \\ (x \xrightarrow{} y) \xrightarrow{} z &= ((x + y) \xrightarrow{} z) + (x \xrightarrow{} y) \end{align*} \end{itemize} The intuition I gave for these laws above might help you to understand what these laws mean in the contexts of graphs and indeed all implementations of these laws as adjacency maps or sets follow this intuition. However it falls short, if we go outside the context of graphs, so I would like to show you some statements you can't derive from the laws: \begin{itemize} \item \textit{$+$ and $\xrightarrow{}$ are distinct}. More specifically, every associative, commutative, idempotent operation with an identity fulfills the laws. \item \textit{$\varepsilon$ denotes the empty matrix}. Consider for example $a + b = a \xrightarrow{} b = \text{min}(a,b)$ with the identity being $\infty$. \end{itemize} \section{Labelled Graphs} Let's go back to our graph from the beginning and add labels to the edges. \begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.0cm, semithick] \node[state] (C) {$c$}; \node[state] (B) [below left of=C] {$b$}; \node[state] (D) [below right of=C] {$d$}; \node[state] (A) [below right of=B] {$a$}; \node[state] (E) [right of=D] {$e$}; \path (A) edge node {k} (B) edge node {l} (C) (B) edge [loop above] node {m} (B) edge node {n} (C) (C) edge node {o} (D) edge [bend left] node {p} (E) (D) edge node {q} (A) (E) edge [bend left] node {r} (A); \end{tikzpicture} \end{center} This is equivalent to the following matrix: \[ A= \begin{bmatrix} 0 & k & l & 0 & 0 \\ 0 & m & n & 0 & 0 \\ 0 & 0 & 0 & o & p \\ q & 0 & 0 & 0 & 0 \\ r & 0 & 0 & 0 & 0 \end{bmatrix} \] Here you can think of $k, l, m, \dots$ as variable names standing for strings, numbers or anything else. We will come back to which elements exactly we can use here in a minute. Again we can decompose $A$ into 25 matrices each containing one of $A$'s elements and zeros everywhere else: \[ A= \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & k & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} + \dots \] And write each of these matrices as $FromVertex\xrightarrow{edge}ToVertex$. Therefore we have: \[ A = a\xrightarrow{0}a + a\xrightarrow{k}b + \dots \] We can therefore represent every labelled graph as the following data type: \begin{minted}[escapeinside=~~]{haskell} data Graph l a = ~$\varepsilon$~ | Vertex a | (Graph a) ~$\xrightarrow{l}$~ (Graph a) | (Graph a) ~$+$~ (Graph a) [x,y,z] = map Vertex ["x", "y", "z"] [k,l,m,n] = ["k", "l", "m", "n"] \end{minted} With the new laws: \begin{itemize} \item Addition: $+$ is associative and commutative. \item Identity: $\varepsilon$ is the identity of $\xrightarrow{l}$. 
\item Distribution:
\begin{align*}
x \xrightarrow{k} (y + z) &= (x \xrightarrow{k} y) + (x \xrightarrow{k} z) \\
(x + y) \xrightarrow{k} z &= (x \xrightarrow{k} z) + (y \xrightarrow{k} z)
\end{align*}
\item Extraction:
\begin{align*}
x \xrightarrow{k} (y \xrightarrow{l} z) &= (x \xrightarrow{k} (y + z)) + (y \xrightarrow{l} z) \\
(x \xrightarrow{k} y) \xrightarrow{l} z &= ((x + y) \xrightarrow{l} z) + (x \xrightarrow{k} y)
\end{align*}
\item Absorption:
\begin{align*}
x \xrightarrow{k} y + x \xrightarrow{l} y &= x \xrightarrow{k + l} y
\end{align*}
\end{itemize}

Exercise: Under which circumstances is the new $\xrightarrow{l}$ still associative?

\subsection{Semirings}
You have probably noticed that we used a new $+$ operation on our labels above, and, as you might have guessed, there are laws for it too, since we would otherwise run into contradictions:
\begin{itemize}
\item $+$ is commutative, associative and idempotent.
\item There is an element called $0$ which acts as an identity for $+$.
\end{itemize}
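If you want to experiment with the labelled algebra in GHCi, here is one possible plain-Haskell rendering of the data type above; it is only a sketch, and the ordinary constructor names replacing the symbols are our choice. The label type \texttt{l} is where the $+$ operation on labels, with the laws above, would live.
\begin{minted}{haskell}
-- A plain-Haskell rendering of the labelled data type,
-- with ordinary constructor names instead of the symbols.
data Graph l a
  = Empty                                -- epsilon
  | Vertex a
  | Connect l (Graph l a) (Graph l a)    -- the labelled arrow
  | Overlay (Graph l a) (Graph l a)      -- (+)

-- The labelled edge from a to b in the matrix above.
edgeAB :: Graph String String
edgeAB = Connect "k" (Vertex "a") (Vertex "b")
\end{minted}

\end{document}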
{ "alphanum_fraction": 0.6242353365, "avg_line_length": 35.5143769968, "ext": "tex", "hexsha": "d3cbd287827066ded3a3a5cd9aa90a1768e0ab6a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1344493bce87e7485912ad5e1628dd31c383d1bd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "anfelor/alga-tutorials", "max_forks_repo_path": "Laws.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1344493bce87e7485912ad5e1628dd31c383d1bd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "anfelor/alga-tutorials", "max_issues_repo_path": "Laws.tex", "max_line_length": 142, "max_stars_count": 3, "max_stars_repo_head_hexsha": "1344493bce87e7485912ad5e1628dd31c383d1bd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "anfelor/alga-tutorials", "max_stars_repo_path": "Laws.tex", "max_stars_repo_stars_event_max_datetime": "2018-04-25T11:39:05.000Z", "max_stars_repo_stars_event_min_datetime": "2018-04-15T01:39:19.000Z", "num_tokens": 3742, "size": 11116 }
\section{Conclusion}
\label{Conclusion}
%what we did. sum up key technologies, what needs to happen for technologies that are not "key", importance of rapid change/suggested timeline.
We simulated five transition scenarios that assess potential pathways to meeting 2030 and 2050 emission targets within the Japanese electricity supply system, together with their long-term impact up to 2100.
Our transition scenarios show that meeting the emission goals without new nuclear or other new low-emission technologies is infeasible, and that such an endeavour is likely to be an expensive failure.
These results also demonstrate that the emission goals can be met by investing heavily in nuclear, investing heavily in hydrogen, or using a combination of both.
Scenarios that incorporate nuclear are the most cost-effective, and using a combination of nuclear and hydrogen leads to the greatest emission reduction post-2050.
Key technologies that emerge from our results include nuclear power and hydrogen from renewables, while \gls{CCS} with natural gas and photochemical water splitting (\gls{PWS}) play only a nominal role.
CCS with coal, steam reforming with or without CCS, new coal, and new oil are not utilised due to their high direct and life-cycle emissions.
Our analysis indicates that, while politically challenging, a hybrid nuclear-hydrogen strategy is economically feasible and results in long-term emission reduction.
Such a multifaceted approach to emission reduction is also likely to improve decarbonisation outcomes, since the commercialisation and deployment of hydrogen in time to meet the 2030 and 2050 emission goals is uncertain.

Mitigating emissions from the industrial and transportation sectors presents unique challenges that may affect the amount of emission reduction required from the electricity supply sector to meet Japan's 2030 and 2050 goals.
Future work should incorporate a holistic assessment of the entire Japanese energy system when exploring energy transition pathways.
The assessment of synergistic utilisation of hydrogen in transportation and industry, alongside electricity storage and supply, is vital for policy decisions.
The effect of transportation media such as trucks and pipelines on hydrogen and CCS is also worth investigating.
Any new technologies that develop in the future and promise rapid decarbonisation should also be incorporated in such work.
Finally, economic feasibility analyses with respect to national budget requirements and projected GDP trends must also be conducted to improve decarbonisation strategies, improve social outcomes, and delineate investment goals for the energy sector.

%\section{Future work}
%grid resilience - analysis of grid stability of similar mixes at the resolution of minutes. Maybe in TIMES (ugh).

%\section{Declaration of Competing Interest}
%The authors declare no conflict of interest.
{ "alphanum_fraction": 0.8311095506, "avg_line_length": 258.9090909091, "ext": "tex", "hexsha": "b442b362c00957b273ad14f5feba5bee83cd7154", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-01-03T20:00:35.000Z", "max_forks_repo_forks_event_min_datetime": "2018-01-03T20:00:35.000Z", "max_forks_repo_head_hexsha": "29248cf5fdb0fa8c20f2a58794ef028448bb3e41", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "arfc/i2cner", "max_forks_repo_path": "publications/2020-03-paper/revisions/conclusion.tex", "max_issues_count": 87, "max_issues_repo_head_hexsha": "29248cf5fdb0fa8c20f2a58794ef028448bb3e41", "max_issues_repo_issues_event_max_datetime": "2021-01-19T23:18:20.000Z", "max_issues_repo_issues_event_min_datetime": "2018-01-05T21:27:55.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "arfc/i2cner", "max_issues_repo_path": "publications/2020-03-paper/revisions/conclusion.tex", "max_line_length": 1433, "max_stars_count": 1, "max_stars_repo_head_hexsha": "29248cf5fdb0fa8c20f2a58794ef028448bb3e41", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "arfc/I2CNER", "max_stars_repo_path": "publications/2020-03-paper/revisions/conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-01T03:40:33.000Z", "max_stars_repo_stars_event_min_datetime": "2020-11-01T03:40:33.000Z", "num_tokens": 523, "size": 2848 }
\label{sec:bckgdestimation} \subsection{\mjj\xspace binning in the resonance analyses} The \mjj\ bin size is optimized to minimize JER effects, following the studies performed for the 2015 dijet analysis \cite{EXOT-2015-02} and the Run 2 dijet analysis \cite{Nishu:2646455}. The following \mjj\ binning is used, which is the same used in the prior Run-2 analyses: bins = [ 946, 976, 1006, 1037, 1068, 1100, 1133, 1166, 1200, 1234, 1269, 1305, 1341, 1378, 1416, 1454, 1493, 1533, 1573, 1614, 1656, 1698, 1741, 1785, 1830, 1875, 1921, 1968, 2016, 2065, 2114, 2164, 2215, 2267, 2320, 2374, 2429, 2485, 2542, 2600, 2659, 2719, 2780, 2842, 2905, 2969, 3034, 3100, 3167, 3235, 3305, 3376, 3448, 3521, 3596, 3672, 3749, 3827, 3907, 3988, 4070, 4154, 4239, 4326, 4414, 4504, 4595, 4688, 4782, 4878, 4975, 5074, 5175, 5277, 5381, 5487, 5595, 5705, 5817, 5931, 6047, 6165, 6285, 6407, 6531, 6658, 6787, 6918, 7052, 7188, 7326, 7467, 7610, 7756, 7904, 8055, 8208, 8364, 8523, 8685, 8850, 9019, 9191, 9366, 9544, 9726, 9911, 10100, 10292, 10488, 10688, 10892, 11100, 11312, 11528, 11748, 11972, 12200, 12432, 12669, 12910, 13156 ] ~\GeV\. \\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Resonance Analysis: Sliding Window Fit (SWiFt)} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% In the resonant search the SM background of the \mjj\ spectrum is determined by a functional fit to the data. Previous searches, from ATLAS and other experiments (such as Refs.~\cite{Bagnaia:1984ip,PhysRevD.79.112002,EXOT-2010-01,CMS-EXO-10-010,EXOT-2010-07,EXOT-2013-11}) have found that a parametric function of the form \begin{equation} f(x) = p_1 (1 - x)^{p_2} x^{p_3 + p_4\ln x + p_5 (\ln x)^2}, \label{Eq:fitfunction} \end{equation} where $x \equiv \mjj /\sqrt{s}$, accurately describes dijet mass distribution predicted by leading and next-to-leading-order QCD Monte Carlo. In the ATLAS 2015 analysis with 3.57~\ifb\ of data the three parameter ($p_4, p_5 = 0$) function sufficiently described the data, while the previous paper publication with 37.0~\ifb\ of data, the four parameter version of the function was found to properly describe the QCD background. Experience with past experiments has shown that with increased statistics require more and more parameters to properly describe the full invariant mass spectrum. In a effort to prevent the possible breakdown of our fit function with a high integrated luminosity, the global function fit has been replaced by the Sliding Window Fit method (SWiFt), replacing a fit on the full spectrum with a sliding localized fit on smaller \mjj~ ranges where we expect the function in Equation \ref{Eq:fitfunction} to properly model the QCD background contribution even with very high statistics. This approach was used in the previous dijet search\cite{EXOT-2016-21}, where the results were cross-checked against the four-parameter global fit function, and in the higher-statistics TLA dijet search \cite{Nishu:2646455}. SWiFt produces a non-parametric global background model that can be used to search for excesses in the mass spectrum, and to provide inputs to HistFitter to assess limits on specific benchmark signal models. The methodology behind the SWiFt method is described in \cite{Sekhon:2305523}. \subsubsection{SWiFt Background} \label{sec:SwiftBkg} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The SWiFt background is extracted from the data by fitting it using an analytic function. 
In smaller mass windows, the function is fit to the data and the evaluation of the function at the window's center bin is taken to be the background estimate for that bin. By sliding the window over the entire mass range, the background estimate is built up bin by bin.

The window size, defined in terms of the window half-width, i.e. the number of mass bins to the left and right of a window center, is chosen to be the largest possible window that satisfies two statistical measures. The determination of the window size begins with a window half-width of 24 bins, making the window slightly larger than half of the expected invariant mass spectrum for the signal regions, and in line with the window size used in the previous 37.0~\ifb\ iteration of the search. The sliding window fit procedure is then performed using a four-parameter version of Eq.~\ref{Eq:fitfunction}. The quality of the fit to the data is considered to be good if it passes two metrics, which confirm that the individual windows as well as the global fit describe the data well:
\begin{itemize}
\item Global $\chi^2$ $p$-value $>$ 0.05
\item \BumpHunter\ $p$-value $>$ 0.01
\end{itemize}
If these criteria are both met for the window size and fit function, the background is selected for use in the limit machinery. If it fails either one of these metrics, the SWiFt method evolves until a fit that satisfies both metrics is found. First, the 5-parameter version of Eq.~\ref{Eq:fitfunction} is used instead of the 4-parameter version and the fit is re-run. If this fit also fails the criteria, the window half-width is reduced by 2 (thus, the whole window shrinks by four bins) and the fit is repeated with the 4-parameter function. This process, which alternates between increasing the number of fit parameters and reverting to the smaller parameter set with a reduced window size, is repeated until a satisfactory fit is obtained. An exception to this rule applies if the \BumpHunter~$p$-value indicates a signal. In this case, the S+B fit is repeated to assess the level of compatibility with the model when considering the full systematic uncertainties. The difference between the nominal background (using the n-parameter function) and the alternate background (using the n+1 parameter function) will later be assessed as a systematic uncertainty.

\todo[inline]{Need to check SWiFt with gluon selection applied. Update for new pseudo-data method.}
{\textit {\textcolor{red}{ The background making procedure has been validated using pseudo-data sets derived from the nominal SWiFt background. While ideally this would be performed on MC, its limited statistics compared to the data prevent any useful conclusions from being drawn. Several pseudo-data sets have been evaluated by taking the fit result from the 2015+2016 data (37\ifb), scaling up to the expected full Run-2 dataset, Poisson-fluctuating the background to obtain a data-like spectrum, and running the fit procedure over this pseudo-data set. This procedure has shown that the starting function can be reasonably expected to perform well for the full Run-2 dataset.}}}

The background-only hypothesis is used as input to \BumpHunter\, which identifies whether there is a significant excess in the data. If \BumpHunter\ shows good compatibility between the data and the background-only hypothesis (\BumpHunter\ $p$-value larger than 0.01), exclusion limits are set. An S+B fit is run for different signal hypotheses.
The signal template at each mass window is determined by the morphing procedure described in
\subsection{Pseudo-data for Validation}
\label{sec:pseudo}
Pseudo-data for finding and validating the optimal gluon selection are created using the \todo{Check that using good fit to full data with appropriate settings} SWiFt fit result from the full Run 2 dataset, scaled by the smoothed fraction of events that pass the selection criteria in the simulated \QCD\ dataset. The SWiFt fit used in the creation of the pseudo-data is shown in Fig.~\ref{fig:SWiFt_Run2}. The fraction of \QCD\ events that pass the gluon-gluon selection criteria is smoothed using Friedman's ``super smoother'' with maximum smoothness. The choice of smoother is not very important, as the fraction changes slowly over a small range. For a gluon-gluon selection efficiency of 75\% per jet, the fraction runs from 27\% at 1.1\,\TeV\ to 13\% at 8\,\TeV\ (Fig.~\ref{fig:Smoothed_GG_Fraction}).

\begin{figure}[htb]
\centering
\includegraphics[width=0.75\textwidth]{figures/04-BackgroundEstimation/SWiFtData15-18.pdf}
\caption{SWiFt fit to the Run 2 data set used for creating pseudo-data. \label{fig:SWiFt_Run2}}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=0.75\textwidth]{figures/04-BackgroundEstimation/fracSmoothSelected_GG_New_Selection5.pdf}
\caption{The smoothed fraction of events that pass the gluon selection with 75\% efficiency. \label{fig:Smoothed_GG_Fraction}}
\end{figure}

An example of a pseudo-data set created with a gluon efficiency of 75\% and run through SWiFt and \BumpHunter\ is shown in Figures~\ref{fig:SWiFtPD_75percentGG} and \ref{fig:figure1_GG_PD_75percent}, which show that SWiFt does a good job of fitting the gluon-selected background distribution.

\begin{figure}[htb]
\centering
\includegraphics[width=0.75\textwidth]{figures/04-BackgroundEstimation/SWiFtPD_75percentGG}
\caption{SWiFt fit to pseudo-data for a two-gluon selection with 75\% efficiency. \label{fig:SWiFtPD_75percentGG}}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=0.75\textwidth]{figures/04-BackgroundEstimation/figure1_GG_PD_75percent.pdf}
\caption{\BumpHunter\ run on pseudo-data for a two-gluon selection with 75\% efficiency.\label{fig:figure1_GG_PD_75percent}}
\end{figure}
{ "alphanum_fraction": 0.7650537634, "avg_line_length": 81.5789473684, "ext": "tex", "hexsha": "326b65e33289cf1ac48421c028c6b3d3e03cd466", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "45b1a7d88ca7b15f19ec25270b6fbbebd839fa0b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "krybacki/IntNote2", "max_forks_repo_path": "include/oldMaterial/04-BackgroundEstimation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "45b1a7d88ca7b15f19ec25270b6fbbebd839fa0b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "krybacki/IntNote2", "max_issues_repo_path": "include/oldMaterial/04-BackgroundEstimation.tex", "max_line_length": 853, "max_stars_count": null, "max_stars_repo_head_hexsha": "45b1a7d88ca7b15f19ec25270b6fbbebd839fa0b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "krybacki/IntNote2", "max_stars_repo_path": "include/oldMaterial/04-BackgroundEstimation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2636, "size": 9300 }
\documentclass[10pt]{beamer} \usetheme{metropolis} % \usecolortheme{beaver} % \usecolortheme{whale} \usepackage{appendixnumberbeamer} \usepackage{booktabs} \usepackage[scale=2]{ccicons} \usepackage{pgfplots} \usepgfplotslibrary{dateplot} \usepackage{xspace} \newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace} \input{usepackages.tex} \input{newcommands.tex} \newcommand{\blue}[1]{{\color[rgb]{0, 0, 1} #1}} \title{Iris-CF} \subtitle{Iris Program Logic for Control Flow} \date{\today} \author{Wang Zhongye} % \institute{Center for modern beamer themes} % \titlegraphic{\hfill\includegraphics[height=1.5cm]{logo.pdf}} \begin{document} \maketitle \begin{frame}{Table of contents} \setbeamertemplate{section in toc}[sections numbered] \tableofcontents[hideallsubsections] \end{frame} \section{Introduction} \begin{frame} \frametitle{What is Iris?} \ Iris \cite{jung2017iris} is a Higher-Order Concurrent Separation Logic Framework, implemented and verified in the Coq proof assistant. \ \input{fig/iris.tex} \end{frame} \begin{frame} \frametitle{Why Iris-CF?} Iris does not support reasoning about control flows very well. \ One possible way to enable this is through continuation \cite{timany2019mechanized}, i.e., use try-catch style control flow. \ \onslide<2->{ Our approach is like the multi-post-condition judgement in VST \cite{VST,VST-Floyd}. \begin{mathpar} \inferrule[Seq]{ \vdash \triple{P}{e_1}{R, [R_\ekb, R_\ekc, R_\ekr]} \and \vdash \triple{R}{e_2}{Q, [R_\ekb, R_\ekc, R_\ekr]} }{ \vdash \triple{P}{e_1 \cmdseq e_2}{Q, [R_\ekb, R_\ekc, R_\ekr]} } \end{mathpar} } % \begin{figure}[h] % $ % (\cmdif x == 1 \cmdthen \cmdcontinue) \cmdseq e % $ % $\Downarrow$ % $ % \Big(\cmdif x == 1 \cmdthen \text{\blue{throw} $()$ \blue{to} } (\cmdif x == 1 \cmdthen \cdots) \cmdseq e\Big) \cmdseq e % $ % \end{figure} \end{frame} \section{Programming Language} \begin{frame} \frametitle{Values} $$ v \in \textit{Val} ::=\, () \sep z \sep \true \sep \false \sep l \sep \lambda x.e \sep \cdots $$ \end{frame} \begin{frame} \only<1,2,5,6>{ \frametitle{Expressions} $$ \begin{aligned} e \in \textit{Expr} ::=\, & v \sep x \sep e_1(e_2) \\ \sep\, & \cmdref(e) \sep !e \sep e_1 \leftarrow e_2 \\ \sep\, & \cmdfork{e} \sep e_1 \cmdseq e_2 \sep \cmdif e_1 \cmdthen e_2 \cmdelse e_3 \\ \onslide<2->{\sep\, & \blue{\cmdloop{e} e \sep \cmdbreak e \sep \cmdcontinue} \\} \onslide<5->{\sep\, & \blue{\cmdcall e \sep \cmdreturn e} \sep \cdots \\} \end{aligned} $$ \only<6>{ $$ \begin{aligned} \textit{FACT}' &\triangleq \lambda f. \lambda n. 
(\cmdif n = 0 \cmdthen \cmdreturn 1) \cmdseq n \times (\cmdcall (f\, f\, (n - 1))) \\ \textit{FACT} &\triangleq \textit{FACT}'\,\textit{FACT}' \end{aligned} $$ } } \only<3,4>{ \frametitle{Evaluation Context} $$ \begin{aligned} K \in \textit{Ctx} ::=\, & [] \sep K(e) \sep v(K) \\ \sep\, & \cmdref(K) \sep !K \sep K \leftarrow e \sep v \leftarrow K \\ \sep\, & K \cmdseq e \sep \cmdif K \cmdthen e_2 \cmdelse e_3 \\ \sep\, & \blue{\cmdloop{e} K \sep \cmdbreak K} \sep \cdots \end{aligned} $$ \onslide<4>{ $$ \begin{aligned} \cmdloop{e} e = \cmdloop{e}[&e] && \rightarrow & \cmdloop{e}[&e'] = \cmdloop{e} e' \\ &e && \rightarrow & &e' \end{aligned} $$ } } \end{frame} \begin{frame} \frametitle{Evaluation Context} $$ \begin{aligned} K \in \textit{Ctx} ::=\, & [] \sep K(e) \sep v(K) \\ \sep\, & \cmdref(K) \sep !K \sep K \leftarrow e \sep v \leftarrow K \\ \sep\, & K \cmdseq e \sep \cmdif K \cmdthen e_2 \cmdelse e_3 \\ \sep\, & \blue{\cmdloop{e} K \sep \cmdbreak K \sep \cmdcall K \sep \cmdreturn K} \sep \cdots \end{aligned} $$ \end{frame} \begin{frame} \frametitle{Small Step Semantics} $$ \begin{array}{c} (e, \sigma) \hred (e', \sigma', \vec{e}_f) \\ \text{where $\sigma, \sigma' \in \mathrm{state}$ and $\vec{e}_f \in \mathrm{list} \textit{ Expr}$} \end{array} $$ \begin{mathpar} \inferrule*[]{ (e, \sigma) \hred (e', \sigma', \vec{e}_f) }{ (K[e], \sigma) \tred (K[e'], \sigma', \vec{e}_f) } \end{mathpar} $$ \text{Here $\tred$ is thread local reduction.} $$ \end{frame} \begin{frame} \frametitle{Small Step Semantics: Loop} $$ \begin{aligned} (\cmdloop{e} v, \sigma) &\hred (\cmdloop{e} e, \sigma, \epsilon) && \\ (\cmdloop{e} (\cmdcontinue), \sigma) &\hred (\cmdloop{e} e, \sigma, \epsilon) && \\ \onslide<2->{(\cmdloop{e} (\cmdbreak v), \sigma) &\hred (v, \sigma, \epsilon) && \\} \onslide<3->{ (K[\cmdbreak v], \sigma) &\hred (\cmdbreak v, \sigma, \epsilon) && \text{if } K \in \pure{\cmdbreak\!} \\ (K[\cmdcontinue], \sigma) &\hred (\cmdcontinue, \sigma, \epsilon) && \text{if } K \in \pure{\cmdcontinue} } \end{aligned} $$ \onslide<3->{ $$ \begin{aligned} \text{where } \pure{\cmdbreak\!} &= \pure{\cmdcontinue}\\ &\triangleq \textit{Ctx}^1\backslash\cmdloop{e}[]\backslash\cmdcall[] \end{aligned} $$ } \end{frame} \begin{frame} \frametitle{Small Step Semantics: Call} $$ \begin{aligned} (\cmdcall v, \sigma) &\hred (v, \sigma, \epsilon) \\ (\cmdcall (\cmdreturn v), \sigma) &\hred (v, \sigma, \epsilon) \\ (K[\cmdreturn v], \sigma) &\hred (\cmdreturn v, \sigma, \epsilon) && \text{if } K \in \pure{\cmdreturn\!} \end{aligned} $$ $$ \begin{aligned} \text{where } \pure{\cmdreturn\!} \triangleq \textit{Ctx}^1\backslash\cmdcall[] \end{aligned} $$ \end{frame} \section{Program Logic: Iris} \begin{frame} \frametitle{Weakest Precondition (Iris)} \only<1,2>{ $$ \triple{P}{e}{v.\, Q(v)} $$ $$ \Updownarrow $$ $$ \only<1>{P \Rightarrow \WP\, e \{v.\, Q(v)\}} \only<2>{\Box (P \wand \WP\, e \{v.\, Q(v)\})} $$ } \only<3>{ $$ \begin{aligned} \sigma \vDash \WP\, e \{\Phi\} &\text{ iff. 
} (e \in \textit{Val} \land \sigma \vDash \Phi(e))\\ &\,\,\lor \bigg(e \notin \textit{Val} \land \mathrm{reducible}(e, \sigma) \\ &\,\,\quad \land \forall e_2, \sigma_2.\, \big((e, \sigma) \tred (e_2, \sigma_2, \epsilon)\big) \Rightarrow \sigma_2 \vDash \WP\, e_2 \{\Phi\} \bigg) \end{aligned} $$ } \end{frame} \begin{frame} \frametitle{Proof in Iris} \begin{mathpar} \inferrule*[]{ \onslide<2->{ \inferrule*[]{ \onslide<3->{ \blue{P \vdash \WP\,e\{\Phi'\}} }\\\\ \onslide<3->{ \Phi'(v) \vdash \WP\,K[v]\{\Phi\} } % } }{ \onslide<2->{P \vdash \WP\,e\{v.\,\WP\,K[v]\{\Phi\}\}} } } ~~ \onslide<2->{\blue{\WP\,e\{v.\WP\, K[v] \{\Phi\}\} \vdash \WP\,K[e]\{\Phi\}}} }{ P \vdash \WP\,K[e]\{\Phi\} } \end{mathpar} \end{frame} \section{Program Logic: Iris-CF} \begin{frame} \frametitle{Weakest Precondition (Iris-CF)} $$ \begin{aligned} \sigma \vDash \wpre{e}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R} &\text{ iff. } (e \in \textit{Val} \land \sigma \vDash \Phi_N(e)) \\ & \lor \blue{(\exists v \in \textit{Val}.\, e = \cmdbreak v \land \sigma \vDash \Phi_B(v))} \\ & \lor \blue{(e = \cmdcontinue \land \sigma \vDash \Phi_C())} \\ & \lor \blue{(\exists v \in \textit{Val}.\, e = \cmdreturn v \land \sigma \vDash \Phi_R(v))} \\ & \lor \biggl(e \notin \text{terminals} \land \cred(e, \sigma) \\ & \quad \land \forall e', \sigma'. \bigl((e, \sigma) \tred (e', \sigma', \epsilon)\bigr) \Rightarrow \\ & \quad\quad \sigma' \vDash \wpre{e'}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R} \biggr) \end{aligned} $$ \end{frame} \begin{frame} \frametitle{Proof Rules: Basics} $$ \begin{aligned} \vdash \triple{\Phi_B(v)}{&\cmdbreak v}{{\bot},[{\Phi_B},{\bot},{\bot}]} \\ \vdash \triple{\Phi_C()}{&\cmdcontinue}{{\bot},[{\bot},{\Phi_C},{\bot}]} \\ \vdash \triple{\Phi_R(v)}{&\cmdreturn v}{{\bot},[{\bot},{\bot},{\Phi_R}]} \end{aligned} $$ $$ \Downarrow $$ $$ \begin{aligned} \Phi_B(v) &\vdash \wpre{\cmdbreak v}{\bot}{\Phi_B}{\bot}{\bot} \\ \Phi_C() &\vdash \wpre{\cmdcontinue}{\bot}{\bot}{\Phi_C}{\bot} \\ \Phi_R(v) &\vdash \wpre{\cmdreturn v}{\bot}{\bot}{\bot}{\Phi_R} \end{aligned} $$ \end{frame} \begin{frame} \frametitle{Proof Rules: Loop \& Call} \begin{mathpar} \inferrule*[]{ \vdash \triple{I}{e}{I,[\Phi_B,I,\Phi_R]} }{ \vdash \triple{I}{(\cfor{e}{})}{\Phi_B, [\bot, \bot, \Phi_R]} } \end{mathpar} $$ \onslide<2->{ \Downarrow } $$ $$ \onslide<2->{ \begin{aligned} \Box(I \wand{}\, \wpre{e}{\_. I}{\Phi_B}{\_. I}{\Phi_R}) * I &\vdash \wpre{(\cmdloop{e}e)}{\Phi_B}{\bot}{\bot}{\Phi_R} \end{aligned} } $$ \onslide<3-> $$ \wpre{e}{\Phi}{\bot}{\bot}{\Phi} \vdash \wpre{\cmdcall e}{\Phi}{\bot}{\bot}{\bot} $$ \end{frame} \begin{frame} \frametitle{Proof Rule: Bind} $$ \WP\,e\{v.\WP\, K[v] \{\Phi\}\} \vdash \WP\,K[e]\{\Phi\} $$ $$ \onslide<2->{ \Downarrow } $$ $$ \onslide<2->{ \only<2>{ \begin{aligned} \WP\,e\,& \progspec{v. \wpre{K[v]}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R}} \\ & \progspec{v. K \in \pure{\cmdbreak\!} \land \Phi_B(v)} \\ & \progspec{v. K \in \pure{\cmdcontinue} \land \Phi_C(v)} \\ & \progspec{v. K \in \pure{\cmdreturn\!} \land \Phi_R(v)} \end{aligned} \vdash \wpre{K[e]}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R} } \onslide<3>{ \begin{aligned} \WP\,e\,& \progspec{v. \wpre{K[v]}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R}} \\ & \progspec{v. \blue{K \in \pure{\cmdbreak\!}} \land \Phi_B(v)} \\ & \progspec{v. \blue{K \in \pure{\cmdcontinue}} \land \Phi_C(v)} \\ & \progspec{v. 
\blue{K \in \pure{\cmdreturn\!}} \land \Phi_R(v)} \end{aligned} \vdash \wpre{K[e]}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R} } } $$ \onslide<3>{ If \blue{blue} assertions are invalidated, we should use other rules to prove the consequence. } \end{frame} \begin{frame} \frametitle{Proof Rule: Sequence} $$ \begin{aligned} \WP\,e\,& \progspec{v. \wpre{K[v]}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R}} \\ & \progspec{v. K \in \pure{\cmdbreak\!} \land \Phi_B(v)} \\ & \progspec{v. K \in \pure{\cmdcontinue} \land \Phi_C(v)} \\ & \progspec{v. K \in \pure{\cmdreturn\!} \land \Phi_R(v)} \end{aligned} \vdash \wpre{K[e]}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R} $$ $$ \Downarrow $$ $$ \begin{aligned} \WP\,e_1\,& \progspec{\_.\wpre{e_2}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R}} \\ & \progspec{\Phi_B}\,\progspec{\Phi_C}\,\progspec{\Phi_R} \end{aligned} \vdash \wpre{e_1 \cmdseq e_2}{\Phi_N}{\Phi_B}{\Phi_C}{\Phi_R} $$ \end{frame} \section{Conclusion} \begin{frame} \frametitle{Contextual Local Reasoning} \begin{mathpar} \inferrule[Hoare-Bind]{ \vdash \triple{P}{e}{\Psi} \\ \forall v.\, \triple{\Psi(v)}{K[e]}{\Phi} }{ \vdash \triple{P}{K[e]}{\Phi} } \end{mathpar} \begin{center} Untangle the proof of $e$ and $K$! \end{center} \end{frame} \begin{frame} \frametitle{Our Achievement} \begin{itemize} \item A lambda calculus like language supporting control flow and contextual local reasoning. \item A program logic build on Iris and allowing contextual local reasoning about control flow. \end{itemize} \end{frame} \begin{frame} \frametitle{Compare with Continuation} $$ \begin{aligned} (K\big[\text{\blue{call/cc}}(x.\,e)\big], \sigma) &\rightarrow (K\big[e[\mathrm{cont}(K)/x]\big], \sigma) \\ (K[\text{\blue{throw} $v$ \blue{to} } \mathrm{cont}(K')], \sigma) &\rightarrow (K'[v], \sigma) \end{aligned} $$ \begin{mathpar} \inferrule[callcc-wp]{ \WP\,K\big[e[\mathrm{cont}(K)/x]\big]\{\Phi\} }{ \WP\,K\big[\text{\blue{call/cc}}(x.\,e)\big]\{\Phi\} } \end{mathpar} \end{frame} \appendix \begin{frame}{References} \bibliography{paper.bib} \bibliographystyle{abbrv} \end{frame} \begin{frame}[standout] Questions? \end{frame} \begin{frame}{Backup slides} \end{frame} \end{document}
{ "alphanum_fraction": 0.5941792582, "avg_line_length": 23.674796748, "ext": "tex", "hexsha": "563ead7ea209191ef37176e5202ca77b9ab91b41", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "18e5ed7145d3364382a72e9ab31bd77be9913759", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "BruceZoom/Iris-ControlFlow", "max_forks_repo_path": "docs/Slides/slides.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "18e5ed7145d3364382a72e9ab31bd77be9913759", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "BruceZoom/Iris-ControlFlow", "max_issues_repo_path": "docs/Slides/slides.tex", "max_line_length": 151, "max_stars_count": null, "max_stars_repo_head_hexsha": "18e5ed7145d3364382a72e9ab31bd77be9913759", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "BruceZoom/Iris-ControlFlow", "max_stars_repo_path": "docs/Slides/slides.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4648, "size": 11648 }
\documentclass{article} \usepackage{amssymb} \usepackage{comment} \usepackage{courier} \usepackage{fancyhdr} \usepackage{fancyvrb} \usepackage[T1]{fontenc} \usepackage[top=.75in, bottom=.75in, left=.75in,right=.75in]{geometry} \usepackage{graphicx} \usepackage{lastpage} \usepackage{listings} \lstset{basicstyle=\small\ttfamily} \usepackage{mdframed} \usepackage{parskip} \usepackage{ragged2e} \usepackage{soul} \usepackage{upquote} \usepackage{xcolor} % http://www.monperrus.net/martin/copy-pastable-ascii-characters-with-pdftex-pdflatex \lstset{ upquote=true, columns=fullflexible, literate={*}{{\char42}}1 {-}{{\char45}}1 {^}{{\char94}}1 } % http://tex.stackexchange.com/questions/40863/parskip-inserts-extra-space-after-floats-and-listings \lstset{aboveskip=6pt plus 2pt minus 2pt, belowskip=-4pt plus 2pt minus 2pt} \usepackage[colorlinks,urlcolor={blue}]{hyperref} \begin{document} \fancyfoot[L]{\color{gray} C4CS -- F'18} \fancyfoot[R]{\color{gray} Revision 1.1} \fancyfoot[C]{\color{gray} \thepage~/~\pageref*{LastPage}} \pagestyle{fancyplain} \title{\textbf{Advanced Homework 10\\}} \author{\textbf{\color{red}{Due: Wednesday, November 21st, 11:59PM (Hard Deadline)}}} \date{} \maketitle \section*{Submission Instructions} To receive credit for this assignment you will need to stop by someone's office hours, demo your running code, and answer some questions. \textbf{\color{red}{Make sure to check the office hour schedule as the real due date is at the last office hours before the date listed above.}} This applies to assignments that need to be gone over with a TA only. \textbf{Extra credit is given for early turn-ins of advanced exercises. These details can be found on the website under the advanced homework grading policy.} \medskip \section*{Docker} Docker is an \href{https://github.com/docker}{open source} tool designed to make it easy to build and run applications using containers. Containers allow developers to package an application with all of its dependencies and ship it as one image. As a result, developers can be assured that their program will run exactly how they expect it to on any machine that has Docker installed. In some ways Docker is quite similar to virtual machines like the one you've been using throughout the course as both give you a contained environment mostly isolated from the rest of your system. Some of the key differences, however, are that Docker containers are both faster and more lightweight than virtual machines. You can read more info on Docker and how its containers compare to VM's \href{https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/container-docker-introduction/docker-defined}{here}. To get started, install docker from \href{https://docs.docker.com/install/}{here}. To verify that your installation completed successfully, run \lstinline{docker run hello-world} in your terminal, you should see the following: \begin{lstlisting} $ docker run hello-world Unable to find image 'hello-world:latest' locally latest: Pulling from library/hello-world d1725b59e92d: Pull complete Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788 Status: Downloaded newer image for hello-world:latest Hello from Docker! This message shows that your installation appears to be working correctly. ... \end{lstlisting} \subsection*{The Assignment} \textbf{Set up a Docker environment with GDB installed and push it to the Docker Hub.} For many of your C++ projects (i.e. 
EECS 281 projects), you might want to use the latest version of GDB, which isn't available on macOS or Windows. One way to get around this is to build a Docker container to run GDB in. This allows you to compile, run, and debug the program locally, eliminating the need to sync to CAEN every time you want to test something out.

For this assignment, set up a Linux-based Docker container with GDB and push it to the \href{https://hub.docker.com/}{Docker Hub} (you will need an account for this). You can use any distro you want (\href{https://hub.docker.com/_/alpine/}{Alpine} is a nice 5 MB Linux distro used a lot with Docker), but it'll be simplest to build from an \href{https://hub.docker.com/_/ubuntu/}{Ubuntu} image.

Some links you might find useful are \href{https://docker-curriculum.com/}{this Docker tutorial} and \href{https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images}{this Dockerfile tutorial}.

\subsection*{Submission checkoff}
\begin{itemize}
	\item[$\square$] Show off your Dockerfile
	\item[$\square$] Use GDB on a project without copying the project files over to the container
	\item[$\square$] Show that you've got your image on the Docker Hub
\end{itemize}

\section*{\texttt{Expect} Respect}
This section touches on a handy scripting language called \texttt{expect} that can be used to automate interactions with remote servers. What we'll be doing is partially automating the login procedure for Michigan servers. We say partially because of the two-factor step during the login procedure. First you'll need to install \texttt{expect} for your system (Ubuntu and macOS both have packages available for download).

Some guiding steps this script should accomplish are:
\begin{enumerate}
	\item Prompt the user for their uniqname password and store it in a variable. This output should \textbf{not} be displayed on the screen when the user types in their password!
	\item Initiate a connection to CAEN and \texttt{send} the password previously provided by the user.
	\item Select one of the Duo options to fire off the push/call/text automatically.
\end{enumerate}

Although creating this script doesn't save much time during the CAEN login process, this tool can be useful for a lot of other tasks where you are using key-based access to servers and aren't prompted for passwords or Duo options. As always, there are multiple ways to accomplish some of the tasks above. A potential solution requires no more than about 20 lines of \texttt{expect} scripting.

\subsection*{Submission checkoff}
\begin{itemize}
	\item[$\square$] Explain why it's a bad idea to hardcode the password for the user into the expect script
	\item[$\square$] You'll almost certainly have \texttt{\textbackslash r} in your script. Explain what this means/does
	\item[$\square$] Show off your script working
	\begin{itemize}
		\item[$\square$] Prompting the user for their password (and not displaying it on the screen)
		\item[$\square$] Automatically selecting one of the Duo options
		\item[$\square$] Successfully logging the user into CAEN
	\end{itemize}
\end{itemize}
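To give you a feel for the overall shape, here is a deliberately rough sketch of such a script. The prompt strings, host name, and Duo option below are assumptions; check what CAEN actually prints for you and adjust the \texttt{expect} patterns (and your uniqname) accordingly.

\begin{lstlisting}
#!/usr/bin/expect -f
set timeout 60

# 1. Ask for the password without echoing it to the screen.
stty -echo
send_user "CAEN password: "
expect_user -re "(.*)\n"
set password $expect_out(1,string)
stty echo
send_user "\n"

# 2. Start the SSH session and send the password when prompted.
spawn ssh uniqname@login.engin.umich.edu
expect "Password: "
send "$password\r"

# 3. Automatically pick a Duo option (1 = push) when the menu appears.
expect "Passcode or option"
send "1\r"

# Hand the session over to the user once login completes.
interact
\end{lstlisting}

\end{document}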
{ "alphanum_fraction": 0.7660199882, "avg_line_length": 43.3375796178, "ext": "tex", "hexsha": "23a12dfff0f9c635ddceb8d363365e5b3789429d", "lang": "TeX", "max_forks_count": 349, "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:38:21.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-06T04:13:55.000Z", "max_forks_repo_head_hexsha": "4df2319f5d894a2fe3daef12d43772611130a9a0", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "wangjess/c4cs.github.io", "max_forks_repo_path": "static/f18/advanced/c4cs-wk10-advanced.tex", "max_issues_count": 622, "max_issues_repo_head_hexsha": "4df2319f5d894a2fe3daef12d43772611130a9a0", "max_issues_repo_issues_event_max_datetime": "2020-02-25T07:29:08.000Z", "max_issues_repo_issues_event_min_datetime": "2016-01-22T06:17:25.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "wangjess/c4cs.github.io", "max_issues_repo_path": "static/f18/advanced/c4cs-wk10-advanced.tex", "max_line_length": 248, "max_stars_count": 49, "max_stars_repo_head_hexsha": "4df2319f5d894a2fe3daef12d43772611130a9a0", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "wangjess/c4cs.github.io", "max_stars_repo_path": "static/f18/advanced/c4cs-wk10-advanced.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-08T03:21:28.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-05T02:42:14.000Z", "num_tokens": 1750, "size": 6804 }
\chapter{LITERATURE REVIEW}
\label{ch:litreview}

\section{Introduction}
This is a \LaTeX{} book~\parencite{urban1986introduction}. \textcite{lamport1994latex} wrote a good~\LaTeX{} book. \textcite{Knu86book} claimed that \ldots

\section{Previous Study on Problem B}
Three or more authors \parencite{mittelbach2004latex}. \textcite{mittelbach2004latex} has three or more authors. Two authors \parencite{kopka1995guide}. \textcite{kopka1995guide} has two authors. Cite more than one article \parencite{urban1986introduction,lamport1994latex}. Newspaper \parencite{afp_covid_2021}.

\section{Improved Method for Problem B}
\lipsum[4-5]
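As a final note on the citation commands used in this chapter: every key passed to the citation commands above must have a matching entry in the bibliography database loaded in the preamble. As an illustrative sketch (check the field values against the actual source), the entry behind \texttt{lamport1994latex} could look like this:
\begin{verbatim}
@book{lamport1994latex,
  author    = {Lamport, Leslie},
  title     = {LaTeX: A Document Preparation System},
  edition   = {2},
  publisher = {Addison-Wesley},
  year      = {1994}
}
\end{verbatim}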
{ "alphanum_fraction": 0.7965838509, "avg_line_length": 25.76, "ext": "tex", "hexsha": "05313c8ff5785566e4c42dc950c1e95d12455a0e", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-03-26T01:37:04.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-22T05:47:44.000Z", "max_forks_repo_head_hexsha": "1bbdd81b2e61273030c02604f592639d0fcd3bdf", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "rizauddin/uitmthesis", "max_forks_repo_path": "mainmatter/chapLiterature.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "1bbdd81b2e61273030c02604f592639d0fcd3bdf", "max_issues_repo_issues_event_max_datetime": "2021-03-26T04:24:43.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-26T04:24:43.000Z", "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "rizauddin/uitmthesis", "max_issues_repo_path": "mainmatter/chapLiterature.tex", "max_line_length": 156, "max_stars_count": 3, "max_stars_repo_head_hexsha": "1bbdd81b2e61273030c02604f592639d0fcd3bdf", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "rizauddin/uitmthesis", "max_stars_repo_path": "mainmatter/chapLiterature.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-21T09:02:01.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-01T11:39:00.000Z", "num_tokens": 203, "size": 644 }
% $Id$ % \subsection{Pacbox} \screenshot{plugins/images/ss-pacbox}{Pacbox}{img:pacbox} Pacbox is an emulator of the Pacman arcade machine hardware. It is a port of \emph{PIE -- Pacman Instructional Emulator} by Alessandro Scotti. \subsubsection{ROMs} To use the emulator to play Pacman, you need a copy of ROMs for ``Midway Pacman''. \begin{table} \begin{rbtabular}{0.8\textwidth}{lX}{\textbf{Filename} & \textbf{MD5 checksum}}{}{} pacman.5e & 2791455babaf26e0b396c78d2b45f8f6\\ pacman.5f & 9240f35d1d2beee0ff17195653b5e405\\ pacman.6e & 290aa5eae9e2f63587b5dd5a7da932da\\ pacman.6f & 19a886fcd8b5e88b0ed1b97f9d8659c0\\ pacman.6h & d7cce8bffd9563b133ec17ebbb6373d4\\ pacman.6j & 33c0e197be4c787142af6c3be0d8f6b0\\ \end{rbtabular} \end{table} These need to be stored in the \fname{/.rockbox/pacman/} directory on your \dap. In the MAME ROMs collection the necessary files can be found in \fname{pacman.zip} and \fname{puckman.zip}. The MAME project itself can be found at \url{http://www.mame.net}. \subsubsection{Keys} \begin{btnmap} % 20GB H10 and 5/6GB H10 have different direction key mappings to match the % orientation of the playing field on their different displays - don't use *_PAD ! \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD,% IPOD_3G_PAD,iriverh10,MROBE100_PAD,SANSA_FUZE_PAD,SAMSUNG_YH92X_PAD,% SAMSUNG_YH820_PAD}{\ButtonRight} \opt{GIGABEAT_PAD,GIGABEAT_S_PAD,SANSA_E200_PAD,PBELL_VIBE500_PAD% ,SANSA_FUZEPLUS_PAD}{\ButtonUp} \opt{iriverh10_5gb}{\ButtonScrollUp} \opt{COWON_D2_PAD}{\TouchTopMiddle} \opt{HAVEREMOTEKEYMAP}{& } & Move Up\\ \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD,% IPOD_3G_PAD,iriverh10,MROBE100_PAD,SANSA_FUZE_PAD,SAMSUNG_YH92X_PAD,% SAMSUNG_YH820_PAD}{\ButtonLeft} \opt{iriverh10_5gb}{\ButtonScrollDown} \opt{GIGABEAT_PAD,GIGABEAT_S_PAD,SANSA_E200_PAD,PBELL_VIBE500_PAD,% SANSA_FUZEPLUS_PAD}{\ButtonDown} \opt{COWON_D2_PAD}{\TouchBottomMiddle} \opt{HAVEREMOTEKEYMAP}{& } & Move Down\\ \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,MROBE100_PAD,SANSA_FUZE_PAD% ,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonUp} \opt{IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonMenu} \opt{iriverh10}{\ButtonScrollUp} \opt{iriverh10_5gb,GIGABEAT_PAD,GIGABEAT_S_PAD,SANSA_E200_PAD,PBELL_VIBE500_PAD% ,SANSA_FUZEPLUS_PAD}{\ButtonLeft} \opt{COWON_D2_PAD}{\TouchMidLeft} \opt{HAVEREMOTEKEYMAP}{& } & Move Left\\ \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,MROBE100_PAD,SANSA_FUZE_PAD% ,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonDown} \opt{IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonPlay} \opt{iriverh10}{\ButtonScrollDown} \opt{iriverh10_5gb,GIGABEAT_PAD,GIGABEAT_S_PAD,SANSA_E200_PAD,PBELL_VIBE500_PAD% ,SANSA_FUZEPLUS_PAD}{\ButtonRight} \opt{COWON_D2_PAD}{\TouchMidRight} \opt{HAVEREMOTEKEYMAP}{& } & Move Right\\ \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonRec} \opt{IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonSelect} \opt{IRIVER_H10_PAD}{\ButtonFF} \opt{SANSA_E200_PAD,SANSA_FUZE_PAD}{\ButtonSelect+\ButtonDown} \opt{GIGABEAT_PAD}{\ButtonA} \opt{MROBE100_PAD}{\ButtonDisplay} \opt{GIGABEAT_S_PAD,SANSA_FUZEPLUS_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonPlay} \opt{COWON_D2_PAD}{\TouchCenter} \opt{PBELL_VIBE500_PAD}{\ButtonOK} \opt{HAVEREMOTEKEYMAP}{& } & Insert Coin\\ \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD,IPOD_3G_PAD% ,SANSA_E200_PAD,SANSA_FUZE_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD% ,SANSA_FUZEPLUS_PAD}{\ButtonSelect} \opt{IRIVER_H10_PAD}{\ButtonRew} \opt{COWON_D2_PAD}{\TouchBottomLeft} \opt{PBELL_VIBE500_PAD}{\ButtonPlay} 
\opt{SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonFF} \opt{HAVEREMOTEKEYMAP}{& } & 1-Player Start\\ \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn} \opt{IPOD_4G_PAD,IPOD_3G_PAD}{n/a} \opt{IAUDIO_X5_PAD,IRIVER_H10_PAD,GIGABEAT_PAD,GIGABEAT_S_PAD}{\ButtonPower} \opt{SANSA_E200_PAD,PBELL_VIBE500_PAD}{\ButtonRec} \opt{MROBE100_PAD}{\ButtonMenu} \opt{SANSA_FUZEPLUS_PAD}{\ButtonBottomRight} \opt{COWON_D2_PAD}{\TouchBottomRight} \opt{SAMSUNG_YH92X_PAD}{n/a} \opt{SAMSUNG_YH820_PAD}{\ButtonRew} \opt{HAVEREMOTEKEYMAP}{& } & 2-Player Start\\ \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode} \opt{IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonSelect+\ButtonMenu} \opt{IAUDIO_X5_PAD,IRIVER_H10_PAD,MROBE100_PAD}{\ButtonPlay} \opt{SANSA_E200_PAD,SANSA_FUZEPLUS_PAD}{\ButtonPower} \opt{SANSA_FUZE_PAD}{\ButtonHome} \opt{GIGABEAT_PAD,GIGABEAT_S_PAD,COWON_D2_PAD,PBELL_VIBE500_PAD}{\ButtonMenu} \opt{SAMSUNG_YH92X_PAD}{\ButtonRew} \opt{SAMSUNG_YH820_PAD}{\ButtonRec} \opt{HAVEREMOTEKEYMAP}{& } & Menu\\ \end{btnmap}
{ "alphanum_fraction": 0.7448626653, "avg_line_length": 45.0917431193, "ext": "tex", "hexsha": "b9d9f0cb8d5173c4f5a15a9d35e787f7447f9c7d", "lang": "TeX", "max_forks_count": 15, "max_forks_repo_forks_event_max_datetime": "2020-11-04T04:30:22.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-21T13:58:13.000Z", "max_forks_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC", "max_forks_repo_path": "manual/plugins/pacbox.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2", "max_issues_repo_issues_event_max_datetime": "2018-05-18T05:33:33.000Z", "max_issues_repo_issues_event_min_datetime": "2015-07-04T18:15:33.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC", "max_issues_repo_path": "manual/plugins/pacbox.tex", "max_line_length": 92, "max_stars_count": 24, "max_stars_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC", "max_stars_repo_path": "manual/plugins/pacbox.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-05T14:09:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-10T08:43:56.000Z", "num_tokens": 1936, "size": 4915 }
\documentclass[12pt, a4paper]{article} \usepackage[margin=1.1in]{geometry} \usepackage{indentfirst, setspace, titlesec} \usepackage{graphicx} \usepackage{float} \usepackage{graphicx} \usepackage{mathptmx} \usepackage[backend=biber, sorting=nyt, uniquename=init, style=bath, maxcitenames=3, maxbibnames=99, giveninits=true]{biblatex} \DeclareNameAlias{sortname}{family-given} %\addbibresource{Reference.bib} \title{Title} \author{Yau Siu Fung Brian} \date{} % Avoid work breaking \tolerance=1 \emergencystretch=\maxdimen \hyphenpenalty=10000 \hbadness=10000 % Section Format \renewcommand{\thesection}{\arabic{section}} \titleformat*{\section}{\normalfont\large\bfseries} \titlespacing*{\section}{0pt}{5pt}{8pt} \titleformat*{\subsection}{\normalfont\normalsize\bfseries\itshape} \titlespacing*{\subsection}{0pt}{4pt}{5pt} \titleformat*{\subsubsection}{\normalfont\normalsize\itshape} \titlespacing*{\subsubsection}{0pt}{2pt}{0pt} % Hyperlink % \usepackage[colorlinks=true,urlcolor=blue,citecolor=black,linkcolor=black]{hyperref} \begin{document} \maketitle \onehalfspace \bigskip \section{} \end{document}
{ "alphanum_fraction": 0.7763975155, "avg_line_length": 22.54, "ext": "tex", "hexsha": "9ad1afe3fc8944e8f7cbab1665cd59b6f969967b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0b84e16fde77bf8bf11c48bd3e9ed49c4a9afce2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "brianyau0309/BLFM", "max_forks_repo_path": "vim/template/template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0b84e16fde77bf8bf11c48bd3e9ed49c4a9afce2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "brianyau0309/BLFM", "max_issues_repo_path": "vim/template/template.tex", "max_line_length": 86, "max_stars_count": null, "max_stars_repo_head_hexsha": "0b84e16fde77bf8bf11c48bd3e9ed49c4a9afce2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "brianyau0309/BLFM", "max_stars_repo_path": "vim/template/template.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 377, "size": 1127 }
% -*- latex -*- \subsubsection{\stid{4.13} ECP/VTK-m} \paragraph{Overview} The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on Exascale architectures. The ECP/VTK-m project fills the critical feature gap of performing visualization and analysis on processors like graphics-based processors and many integrated core. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent as well as in stand-alone form. Moreover, these projects are depending on this ECP effort to be able to make effective use of ECP architectures. One of the biggest recent changes in high-performance computing is the increasing use of accelerators. Accelerators contain processing cores that independently are inferior to a core in a typical CPU, but these cores are replicated and grouped such that their aggregate execution provides a very high computation rate at a much lower power. Current and future CPU processors also require much more explicit parallelism. Each successive version of the hardware packs more cores into each processor, and technologies like hyper threading and vector operations require even more parallel processing to leverage each core's full potential. VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures. VTK-m supports the fine-grained concurrency for data analysis and visualization algorithms required to drive extreme scale computing by providing abstract models for data and execution that can be applied to a variety of algorithms across many different processor architectures. The ECP/VTK-m project is building up the VTK-m codebase with the necessary visualization algorithm implementations that run across the varied hardware platforms to be leveraged at the Exascale. We will be working with other ECP projects, such as ALPINE, to integrate the new VTK-m code into production software to enable visualization on our HPC systems. \paragraph{Key Challenges} The scientific visualization research community has been building scalable HPC algorithms for over 15 years, and today there are multiple production tools that provide excellent scalability. However, our current visualization tools are based on a message-passing programming model. More to the point, they rely on a coarse decomposition with ghost regions to isolate parallel execution \cite{Ahrens2001,Childs2010}. However, this decomposition works best when each processing element has on the order of a hundred thousand to a million data cells \cite{ParaViewTutorial} and is known to break down as we approach the level of concurrency needed on modern accelerators \cite{Moreland2012:Ultravis,Moreland2013:UltraVis}. DOE has made significant investments in HPC visualization capabilities. For us to feasibly update this software for the upcoming Exascale machines, we need to be selective on what needs to be updated, and we need to maximize the code we can continue to use. Regardless, there is a significant amount of software to be engineered and implemented, so we need to extend our development resources by simplifying algorithm implementation and providing performance portability across current and future devices. \paragraph{Solution Strategy} The ECP/VTK-m project leverages VTK-m \cite{Moreland2016:VTKm} to overcome these key challenges. VTK-m has a software framework that provides the following critical features. 
\begin{enumerate} \item \textbf{Visualization building blocks:} VTK-m contains the common data structures and operations required for scientific visualization. This base framework simplifies the development of visualization algorithms \cite{VTKmUsersGuide}. \item \textbf{Device portability:} VTK-m uses the notion of an abstract device adapter, which allows algorithms written once in VTK-m to run well on many computing architectures. The device adapter is constructed from a small but versatile set of data parallel primitives, which can be optimized for each platform \cite{Blelloch1990}. It has been shown that this approach not only simplifies parallel implementations, but also allows them to work well across many platforms \cite{Lo2012,Larsen2015,Moreland2015}. \item \textbf{Flexible integration:} VTK-m is designed to integrate well with other software. This is achieved with flexible data models to capture the structure of applications' data \cite{Meredith2012} and array wrappers that can adapt to target memory layouts \cite{Moreland2012:PDAC}. \end{enumerate} Even with these features provided by VTK-m, we have a lot of work ahead of us to be ready for Exascale. Our approach is to incrementally add features to VTK-m and expose them in tools like ParaView and VisIt. \begin{figure}[t] \centering \includegraphics[width=2in]{projects/2.3.4-DataViz/2.3.4.13-ECP-VTK-m/VTKm-Multiblock}\quad \includegraphics[width=2in]{projects/2.3.4-DataViz/2.3.4.13-ECP-VTK-m/VTKm-Gradients}\quad \includegraphics[width=2in]{projects/2.3.4-DataViz/2.3.4.13-ECP-VTK-m/VTKm-FieldToColors} \caption{ Examples of recent progress in VTK-m include (from left to right) multiblock data structures, gradient estimation, and mapping of fields to colors. } \label{fig:VTKmRecent} \end{figure} \paragraph{Recent Progress} The VTK-m project is organized into many implementation activities. The following features have been completed in the past 12 months. \begin{itemize} \item \textbf{Key Reduce Worklet:} This adds a basic building block to VTK-m that is very useful in constructing algorithms that manipulate or generate topology \cite{Miller2014}. \item \textbf{Spatial Division:} Introductory algorithms to divide space based on the distribution of geometry within it. This is an important step in building spatial lookup structures. \item \textbf{Basic Particle Advection:} Particle advection traces the path of particles in a vector field. This tracing is fundamental for many flow visualization techniques. Our initial implementation works on simple structures \item \textbf{Surface Normals:} Normals, unit vectors that point perpendicular to a surface, are important to provide shading of 3D surfaces while rendering. These often need to be derived from the geometry itself. \item \textbf{Multiblock Data:} Treat multiple blocks of data, such as those depicted in Figure \ref{fig:VTKmRecent} at left, as first-class data sets. Direct support of multiblock data not only provides programming convenience but also allows us to improve scheduling tasks for smaller groups of data. \item \textbf{Gradients:} Gradients, depicted in Figure \ref{fig:VTKmRecent} at center, are an important metric of fields and must often be derived using topological data. Gradients are also fundamental in finding important vector field qualities like divergence, vorticity, and q-criterion. 
\item \textbf{Field to Colors:} Pseudocoloring, demonstrated in Figure \ref{fig:VTKmRecent} at right, is a fundamental feature of scientific visualization, and it depends on a good mechanism for converting field data to colors. \item \textbf{VTK-m 1.1 Release:} VTK-m 1.1 was released in December 2017. \end{itemize} \paragraph{Next Steps} Our next efforts include: \begin{itemize} \item \textbf{External Surface:} Extracting the external faces of solid geometry is important for efficient solid rendering. \item \textbf{Location Structures:} Many scientific visualization algorithms require finding points or cells based on a world location. \item \textbf{Dynamic Types:} The initial implementation of VTK-m used templating to adjust to different data structures. However, when data types are not known at compile time, which is common in applications like ParaView and VisIt, templating for all possible combinations becomes infeasible. We will provide mechanisms to enable runtime polymorphism. \item \textbf{OpenMP:} Our current multicore implementation uses TBB \cite{TBB} for its multicore support. However, much of the code we wish to integrate with uses OpenMP \cite{OpenMP}, and the two threading implementations can conflict with each other. We will therefore add a device adapter to VTK-m that uses OpenMP so that this conflict does not arise. \end{itemize} \noindent {\tiny Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. \hfill SAND~2018-4009~R \par}
{ "alphanum_fraction": 0.7974698102, "avg_line_length": 74.9568965517, "ext": "tex", "hexsha": "1af06074dea7433c2a2c9b169ecb80a6c61cf40d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_forks_repo_path": "projects/2.3.4-DataViz/2.3.4.13-ECP-VTK-m/2.3.4.13-ECP-VTK-m.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_issues_repo_path": "projects/2.3.4-DataViz/2.3.4.13-ECP-VTK-m/2.3.4.13-ECP-VTK-m.tex", "max_line_length": 339, "max_stars_count": null, "max_stars_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_stars_repo_path": "projects/2.3.4-DataViz/2.3.4.13-ECP-VTK-m/2.3.4.13-ECP-VTK-m.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1919, "size": 8695 }
<% macro create_place(_p) %> <%- set p = _p.place -%> <%- set uris = _p.external_uris -%> <%- set alternative_names = _p.alternative_names -%> \section*{\hypertarget{place<<p.id>>}{\textsc{[p<< p.id >>]} << p.name | placename >>}} <% if p.place_type == 'Unknown' %> \emph{Unspecified place} <% else %> \emph{<< p.place_type | texsafe >>} <% endif %> at << p.geoloc | render_geoloc >> \emph{(<< p.confidence or 'unknown confidence of location' >>)} <%- if p.comment -%> : \emph{\enquote{<< p.comment | texsafe >>}} <%- else -%> . <%- endif %> << create_evidence_linklist(p.evidence_ids, before='', middle=' in this place: ', after='') >> <% if alternative_names %> \emph{<< p.name | placename >>} is also known as: \begin{itemize} <% for a in alternative_names %> \item \textbf{<< a.name | placename >>} <% if a.transcription %> \emph{(<< a.transcription | texsafe >>)} <% endif %> <% if a.language != 'Undefined' %> in \emph{<< a.language | texsafe >>} <% endif %> <% endfor %> \end{itemize} <% endif %> <% if uris %> \emph{<< p.name | placename >>} is linked to: \begin{itemize} <% for e in uris %> \item << e.name | texsafe >> <%- if e.comment -%> \footnote{<< e.comment | texsafe >>} <%- endif -%> : \url{<< e.uri >>} <% endfor %> \end{itemize} <% endif %> <% endmacro %>
{ "alphanum_fraction": 0.5403283369, "avg_line_length": 24.1551724138, "ext": "tex", "hexsha": "4c171ecc25dc82d2c6ac9419659ed816afdf4df7", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-04T09:04:47.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-04T09:04:47.000Z", "max_forks_repo_head_hexsha": "05ceb150e86227d81f0f4bb441af6d4f8bce65ee", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "UniStuttgart-VISUS/damast", "max_forks_repo_path": "damast/reporting/templates/reporting/tex/fragments/place.tex", "max_issues_count": 94, "max_issues_repo_head_hexsha": "05ceb150e86227d81f0f4bb441af6d4f8bce65ee", "max_issues_repo_issues_event_max_datetime": "2022-03-31T22:40:56.000Z", "max_issues_repo_issues_event_min_datetime": "2021-12-22T11:21:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "UniStuttgart-VISUS/damast", "max_issues_repo_path": "damast/reporting/templates/reporting/tex/fragments/place.tex", "max_line_length": 94, "max_stars_count": 2, "max_stars_repo_head_hexsha": "05ceb150e86227d81f0f4bb441af6d4f8bce65ee", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "UniStuttgart-VISUS/damast", "max_stars_repo_path": "damast/reporting/templates/reporting/tex/fragments/place.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-18T17:05:48.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-22T17:01:02.000Z", "num_tokens": 469, "size": 1401 }
\section{Introduction}
\label{sec:introduction}
%##########################################################
% SEC INTRODUCTION %
%##########################################################
\acp{wsn} are composed of a large number of nodes with sensing, computation, and wireless communication capability. These networks have tight computation, communication, and energy constraints. Many applications in \ac{wsn} need to transport large amounts of data (image, audio, video monitoring). These applications are not tolerant of data loss; thus, it is important to provide mechanisms to reliably collect data. \acp{wsn} have the following communication paradigms: \textit{many-to-one} (data collection), \textit{one-to-many} (data dissemination), and a more complex one that enables \textit{any-to-any} communication. The first two paradigms allow the collection and dissemination of data, respectively. However, with routing in only one direction, it is infeasible to build reliable mechanisms that ensure end-to-end data delivery. The \textit{any-to-any} communication paradigm allows communication between any pair of nodes in the network, but adds more complexity and also requires large amounts of memory to store all possible routes. In this work, we present \acf{xctp}, a routing protocol that is an extension of the \ac{ctp}. \ac{ctp} creates a routing tree to transfer data from one or more sensor nodes to a root (sink) node. However, \ac{ctp} does not create the reverse path between the root node and the sensors. This reverse path is important, for example, for feedback commands or acknowledgment packets. \ac{xctp} enables communication in both directions: root to node and node to root. \ac{xctp} requires little state storage and very low additional overhead in packets. Our main contributions are as follows: \begin{itemize} \item We propose \acl{xctp} (\ac{xctp}), which allows routing of messages in the reverse direction of \ac{ctp}, using little extra memory to store reverse routes. \item We compare the performance of \ac{xctp}, \ac{aodv}, \ac{rpl}, and \ac{ctp}. In the experiments, \ac{xctp} proved to be more reliable, efficient, agile, and robust. \item We show that it is possible to implement a reliable data transport protocol over \ac{xctp}. \end{itemize} \ac{ctp} optimizes data traffic towards the root and thus achieves a high packet delivery rate. However, our \ac{xctp} approach goes further, allowing bi-directional communication between sensor nodes and the root. \ac{xctp} and \textit{any-to-any} routing protocols enable reliable communication. However, \ac{xctp} reduces the cost of storing routes, since \ac{xctp} does not need to maintain routes to every peer. Our work is organized as follows. In the next section, we present work related to \ac{xctp}. In Section~\ref{sec:problem}, we formally define the problem being solved in this work. We describe the \ac{xctp} architecture in Section~\ref{sec:solution}. We compare \ac{xctp} with \ac{aodv}, \ac{rpl}, and \ac{ctp}, and present the simulation results in Section~\ref{sec:evaluation}. Finally, we conclude in Section~\ref{sec:conclusion}.
{ "alphanum_fraction": 0.738758708, "avg_line_length": 131.5833333333, "ext": "tex", "hexsha": "f3e0bb5cb296a8f5654241ec1c9879846a73fde0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f2d7e893d314924ea396824d0674ba4221802aa3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bps90/bps90.github.io", "max_forks_repo_path": "assets/files/papers/7/WCNC-2016/IEEE-WCNC-English-v3-03-10/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f2d7e893d314924ea396824d0674ba4221802aa3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bps90/bps90.github.io", "max_issues_repo_path": "assets/files/papers/7/WCNC-2016/IEEE-WCNC-English-v3-03-10/introduction.tex", "max_line_length": 629, "max_stars_count": null, "max_stars_repo_head_hexsha": "f2d7e893d314924ea396824d0674ba4221802aa3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bps90/bps90.github.io", "max_stars_repo_path": "assets/files/papers/7/WCNC-2016/IEEE-WCNC-English-v3-03-10/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 709, "size": 3158 }
% !TEX root = ../../main.tex
% !TEX spellcheck = en_US
\chapter{Doxygen documentation}
\label{sec:doxygen}
Since the Doxygen documentation would span several hundred pages and be hard to search, we refer to the online documentation found at: \url{http://bats.senth.org/doxygen/}. If the page is down and you want the documentation, please send an email to \href{mailto:[email protected]}{[email protected]} and he will send you a new working link.
{ "alphanum_fraction": 0.7705263158, "avg_line_length": 95, "ext": "tex", "hexsha": "14a5b59b1c0127c259a34f1ab3a9af28d607679a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "51d4ec39f3a118ed0eb90ec27a1864c0ceef3898", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Senth/bats", "max_forks_repo_path": "BATS/docs/thesis/chapters/appendices/doxygen.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "51d4ec39f3a118ed0eb90ec27a1864c0ceef3898", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Senth/bats", "max_issues_repo_path": "BATS/docs/thesis/chapters/appendices/doxygen.tex", "max_line_length": 366, "max_stars_count": null, "max_stars_repo_head_hexsha": "51d4ec39f3a118ed0eb90ec27a1864c0ceef3898", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Senth/bats", "max_stars_repo_path": "BATS/docs/thesis/chapters/appendices/doxygen.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 128, "size": 475 }
\documentclass[nohyper,justified]{tufte-handout}\usepackage[]{graphicx}\usepackage[]{color} %% maxwidth is the original width if it is less than linewidth %% otherwise use linewidth (to make sure the graphics do not exceed the margin) \makeatletter \def\maxwidth{ % \ifdim\Gin@nat@width>\linewidth \linewidth \else \Gin@nat@width \fi } \makeatother \definecolor{fgcolor}{rgb}{0.345, 0.345, 0.345} \newcommand{\hlnum}[1]{\textcolor[rgb]{0.686,0.059,0.569}{#1}}% \newcommand{\hlstr}[1]{\textcolor[rgb]{0.192,0.494,0.8}{#1}}% \newcommand{\hlcom}[1]{\textcolor[rgb]{0.678,0.584,0.686}{\textit{#1}}}% \newcommand{\hlopt}[1]{\textcolor[rgb]{0,0,0}{#1}}% \newcommand{\hlstd}[1]{\textcolor[rgb]{0.345,0.345,0.345}{#1}}% \newcommand{\hlkwa}[1]{\textcolor[rgb]{0.161,0.373,0.58}{\textbf{#1}}}% \newcommand{\hlkwb}[1]{\textcolor[rgb]{0.69,0.353,0.396}{#1}}% \newcommand{\hlkwc}[1]{\textcolor[rgb]{0.333,0.667,0.333}{#1}}% \newcommand{\hlkwd}[1]{\textcolor[rgb]{0.737,0.353,0.396}{\textbf{#1}}}% \usepackage{framed} \makeatletter \newenvironment{kframe}{% \def\at@end@of@kframe{}% \ifinner\ifhmode% \def\at@end@of@kframe{\end{minipage}}% \begin{minipage}{\columnwidth}% \fi\fi% \def\FrameCommand##1{\hskip\@totalleftmargin \hskip-\fboxsep \colorbox{shadecolor}{##1}\hskip-\fboxsep % There is no \\@totalrightmargin, so: \hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth}% \MakeFramed {\advance\hsize-\width \@totalleftmargin\z@ \linewidth\hsize \@setminipage}}% {\par\unskip\endMakeFramed% \at@end@of@kframe} \makeatother \definecolor{shadecolor}{rgb}{.97, .97, .97} \definecolor{messagecolor}{rgb}{0, 0, 0} \definecolor{warningcolor}{rgb}{1, 0, 1} \definecolor{errorcolor}{rgb}{1, 0, 0} \newenvironment{knitrout}{}{} % an empty environment to be redefined in TeX \usepackage{alltt} \usepackage{mathtools} %%\usepackage{marginnote} %%\usepackage[top=1in, bottom=1in, outer=5.5in, inner=1in, heightrounded, marginparwidth=1in, marginparsep=1in]{geometry} \usepackage{enumerate} %% mess with the fonts %%\usepackage{fontspec} %%\defaultfontfeatures{Ligatures=TeX} % To support LaTeX quoting style \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} % For package xtable \usepackage{booktabs} % Nice toprules and bottomrules \heavyrulewidth=1.5pt % Change the default to heavier lines \usepackage{longtable} %%\usepackage{tabularx} % To control the width of the table % this should make caption font bold. %%\usepackage{xstring} %%\usepackage{etoolbox} %%\usepackage{url} %% xetex only \usepackage{breakurl} \usepackage{float} % for fig.pos='H' %%\usepackage{wrapfig} %%\usepackage{tikz} \usepackage{colortbl,xcolor} \makeatletter % Paragraph indentation and separation for normal text \renewcommand{\@tufte@reset@par}{% \setlength{\RaggedRightParindent}{0pc}% \setlength{\JustifyingParindent}{0pc}% \setlength{\parindent}{0pc}% \setlength{\parskip}{3pt}% } \@tufte@reset@par % Paragraph indentation and separation for marginal text \renewcommand{\@tufte@margin@par}{% \setlength{\RaggedRightParindent}{0pc}% \setlength{\JustifyingParindent}{0pc}% \setlength{\parindent}{0pc}% \setlength{\parskip}{2pt}% } \makeatother \makeatletter \title{Descriptive Statistics -- Associations} \author{Kate Davis} \makeatother \newcommand{\dev}[1] {Dev_{\bar{#1}}} \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \begin{document} \widowpenalty=10000 \clubpenalty=10000 \section{Association of Hours Studied to Exam Grade} Six students enrolled in a reading section of organic chemistry are preparing for their first exam. 
How are the hours each student studied and their exam grade associated?

\section{Scatterplot}

A \textbf{Scatterplot} of exam grade versus hours studied shows the relationship between the two variables measured on the same observation, in this case a student.

\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}

{\centering \includegraphics[width=\maxwidth]{figure/graphics-scatterplotsxy-1}

}

\caption[A scatterplot of Hours Studied v Exam Grade shows a possible linear relationship]{A scatterplot of Hours Studied v Exam Grade shows a possible linear relationship}\label{fig:scatterplotsxy}
\end{figure}
\end{knitrout}

% latex table generated in R 3.1.2 by xtable 1.7-4 package
% Thu Feb 26 11:17:42 2015
\begin{table}[ht]
\centering
\begin{tabular}{rrr}
\hline
 & examgrade & studyhours \\
\hline
Min. & 57.0 & 1.0 \\
1st Qu. & 64.2 & 2.0 \\
Median & 71.5 & 2.5 \\
Mean & 72.2 & 3.2 \\
3rd Qu. & 80.2 & 4.5 \\
Max. & 88.0 & 6.0 \\
Sum Sq Deviation & 686.6 & 18.6 \\
Variance & 114.4 & 3.1 \\
Standard Deviation & 10.7 & 1.8 \\
\hline
\end{tabular}
\caption{Summary Statistics for Exam Grade and Hours Studied}
\end{table}

% latex table generated in R 3.1.2 by xtable 1.7-4 package
% Thu Feb 26 11:17:42 2015
\begin{table}[ht]
\centering
\begin{tabular}{rp{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}}
\toprule
 & Exam Grade & Hours Studied & $\dev{x}$ grade & $\dev{y}$ hours \\
\midrule
A & 82 & 6 & 9.8 & 2.8 \\
\rowcolor[gray]{0.95}B & 63 & 2 & -9.2 & -1.2 \\
C & 57 & 1 & -15.2 & -2.2 \\
\rowcolor[gray]{0.95}D & 88 & 5 & 15.8 & 1.8 \\
E & 68 & 3 & -4.2 & -0.2 \\
\rowcolor[gray]{0.95}F & 75 & 2 & 2.8 & -1.2 \\
\bottomrule
Total & 433.0& 19.0& 0.0& 0.0 \\
\rowcolor[gray]{0.95}Total/N & $\bar{x}=72.2$ & $\bar{y}= 3.2$ & 0.0 & 0.0 \\
\end{tabular}
\caption{Deviations of each observation from the means}
\end{table}

% latex table generated in R 3.1.2 by xtable 1.7-4 package
% Thu Feb 26 11:17:42 2015
\begin{table}[ht]
\centering
\begin{tabular}{p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}}
\toprule
 & Exam Grade & Hours Studied & $(\dev{x})^2$ & $(\dev{y})^2$ & $\dev{x}\dev{y}$ \\
\midrule
A & 82.0 & 6.0 & 96.0 & 7.8 & 27.4 \\
\rowcolor[gray]{0.95}B & 63.0 & 2.0 & 84.6 & 1.4 & 11.0 \\
C & 57.0 & 1.0 & 231.0 & 4.8 & 33.4 \\
\rowcolor[gray]{0.95}D & 88.0 & 5.0 & 249.6 & 3.2 & 28.4 \\
E & 68.0 & 3.0 & 17.6 & 0.0 & 0.8 \\
\rowcolor[gray]{0.95}F & 75.0 & 2.0 & 7.8 & 1.4 & -3.4 \\
\bottomrule
& & Total & 686.6& 18.6& 97.6 \\
\rowcolor[gray]{0.95}& & Total/N & $Var(X)=114.4$ & $Var(Y)= 3.1$ & $Cov(X,Y)=16.3$ \\
& & StdDev & $\sqrt{Var(X)}=10.7$ & $\sqrt{Var(Y)}= 1.8$ & \\
\end{tabular}
\caption{Squared deviations and their products}
\end{table}

\section{Covariance}

The \textbf{Covariance}, a measure of the strength of the association between any two variables $X$ and $Y$, denoted $Cov(X,Y)$, is calculated by first multiplying the deviations from their means, $\dev{x}$ and $\dev{y}$, then summing over all observations and dividing by $N$, the number of observations. This is very similar to the population variance calculation, and the variance can be thought of as the covariance of a variable with itself, i.e.\ $Var(X)=Cov(X,X)$.

\begin{equation*}
Cov(X,Y)=\frac{\Sigma_{i=1}^{N} \dev{x}\dev{y}}{N}
\end{equation*}

The Covariance of Hours Studied with Exam Grade is 16.3 ``hours $\times$ grade points''. These units make very little sense. We cannot compare covariances among variables in a data set if the units are different.
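As an arithmetic check, substituting the column totals from the deviation table above into this definition reproduces the reported value (up to rounding):

\begin{equation*}
Cov(X,Y)=\frac{\Sigma_{i=1}^{N} \dev{x}\dev{y}}{N}=\frac{97.6}{6}\approx 16.3
\end{equation*}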
\section{Linear Correlation}

A standardized Covariance is the \textbf{Linear Correlation}, calculated by dividing the Covariance by the Standard Deviations of the two variables:

\begin{equation*}
Corr(X,Y)=\frac{Cov(X,Y)}{StdDev(X)\,StdDev(Y)}
\end{equation*}

The Correlation of Hours Studied with Exam Grade is 0.84631 with \textbf{no units}, so the correlations of multiple pairs of variables can be compared.

Correlations are always between $-1$ and $1$, and are a quantification of the linear relationship between two variables. A correlation of zero means that there is no linear relationship between the two variables, although there may be a non-linear relationship. A correlation of $1$ or $-1$ indicates a perfect positive or negative linear relationship. $Corr(X,X)=1$ always.

\textbf{Correlation does not imply Causation!} Even if two variables have a high or perfect correlation, there is not necessarily causation. Causation means that X depends on Y or Y depends on X.

The squared value of the correlation, 71.6\%, called the Coefficient of Determination and denoted $R^2$, is a measure of the ``shared variance'' of the two variables, and the complement, 28.4\%, is the proportion of variance not explained by the association.

\section{Simple Linear Regression}

When a linear correlation exists between two variables, we can explore causation using a \textbf{Simple Linear Regression}, also called Ordinary Least Squares (OLS), regressing a dependent variable, denoted $Y$, on an independent variable, denoted $X$, as a line of the form:

\begin{equation*}
Y=\alpha + \beta X + \epsilon, \qquad \hat{Y}=\alpha + \beta X
\end{equation*}

\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{marginfigure}

{\centering \includegraphics[width=\maxwidth]{figure/graphics-ols-1}

}

\caption[Green regression line with prediction error, as noted in red on the chart]{Green regression line with prediction error, as noted in red on the chart}\label{fig:ols}
\end{marginfigure}
\end{knitrout}

This is very similar to the traditional algebra formula $y=mx+b$ with slope $m$ and y-intercept $b$. In this case, the slope is $\beta$.

\begin{equation*}
\beta=\frac{Cov(X,Y)}{Var(X)}=Corr(X,Y)\frac{StdDev(Y)}{StdDev(X)}
\end{equation*}

Regressing hours studied on exam grade (so that $Y$ is hours studied and $X$ is exam grade):

\begin{equation*}
\beta=\frac{16.3}{114.4}=0.14
\end{equation*}

The linear regression always goes through the point $(\bar{x},\bar{y})$, so returning to algebra, any point plus the slope determines the line:

\begin{equation*}
\alpha=\bar{y}-\beta\bar{x}
\end{equation*}

$\hat{\alpha}=\ensuremath{-6.91}$ for our regression. So,

\begin{equation*}
\hat{y}=\ensuremath{-6.91} + 0.14\,x
\end{equation*}

The predicted value for any $y_i$ is $\hat{y_i}$, and the prediction error is $\hat{\epsilon}_i=y_i - \hat{y_i}$.

Some properties of the Simple Linear Regression:
\begin{itemize}
\item $\Sigma_{i=1}^{N} \hat{\epsilon}_i=0 $
\item $\Sigma_{i=1}^{N} x_i \hat{\epsilon}_i=0 $
\item The predicted values $\hat{y_i}$ minimize the sum of the squared prediction errors, $\Sigma_{i=1}^{N} \hat{\epsilon}_i^2$, often referred to as the Sum of Squared Errors, or SSE.
\item The regression equation is valid to predict $\hat{y}$ values in the range of X, that is, on the interval (min(X),max(X)), and any prediction will be in the range of (min(Y),max(Y))
\end{itemize}

\end{document}
{ "alphanum_fraction": 0.6967877228, "avg_line_length": 38.0108695652, "ext": "tex", "hexsha": "0b94bc7a9b3f4dcd92468e52255dfd2c11d37edc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d58bdd7ff72c8b259f0d5bceb22586aa676c456a", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "KateDavis/MA2300", "max_forks_repo_path": "homework/studyhours/StudyHours.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d58bdd7ff72c8b259f0d5bceb22586aa676c456a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "KateDavis/MA2300", "max_issues_repo_path": "homework/studyhours/StudyHours.tex", "max_line_length": 465, "max_stars_count": null, "max_stars_repo_head_hexsha": "d58bdd7ff72c8b259f0d5bceb22586aa676c456a", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "KateDavis/MA2300", "max_stars_repo_path": "homework/studyhours/StudyHours.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3733, "size": 10491 }
\input{topmatter.tex} \input{macros} \title{Formalized Higher-Ranked Polymorphic Type Inference Algorithms} \author{\textbf{Jinxu Zhao}} \date{July 2021} \begin{document} \maketitle \begin{abstract} \input{Sources/Abstract} \end{abstract} %%---------------------%% \frontmatter %%---------------------%% \makedeclaration \makeAck \tableofcontents \listoffigures \listoftables % \listoftheorems[ignoreall, show={theorem,lemma}] %%---------------------%% \mainmatter %%---------------------%% \part{Prologue} \include{Sources/Introduction} \include{Sources/Background} %\begin{comment} \part{Higher-Ranked Type Inference Algorithms} \include{Sources/ITP} \include{Sources/ICFP} \include{Sources/Top} \part{Related Work} \include{Sources/Related} \part{Epilogue} \include{Sources/Conclusion} %\end{comment} % This ensures that the subsequent sections are being included as root % items in the bookmark structure of your PDF reader. \bookmarksetup{startatroot} %%---------------------%% % \backmatter %%---------------------%% \cleardoublepage \bibliographystyle{ACM-Reference-Format} \bibliography{main} % \part{Technical Appendix} % \appendix % \chapter{Proof Experience with Abella} \end{document}
{ "alphanum_fraction": 0.6778413737, "avg_line_length": 16.7534246575, "ext": "tex", "hexsha": "d23a6a75738067db88715735d0672c7945cdba4b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "JimmyZJX/Dissertation", "max_forks_repo_path": "Thesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "JimmyZJX/Dissertation", "max_issues_repo_path": "Thesis.tex", "max_line_length": 70, "max_stars_count": null, "max_stars_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "JimmyZJX/Dissertation", "max_stars_repo_path": "Thesis.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 333, "size": 1223 }
\subsection{Candidate set verification}
\label{sec:verification}
The pruning methods based on the \khop and the \ncl labels start with a \CR and prune some of the candidate vertices based on the conditions described in theorems \ref{thm:khop} and \ref{thm:ncl}. The verification step reduces \CR to \RS by retaining only those vertices $v$ for which there exists an isomorphism $\phi$ in which $\phi(u) = v$. Informally, it does this by checking if the pattern $P$ can be embedded at $v$ such that the total cost of label mismatch is at most $\alpha$. A vertex $v \in R(u)$ iff for any walk $w_p = (u_0=u), u_1,\ldots,u_m$ that covers all the edges in pattern $P$ there exists at least one walk $w_d = (v_0=v), v_1,\ldots, v_m$ in the database $G$ satisfying the following three conditions: i) $u_i = u_j \implies v_i = v_j$ ii) $(v_i, v_{i+1}) \in \eg$ iii) $\sum\matij{C}{L(u_i)}{L(v_i)} \leq \alpha$. Unlike the \ncl label condition, the above conditions are necessary and sufficient and can be verified by following the definition of isomorphism. Now, to check whether $v \in R(u)$, we first map $u$ to $v$ and subtract the cost $\matij{C}{L(u)}{L(v)}$ from the threshold $\alpha$. We then try to map the remaining vertices in $P$ by following $w_p$ one edge at a time. In any step $(u_i, u_{i+1})$, if $u_i$ and $u_{i+1}$ are mapped to $x$ and $y$ respectively, then we ensure that $(x, y) \in \eg$ (condition ii). If, on the other hand, $u_{i+1}$ is not mapped, then we map it to some vertex $y \in R'(u_{i+1})$ and subtract the cost $\matij{C}{L(u_{i+1})}{L(y)}$ from the remaining threshold. We backtrack if the remaining threshold is less than $0$. The vertex $v \in R(u)$ if we can complete the walk $w_p$ while satisfying the above three conditions. Consider checking whether the vertex $30 \in R(1)$ for the pattern in Figure~\ref{subfig:ex_sub}, and let $\alpha = 0.5$. The sequence $w_p = 1, 2, 4, 3, 1$ is a walk in the pattern that covers all the edges. In general, finding a walk that covers all the edges in a graph is a special case of the Chinese postman problem \cite{chinesepostman}. We first map $1$ to $30$ and subtract the cost $\matij{C}{L(1)}{L(30)} = 0.2$ from $0.5$. In the first step $(1,2)$, since $2$ is not mapped we map it to some vertex, say $20$. The cost of the mapping is $0.2$ and the remaining threshold is $0.3 - 0.2 = 0.1$. It can be verified that these mappings cannot complete the walk $w_p$. So we backtrack and map $2$ to another vertex, say $10$. The walk can be completed with the mappings as in $\phi_1$ in Table~\ref{subfig:ex_occur} and the remaining cost is $0.1$. The mappings of the pattern vertices not only imply that $30 \in R(1)$, they also tell us that $10, 60, 40$ represent vertices $2, 3, 4$, respectively. The above procedure can be easily extended to enumerate all the isomorphisms of the pattern.
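To make the procedure concrete, the sketch below is our illustration in Python and is not the implementation used in the experiments. The names are ours: \texttt{adj\_G} maps a database vertex to its neighbor set, \texttt{L\_P} and \texttt{L\_G} give vertex labels, \texttt{C} is the label-substitution cost matrix, \texttt{Rp} holds the pruned candidate sets $R'(\cdot)$, and \texttt{alpha} is the cost threshold; the sketch additionally assumes the mapping must be injective, as is standard for subgraph isomorphism.

\begin{verbatim}
# Illustrative sketch (names and data layout are assumptions, see text):
# checks whether database vertex v can represent the first pattern vertex
# of `walk` within the label-mismatch budget alpha.
def verify(v, walk, adj_G, L_P, L_G, C, Rp, alpha):
    budget = alpha - C[L_P[walk[0]]][L_G[v]]
    if budget < 0:
        return False
    phi = {walk[0]: v}                 # partial mapping, condition (i)

    def extend(i, budget):
        if i == len(walk) - 1:         # every edge of w_p has been matched
            return True
        u_next, x = walk[i + 1], phi[walk[i]]
        if u_next in phi:              # already mapped: only the edge must exist
            return phi[u_next] in adj_G[x] and extend(i + 1, budget)
        for y in Rp[u_next]:
            if y in phi.values() or y not in adj_G[x]:   # injectivity, condition (ii)
                continue
            cost = C[L_P[u_next]][L_G[y]]
            if budget < cost:          # condition (iii): prune this choice
                continue
            phi[u_next] = y
            if extend(i + 1, budget - cost):
                return True
            del phi[u_next]            # backtrack and try the next candidate
        return False

    return extend(0, budget)
\end{verbatim}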
{ "alphanum_fraction": 0.7124433601, "avg_line_length": 66.7209302326, "ext": "tex", "hexsha": "8ecf619d1313259637de734fe70e8ac11aa4b86c", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-05-08T11:17:33.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-08T11:17:33.000Z", "max_forks_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_forks_repo_path": "finalversion/sigkdd13/verify.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_issues_repo_path": "finalversion/sigkdd13/verify.tex", "max_line_length": 86, "max_stars_count": null, "max_stars_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_stars_repo_path": "finalversion/sigkdd13/verify.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 907, "size": 2869 }
\subsection{Discussion} The quantitative and qualitative data from iteration 2 show that the app works for "validating the coaches' level of knowledge during their education", which was the main purpose of the master thesis (see section \ref{sec:research-questions}). Now, the app should be tested in Uganda as well, and there can be an increased focus on "distance training" and "certify all staff". Iteration 2 has translated the theoretical understanding gained from answering research questions 1, 2, 3 and 5 into practical experience. Now there is observable evidence for what the interactions from Iteration 1 showed: \begin{itemize} \item The purpose of the coach training should be to prepare the coach for having great youth sessions \item Therefore, this is what the quizzes should assess \item What it really means to be a good YoungDrive coach is to have good youth sessions \end{itemize} \subsubsection{Validating the coaches' level of knowledge} The quiz results data shows that a lecture is still currently more valuable than taking a quiz in the app two times to improve (15.0\% to 12.8\%). For iteration 2, work on answering research question 4 has started. While the teacher has appreciated the multiple-choice assessment, challenges in designing test questions to support entrepreneurial learning have been found. It is clear that the app is valuable for assessment, increasing coach self-awareness and being a valuable indicator for the teacher. However, the questions formulated score low on Bloom's Revised Taxonomy \citep{krathwohl} compared with YoungDrive's educational objectives for the topics. There are previously documented issues with using multiple-choice for assessment and learning, but they seem to become especially relevant in the context of teaching entrepreneurship. Either the question formulation needs to be improved, or creative design solutions need to be experimented with that can increase coach understanding and identify and reduce guessing. This needs to be further investigated for iteration 3. The test with a university student, who scored 100\% correct, shows that common sense can go a long way, that the results cannot be fully trusted, and that multiple-choice questions have serious issues. This was already known during and before the coach training, but it needs to be addressed. Similarly, business and leadership experience for the coach seems to lead to a higher quiz average, while a low quiz average cannot be connected to any of the coach characteristics found during the interviews. This makes it hard to design the app for different types of coaches without testing other parameters, which should be done in iteration 4 for the summative test. \subsubsection{Distance Learning} In addition to the formative app tests, workshop \#2 heavily informs what is necessary when designing for use case 2, distance learning: preparing a session with regard to building confidence. The results from the workshop are somewhat surprising: the factors include not only those that relate to the four parameters from iteration 1 ("I am well prepared" and "I believe in myself"), but for some coaches also "I believe in God" or "I am certified" (which relates to purpose 3 of the app). These should be considered for iteration 3. Further, app tests expose how the app is currently not actively designed with learning in mind, and thus not for distance learning.
This is unfortunate, both because distance learning is important, and as the app test with refugee innovators shows that there is an opportunity doing entrepreneurship training in rural areas outside of YoungDrive's coverage area. In order for online coach-training to work for distance learning, learning and feedback, and not only assessment, is however essential. While it may be technically possible, the teacher desires the app support her during the coach training, not replace her. Therefore, completely replacing the teacher with an app should be avoided. The teacher is very important for giving coaching and educating in a way that the app can't. But the teacher can also be empowered by the app. For the future, Josefina would have liked to be able to stop coaches from having taught, if they do not have 90-100 \% correct information on the subject. Today, Josefina can not assess this. This means that some coaches, are teaching incorrect information to hundreds of youth. Here, the quiz has a very good need to fill. If wrong on an answer, the app today has no means of giving high pay-off tips to get to 100\%, or exposing you to deliberate practice or perceptual exposure. If the coach gets 9/10 correct answers reliably, or gets 5/10 answers with guesses, the coach still needs to retake all answers, not having learned the correct answer before taking the quiz again. How to develop the app to solve these issues, is not obvious. Multiple strategies could and should be used. The app could benefit from introducing smart feedback encouraging a growth mindset ("You did not get 100\% \textit{yet}") \cite{dweck}. \subsection{Next Iteration} After the meeting with the partner and expert group, the following was concluded from iteration \#2: \begin{itemize} \item The app is partly working on assessment now, but not for learning. Are coaches really learning via the app, especially learning to be better coaches? \begin{itemize} \item Multiple-choice is flawed in its current form. How can guesses be identified and reduced in a multiple-choice format? How can answering questions improve confidence and encourage learning? \item How can questions be formulated in a way that teaches entrepreneurship, which is so practical? \end{itemize} \item The need for a field app still feels relevant (especially for sessions long since the coach training) \begin{itemize} \item An app could be used, either before you start planning (to guide what you need to study the most on), or after you think you are ready (so you can assess and improve). \item When designing the app, it is concluded that an app for coach training, and an app to use before a youth session, should be able to be the same app if possible, since the purpose of preparing the coach to be great with its youth session is the same. \item Discussing the importance of self-reflection after a youth session with Josefina, led to asking more of such questions in coach quizzes. While Fun Atmosphere can be hard to assess using multiple-choice, can Correct Structure and Time Management be assessed? 
\end{itemize} \end{itemize} After the partner and expert meeting, it was decided that the following needs to be done for iteration 3: \begin{itemize} \item Make sure that the coach actually learns the desired educational objectives \begin{itemize} \item Create a new quiz guided by Josefina, "Are you ready for Session 9?", also to test if Correct Structure and Time Management could be assessed using multiple-choice \item See if design additions to multiple-choice can increase learning in-line with Bloom's revised taxonomy \end{itemize} \item Design quiz app for learning, focus on field app, and have a design that works stand-alone from the YoungDrive coach training in mind. \begin{itemize} \item Investigate the effect of giving growth mindset feedback in the app (The Power of Yet approach) \end{itemize} \item Test if the app created in Zambia could work also in Uganda \begin{itemize} \item This also means converting all the questions from the new (Zambia) manual to the old (Uganda) manual, since both structure and content of the manuals has changed. \end{itemize} \end{itemize}
{ "alphanum_fraction": 0.7941793393, "avg_line_length": 113.8507462687, "ext": "tex", "hexsha": "2b3c7c31559e0a915cbaa59dbf5a3b9464cf65c4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1e8639a356a7d2d4866819d7a569a24cc06e6a17", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "marcusnygren/YoungDriveMasterThesis", "max_forks_repo_path": "result/iteration2/iteration_2_together.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1e8639a356a7d2d4866819d7a569a24cc06e6a17", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "marcusnygren/YoungDriveMasterThesis", "max_issues_repo_path": "result/iteration2/iteration_2_together.tex", "max_line_length": 672, "max_stars_count": null, "max_stars_repo_head_hexsha": "1e8639a356a7d2d4866819d7a569a24cc06e6a17", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "marcusnygren/YoungDriveMasterThesis", "max_stars_repo_path": "result/iteration2/iteration_2_together.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1603, "size": 7628 }
\documentclass[../searching.tex]{subfiles}
\begin{document}
Searching a sorted array or string by brute force with a for loop takes $O(n)$ time. Binary search is designed to reduce the search time if the array or string is already sorted. It uses the divide and conquer method: each time we compare the target with the middle element of the array and use the comparison result to decide the next search region, either the left half or the right half. Therefore, at each step we filter out half of the array, which gives the recurrence $T(n) = T(n/2) + O(1)$ and decreases the time complexity to $O(\log n)$.

Binary search can be applied to different tasks:
\begin{enumerate}
\item Find the exact target.
\item Find the first position where value >= target (this is called lower\_bound).
\item Find the first position where value > target (this is called upper\_bound).
\end{enumerate}
\subsection{Standard Binary Search and Python Module bisect}
Binary search is usually carried out on a static sorted array or 2D matrix. There are three basic cases: (1) find the exact target where value = target; if there are duplicates, we are more likely to be asked to (2) find the first position that has value >= target, or (3) find the first position that has value > target. Here, we use two example arrays: one without duplicates and one with duplicates.
\begin{lstlisting}[language=Python]
a = [2, 4, 5, 9]
b = [0, 1, 1, 1, 1, 1]
\end{lstlisting}
\paragraph{Find the Exact Target}
This is the most basic application of binary search. We set two pointers, l and r. Each time we compute the middle position and check if it is equal to the target. If it is, return the position; if the middle value is smaller than the target, move to the right half; otherwise, move to the left half. The Python code is given:
\begin{lstlisting}[language=Python]
def standard_binary_search(lst, target):
    l, r = 0, len(lst) - 1
    while l <= r:
        mid = l + (r - l) // 2
        if lst[mid] == target:
            return mid
        elif lst[mid] < target:
            l = mid + 1
        else:
            r = mid - 1
    return -1 # target is not found
\end{lstlisting}
Now, run the example:
\begin{lstlisting}[language=Python]
print("standard_binary_search: ", standard_binary_search(a,3), standard_binary_search(a,4), standard_binary_search(b, 1))
\end{lstlisting}
The print out is:
\begin{lstlisting}
standard_binary_search:  -1 1 2
\end{lstlisting}
From the example, we can see that when multiple \textbf{duplicates} of the target exist, it can return any one of them. And for the case when the target does not exist, it simply returns -1. In reality, we might need to find a position where we can insert the target to keep the array sorted. There are two cases: (1) the first position where we can insert, which is the first position that has value >= target, and (2) the last position where we can insert, which is the first position that has value > target. For example, if we try to insert 3 in a and 1 in b, the first positions are 1 and 1 respectively, and the last positions are 1 and 6. For these two cases, the Python built-in module \textbf{bisect} offers two methods, bisect\_left() and bisect\_right(), for these two cases respectively.
\paragraph{Find the First Position that value >= target}
This way the target position separates the array into two halves: value < target, target\_position, value >= target. To achieve this, we make sure that if value < target we move to the right side; otherwise, we move to the left side.
\begin{lstlisting}[language=Python] # bisect_left, no longer need to check the mid element, # it separate the list in to two halfs: value < target, mid, value >= target def bisect_left_raw(lst, target): l, r = 0, len(lst)-1 while l <= r: mid = l + (r-l)//2 if lst[mid] < target: # move to the right half if the value < target, till l = mid + 1 #[mid+1, right] else:# move to the left half is value >= target r = mid - 1 #[left, mid-1] return l # the final position is where \end{lstlisting} % Now insert the value with: % \begin{lstlisting}[language=Python] % lst.insert(l+1, target) % \end{lstlisting} \paragraph{Find the First Position that value > target} This way the target position separates the array into two halves: value <= target, target\_position, value> target. Therefore, we simply change the condition of if value < target to if value <= target, then we move to the right side. \begin{lstlisting}[language=Python] #bisect_right: separate the list into two halfs: value<= target, mid, value > target def bisect_right_raw(lst, target): l, r = 0, len(lst)-1 while l <= r: mid = l + (r-l)//2 if lst[mid] <= target: l = mid + 1 else: r = mid -1 return l \end{lstlisting} Now, run an example: \begin{lstlisting}[language=Python] print("bisect left raw: find 3 in a :", bisect_left_raw(a,3), 'find 1 in b: ', bisect_left_raw(b, 1)) print("bisect right raw: find 3 in a :", bisect_right_raw(a, 3), 'find 1 in b: ', bisect_right_raw(b, 1)) \end{lstlisting} The print out is: \begin{lstlisting} bisect left raw: find 3 in a : 1 find 1 in b: 1 bisect right raw: find 3 in a : 1 find 1 in b: 6 \end{lstlisting} \paragraph{Bonus} For the last two cases, if we return the position as l-1, then we get the last position that value < target, and the last position value <= target. \paragraph{Python Built-in Module bisect} This module provides support for maintaining a list in sorted order without having to sort the list after each insertion. It offers six methods as shown in Table~\ref{tab:method_bisect}. However, only two are most commonly used: bisect\_left and bisect\_right. 
\begin{table}[h]
\begin{small}
\centering
\noindent\captionof{table}{ Methods of \textbf{bisect}}
 \noindent \begin{tabular}{|p{0.25\columnwidth}|p{0.75\columnwidth}| }
  \hline
Method & Description \\
\hline
bisect\_left(a, x, lo=0, hi=len(a)) & The parameters lo and hi may be used to specify a subset of the list; the function is the same as bisect\_left\_raw \\\hline
bisect\_right(a, x, lo=0, hi=len(a)) & The parameters lo and hi may be used to specify a subset of the list; the function is the same as bisect\_right\_raw \\\hline
bisect(a, x, lo=0, hi=len(a)) &Similar to bisect\_left(), but returns an insertion point which comes after (to the right of) any existing entries of x in a.\\ \hline
insort\_left(a, x, lo=0, hi=len(a)) &This is equivalent to a.insert(bisect.bisect\_left(a, x, lo, hi), x).\\ \hline
insort\_right(a, x, lo=0, hi=len(a)) & This is equivalent to a.insert(bisect.bisect\_right(a, x, lo, hi), x).\\ \hline
insort(a, x, lo=0, hi=len(a)) & Similar to insort\_left(), but inserting x in a after any existing entries of x.\\ \hline
\end{tabular}
  \label{tab:method_bisect}
  \end{small}
\end{table}
Let's see some example code:
\begin{lstlisting}[language=Python]
from bisect import bisect_left, bisect_right, bisect
print("bisect left: find 3 in a :", bisect_left(a,3), 'find 1 in b: ', bisect_left(b, 1)) # lower_bound, the first position that value >= target
print("bisect right: find 3 in a :", bisect_right(a, 3), 'find 1 in b: ', bisect_right(b, 1)) # upper_bound, the first position that value > target
\end{lstlisting}
The print out is:
\begin{lstlisting}
bisect left: find 3 in a : 1 find 1 in b:  1
bisect right: find 3 in a : 1 find 1 in b:  6
\end{lstlisting}
\subsection{Binary Search in Rotated Sorted Array}
\label{concept_binary_search_in_array}
An extension of standard binary search applies to arrays that are ordered in their own way, such as a rotated sorted array.
\paragraph{Binary Search in Rotated Sorted Array} (See LeetCode problem 33, Search in Rotated Sorted Array (medium).) Suppose an array (without duplicates) sorted in ascending order is rotated at some pivot unknown to you beforehand (i.e., 0 1 2 4 5 6 7 might become 4 5 6 7 0 1 2). You are given a target value to search. If found in the array return its index, otherwise return -1. You may assume no duplicate exists in the array.
\begin{lstlisting}[numbers=none]
Example 1:
Input: nums = [3,4,5,6,7,0,1,2], target = 0
Output: 5

Example 2:
Input: nums = [4,5,6,7,0,1,2], target = 3
Output: -1
\end{lstlisting}
A rotated sorted array is not purely monotonic. Instead, there is one drop in the array because of the rotation, which cuts the array into two sorted parts. Suppose we start a standard binary search on example 1: at first we check index 3, but should we then move to the left or to the right side? Assume we compare our middle item with the left item:
\begin{lstlisting}[numbers=none]
if nums[mid] > nums[l]: # the left half is sorted
elif nums[mid] < nums[l]: # the right half is sorted
else: # for case like [1,3], move to the right half
\end{lstlisting}
For a standard binary search, we simply compare the target with the middle item to decide which way to go. In this case, we can use elimination instead: check which side is sorted, because no matter where the left, right and middle indices are, there is always one side that is sorted. So if the left side is sorted and the target is in the range [nums[left], nums[mid]], then we move to the left part; otherwise we rule out the left side and move to the right side instead.
\begin{figure}[h]
    \centering
    \includegraphics[width=0.7\columnwidth]{fig/rotated_array.png}
    \caption{Example of Rotated Sorted Array}
    \label{fig:rotated_sorted_array}
\end{figure}
The code is shown:
\begin{lstlisting}[language=Python]
'''implement rotated binary search'''
def RotatedBinarySearch(nums, target):
    if not nums:
        return -1
    l, r = 0, len(nums)-1
    while l <= r:
        mid = l + (r-l)//2
        if nums[mid] == target:
            return mid
        if nums[l] < nums[mid]: # if the left part is sorted
            if nums[l] <= target <= nums[mid]:
                r = mid-1
            else:
                l = mid+1
        elif nums[l] > nums[mid]: # if the right side is sorted
            if nums[mid] <= target <= nums[r]:
                l = mid+1
            else:
                r = mid-1
        else:
            l = mid + 1
    return -1
\end{lstlisting}
\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\bccrayon,ombre=true]{What happens if there are duplicates in the rotated sorted array?}
In fact, a similar comparison rule applies:
\begin{lstlisting}[numbers=none]
if nums[mid] > nums[l]: # the left half is sorted
elif nums[mid] < nums[l]: # the right half is sorted
else: # for case like [1,3], or [1, 3, 1, 1, 1] or [3, 1, 2, 3, 3, 3] only l++
\end{lstlisting}
\end{bclogo}
%%%%%%%%%%%%%%binary search on result space%%%%%%%
\subsection{Binary Search on Result Space}
If the question gives us the context that the target is in the range [left, right] and we need to search for the first or last position that satisfies a condition function, we can apply the concepts of standard binary search, bisect\_left, bisect\_right, and their variants, where the condition function replaces the value comparison between the target and the element at the middle position. The steps we need are:
\begin{enumerate}
    \item get the result search range [l, r], which gives the initial values for the l and r pointers.
    \item decide the validity-check function that replaces a comparison such as if lst[mid] < target.
    \item decide which binary search we use: standard, bisect\_left/bisect\_right, or one of their variants.
\end{enumerate}
For example:
\begin{examples}[resume]
\item \textbf{441. Arranging Coins (easy)}. You have a total of n coins that you want to form in a staircase shape, where every k-th row must have exactly k coins. Given n, find the total number of full staircase rows that can be formed. n is a non-negative integer and fits within the range of a 32-bit signed integer.
\begin{lstlisting}[numbers=none]
Example 1:
n = 5
The coins can form the following rows:
*
* *
* *
Because the 3rd row is incomplete, we return 2.
\end{lstlisting}
\textbf{Analysis: } Given a number $n \geq 1$, the minimum number of rows is 1 and the maximum is n. Therefore, our possible result range is [1, n]. These can be treated as indices of a sorted array. For a given row count $r$, we write a function to check if it is feasible, namely whether $r(r+1)/2 \leq n$. For this problem, we need to search the range [1, n] to find the last position that is valid. This is a bisect\_right-style search (returning l-1), where the condition function replaces the comparison:
\begin{lstlisting}[language=Python]
def arrangeCoins(self, n):
    def isValid(row):
        return (row*(row+1))//2 <= n
    # we need to find the last position that is valid (<=)
    def bisect_right():
        l, r = 1, n
        while l <= r:
            mid = l + (r-l) // 2
            if isValid(mid): # replaces the comparison in the standard binary search
                l = mid + 1
            else:
                r = mid - 1
        return l-1
    return bisect_right()
\end{lstlisting}
\item \textbf{278. First Bad Version.} You are a product manager and currently leading a team to develop a new product. Unfortunately, the latest version of your product fails the quality check.
Since each version is developed based on the previous version, all the versions after a bad version are also bad. Suppose you have n versions [1, 2, ..., n] and you want to find out the first bad one, which causes all the following ones to be bad. You are given an API bool isBadVersion(version) which will return whether version is bad. Implement a function to find the first bad version. You should minimize the number of calls to the API.

Solution: we keep doing binary search until we have searched all possible positions, tracking the first bad version found so far.
\begin{lstlisting}[language = Python]
class Solution(object):
    def firstBadVersion(self, n):
        """
        :type n: int
        :rtype: int
        """
        l, r = 0, n-1
        last = -1
        while l <= r:
            mid = l + (r-l)//2
            if isBadVersion(mid+1): # mid is a 0-based index, so the version is mid+1
                r = mid - 1         # move to the left
                last = mid + 1      # track the first bad version found so far
            else:
                l = mid + 1         # move to the right
        return last
\end{lstlisting}
\end{examples}
% \subsection{Bisection Method} (second edition)
% The binary search principle can be used to find the root of a function that may be difficult to compute mathematically. We have not seen any problems that require this method on LeetCode yet. Thus we define the problem as:
% Find the monthly payment for a loan: You want to buy a car using loan and want to pay in monthly installment of d d
% \subsection{Python Library}
% Python has \textbf{bisect} module for binary search.
% \begin{lstlisting}[numbers=none]
% bisect.bisect_left(a, x): Return the leftmost index where we can insert x into a to maintain sorted order! Leftmost rl that satisfy: x<=a[rl]
% bisect.bisect_right(a, x): Return the rightmost index where we can insert x into a to maintain sorted order! Right most rr that satisfy: x>=a[rr]
% \end{lstlisting}
% For example:
% \begin{lstlisting}[language=Python]
% from bisect import bisect_left,bisect_right
% a = [1, 2, 3, 3, 3, 4, 5]
% p1, p2= bisect_left(a,3), bisect_right(a, 3)
% print(p1, p2)
% # output
% # 2, 5
% \end{lstlisting}
\subsection{LeetCode Problems}
\begin{examples}
\item \textbf{35. Search Insert Position (easy).} Given a sorted array and a target value, return the index if the target is found. If not, return the index where it would be if it were inserted in order. You can assume that there are no duplicates in the array.
\begin{lstlisting}[numbers=none]
Example 1:
Input: [1,3,5,6], 5
Output: 2

Example 2:
Input: [1,3,5,6], 2
Output: 1

Example 3:
Input: [1,3,5,6], 7
Output: 4

Example 4:
Input: [1,3,5,6], 0
Output: 0
\end{lstlisting}
\textbf{Solution: Standard Binary Search Implementation.} For this problem, we just standardize the Python code of binary search, which takes $O(\log n)$ time and $O(1)$ space without using recursion. In the following code, we use an exclusive right index len(nums); the loop therefore stops when l == r, and the returned position can be as small as 0 or as large as n (the array length) for targets that are smaller than or equal to nums[0] or larger than all elements. We can also make the right index inclusive.
\begin{lstlisting}[language = Python]
# exclusive version
def searchInsert(self, nums, target):
    l, r = 0, len(nums) # start from 0, end at len (exclusive)
    while l < r:
        mid = (l+r)//2
        if nums[mid] < target: # move to the right side
            l = mid+1
        elif nums[mid] > target: # move to the left side, not mid-1
            r = mid
        else: # found the target
            return mid
    # where the position should go
    return l
\end{lstlisting}
\begin{lstlisting}[language = Python]
# inclusive version
def searchInsert(self, nums, target):
    l = 0
    r = len(nums)-1
    while l <= r:
        m = (l+r)//2
        if target > nums[m]: # search the right half
            l = m+1
        elif target < nums[m]: # search the left half
            r = m-1
        else:
            return m
    return l
\end{lstlisting}
\end{examples}
Standard binary search:
\begin{enumerate}
\item 611. Valid Triangle Number (medium)
\item 704. Binary Search (easy)
\item 74. Search a 2D Matrix. Write an efficient algorithm that searches for a value in an m x n matrix. This matrix has the following properties:
\begin{enumerate}
    \item Integers in each row are sorted from left to right.
    \item The first integer of each row is greater than the last integer of the previous row.
\end{enumerate}
\begin{lstlisting}[numbers=none]
For example,
Consider the following matrix:
[
  [1,   3,  5,  7],
  [10, 11, 16, 20],
  [23, 30, 34, 50]
]
Given target = 3, return true.
\end{lstlisting}
Solution: 2D matrix search; the time complexity improves from $O(mn)$ for a linear scan to $O(\log m+\log n)$.
\begin{lstlisting}[language = Python]
def searchMatrix(self, matrix, target):
    """
    :type matrix: List[List[int]]
    :type target: int
    :rtype: bool
    """
    if not matrix:
        return False
    row, col = len(matrix), len(matrix[0])
    if row == 0 or col == 0: # for [[]]
        return False
    sr, er = 0, row-1 # first, search for the right row
    while sr <= er:
        mid = sr+(er-sr)//2
        if target > matrix[mid][-1]: # go to the rows below
            sr = mid+1
        elif target < matrix[mid][0]: # go to the rows above
            er = mid-1
        else: # the value might be in this row
            # search in this row
            lc, rc = 0, col-1
            while lc <= rc:
                midc = lc+(rc-lc)//2
                if matrix[mid][midc] == target:
                    return True
                elif target < matrix[mid][midc]: # go to the left
                    rc = midc-1
                else:
                    lc = midc+1
            return False
    return False
\end{lstlisting}
Also, we can treat it as one-dimensional; the time complexity is $O(\log(mn))$, which is the same as $O(\log m+\log n)$.
\begin{lstlisting}[language = Python]
class Solution:
    def searchMatrix(self, matrix, target):
        if not matrix or target is None:
            return False
        rows, cols = len(matrix), len(matrix[0])
        low, high = 0, rows * cols - 1
        while low <= high:
            mid = (low + high) // 2
            num = matrix[mid // cols][mid % cols]
            if num == target:
                return True
            elif num < target:
                low = mid + 1
            else:
                high = mid - 1
        return False
\end{lstlisting}
\end{enumerate}
Check \url{http://www.cnblogs.com/grandyang/p/6854825.html} to get more examples.

Search on rotated array and 2D matrix:
\begin{enumerate}
\item 81. Search in Rotated Sorted Array II (medium)
\item 153. Find Minimum in Rotated Sorted Array (medium). The key here is to compare the element at mid with its left neighbour: if nums[mid-1] is larger than nums[mid], then nums[mid] is the minimum.
\item 154. Find Minimum in Rotated Sorted Array II (hard)
\end{enumerate}
Search on Result Space:
\begin{enumerate}
\item 367. Valid Perfect Square (easy) (standard search)
\item 363. Max Sum of Rectangle No Larger Than K (hard)
\item 354. Russian Doll Envelopes (hard)
\item 69. Sqrt(x) (easy)
\end{enumerate}
\end{document}
{ "alphanum_fraction": 0.6725560625, "avg_line_length": 47.911627907, "ext": "tex", "hexsha": "3427a4c6d3e7248b1c6dbe69bef03f728720519a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_path": "Easy-Book/chapters/mastering/learning/search/binary_search.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_path": "Easy-Book/chapters/mastering/learning/search/binary_search.tex", "max_line_length": 838, "max_stars_count": null, "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_path": "Easy-Book/chapters/mastering/learning/search/binary_search.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5681, "size": 20602 }
\chapter{Statics}

% https://en.wikipedia.org/wiki/Timeline_of_fundamental_physics_discoveries

\emph{Thermodynamics} began as a theory of steam engines.

\emph{Volume} is how much space something occupies.

\emph{Density} is mass per volume.

% https://en.wikipedia.org/wiki/Work_(physics)
\index{definitions!work}%
\index{work!definition}%
\emph{Work}: If one lifts a weight \(F\) so that its height increases by \(h\), then one does a \emph{work} of \( W = F \cdot h \).
Coriolis defined this in 1826 \cite{coriolis1829calcul}, when steam engines lifted buckets of water out of flooded ore mines.
We shall generalize this definition later to the case where the force and the displacement make an angle.

% https://en.wikipedia.org/wiki/Power_(physics)
\index{definitions!power}%
\index{power!definition}%
\emph{Power} is work done per unit time: \( P = W / t \).
This means that a steam engine with twice the power will drain the same mine in half the time.

% https://hsm.stackexchange.com/questions/414/when-were-the-modern-notions-of-work-and-energy-created
% Helmholtz 1847?

\section{Archimedes's principle of buoyancy}

% https://en.wikipedia.org/wiki/Archimedes%27_principle
% https://en.wikipedia.org/wiki/On_Floating_Bodies

Put a solid into a container full of liquid.
The volume of the spilled part of the liquid is equal to the volume of the submerged part of the solid.

\index{Archimedes!principle of buoyancy}%
\index{laws named after people!Archimedes's principle of buoyancy}%
\index{laws!buoyancy}%
\paragraph{Archimedes's principle of buoyancy}
The buoyant force on the object is equal to the weight of the liquid displaced by the object.

\section{Pascal's law of fluid pressure transmission}

Blaise Pascal, 1647.

\emph{Pascal's law}: A pressure change applied to an enclosed incompressible fluid is transmitted undiminished to every part of the fluid.

\index{Pascal!law of fluid pressure transmission}%
\index{laws named after people!Pascal's law of fluid pressure transmission}%
\index{laws!fluid pressure transmission}%
\index{statics!Pascal's law of fluid pressure transmission}%
The hydrostatic pressure at depth \(h\) in a fluid of density \(\rho\) is \( P = \rho g h \).

\paragraph{Appreciating Pascal's barrel demonstration}
Counterintuitive: The hydrostatic pressure does not depend on \emph{how much} fluid.
It depends on \emph{how deep}.
\footnote{\url{https://www.youtube.com/watch?v=EJHrr21UvY8}}

\section{Understanding the zeroth law of thermodynamics}

Put hot iron into cold water.
Eventually both become equally warm.

\index{laws!thermodynamics, zeroth}%
\emph{Zeroth law of thermodynamics}: If two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
(The statement that heat never spontaneously flows from cold to hot is the Clausius form of the second law.)

\section{Unstructured content}

TODO Pendulum

\index{definitions!pendulum}%
\index{pendulum!definition}%
A pendulum is a bob hung on a string.

\emph{Conservation of mechanical energy}: A released pendulum comes back to the same height.

TODO Interplay between potential and kinetic energy: Galileo's interrupted pendulum

TODO Vacuum

Boyle showed that objects of different masses fall with the same acceleration.

TODO Torricelli manometer

TODO von Guericke, Magdeburg

TODO Boyle

TODO Pascal

Boyle's experiments

\index{laws named after people!Lavoisier's law of conservation of mass}%
TODO Lavoisier's law of conservation of mass

\section{Understanding energy}

Conservation of energy

Kinetic energy

\emph{Kinetic energy} is \( \frac{1}{2} m |v|^2 \), which can also be written as \( |p|^2 / (2m) \).
This is explained by energy conservation and the work done by a constant force \(F\) that accelerates an initially resting mass:
\(F = ma\), \(s = \frac{1}{2}at^2\), \( W = Fs \), and \( v = at \), therefore \( W = E_k = \frac{1}{2} m(at)^2 = \frac{1}{2}mv^2 \).
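As a quick numerical sanity check (an illustrative Python sketch with arbitrarily chosen values, not part of the original notes), the following snippet accelerates a resting mass with a constant force and confirms that the work done equals the kinetic energy gained:

\begin{verbatim}
# Sanity check of W = F*s = (1/2) m v^2 for a constant force
# accelerating a mass from rest; the values are arbitrary.
m = 2.0    # mass (kg)
F = 10.0   # constant force (N)
t = 3.0    # duration (s)

a = F / m             # acceleration
v = a * t             # final speed
s = 0.5 * a * t**2    # distance travelled
W = F * s             # work done by the force
E_k = 0.5 * m * v**2  # kinetic energy gained

print(W, E_k)         # both print 225.0
\end{verbatim}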
\section{Understanding gases} % https://en.wikipedia.org/wiki/Perfect_gas % https://en.wikipedia.org/wiki/Gas#Historical_synthesis A \emph{gas} is ... \emph{Pressure} is measured by a manometer. In statics, the \emph{volume} of a gas is the volume of its container. Statics assumes that a gas fills its container evenly. \emph{Temperature} is measured by a thermometer. The unit of temperature is \emph{kelvin} (K). % ? Gas and piston at equilibrium: Gas and a piston with weight \(F\). \section{Using gas laws} Let there be a container of gas with pressure \(P_1\) and volume \(V_1\). Let this gas expand or shrink without changing its temperature so that its pressure becomes \(P_2\) and its volume becomes \(V_2\). \index{laws!gas pressure and volume}% \index{laws named after people!Boyle's law of gas pressure and volume}% \index{Boyle!Boyle's law of gas pressure and volume}% \emph{Boyle's law}: \( P_1 V_1 = P_2 V_2 \). Other gas laws \emph{Charles's law}? \emph{Dalton's law}? % https://en.wikipedia.org/wiki/Dalton%27s_law % https://en.wikipedia.org/wiki/Combined_gas_law % https://en.wikipedia.org/wiki/Gay-Lussac%27s_law#Pressure-temperature_law % https://en.wikipedia.org/wiki/Avogadro%27s_law \index{laws!ideal gas} \emph{Ideal gas law}: \( PV = nRT \). Kinetic energy of one mole of gas is \( \frac{3}{2} RT \). Statistical thermodynamics: kinetic theory of gases? \section{Understanding Boltzmann's constant} % https://en.wikipedia.org/wiki/Boltzmann_constant \emph{Boltzmann's constant} relates the average kinetic energy of particles in a gas and the temperature of the gas? % https://en.wikipedia.org/wiki/Gas_constant The \emph{gas constant} (molar gas constant, universal gas constant, ideal gas constant)? \section{Understanding Avogadro's number} \emph{Avogadro's number} is? Terms? System and environment Thermodynamic equilibrium \section{Understanding heat} Heat capacity \emph{Black's principle}: When two liquids are mixed, the heat released by one equals the heat absorbed by the other. ??? ??? If \(m_1\) amount of water at temperature \(T_1\) is mixed with \(m_2\) amount of water at temperature \(T_2\), then the result, after equilibrium, is \(m_1+m_2\) amount of water at temperature \(\frac{m_1 T_1 + m_2 T_2}{m_1+m_2}\). Specific heat Latent heat \section{Understanding thermodynamic process and cycle?} Isobaric? Isochoric? Adiabatic? Expansion of gas? Work done by a gas? Carnot engine? Thermodynamic efficiency? \section{Understanding the laws of thermodynamics} % https://en.wikipedia.org/wiki/Laws_of_thermodynamics % https://en.wikipedia.org/wiki/History_of_entropy \section{Working with simple machines} % https://en.wikipedia.org/wiki/Simple_machine \UnorderedList{ \item Lever \item Wheel and axle \item{Pulley} \item{Tilted plane} \item{Wedge} \item{Screw} } TODO: Modern machine theory: Kinematic chains \section{On ignorance} In the 18th century, occasionally, steam boilers and coal mines exploded, killing tens of people. Then nuclear power plants exploded. What if a Dyson sphere exploded... % Chemistry % Thermostatics % Heat and fluids
{ "alphanum_fraction": 0.7580909769, "avg_line_length": 27.5925925926, "ext": "tex", "hexsha": "8d77666d5873060e89ccca7eb9cf28af054651bb", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_forks_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_forks_repo_name": "edom/work", "max_forks_repo_path": "research/physics/statics.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_issues_repo_issues_event_max_datetime": "2022-02-16T00:55:32.000Z", "max_issues_repo_issues_event_min_datetime": "2020-12-02T18:37:37.000Z", "max_issues_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_issues_repo_name": "edom/work", "max_issues_repo_path": "research/physics/statics.tex", "max_line_length": 137, "max_stars_count": null, "max_stars_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_stars_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_stars_repo_name": "edom/work", "max_stars_repo_path": "research/physics/statics.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1822, "size": 6705 }
\providecommand{\main}{..}
\documentclass[\main/thesis.tex]{subfiles}

\begin{document}

\chapter{A gentle introduction to dependently typed programming in Agda}\label{agda}

There are already plenty of tutorials and introductions to Agda \cite{norell2009dependently}\cite{FLOLAC16DTP}\cite{brutal}.
We will nonetheless compile a simple and self-contained tutorial from the materials cited above, covering the parts (and only the parts) we need in this thesis.
Some of the more advanced constructions (such as views and universes) will not be introduced in this chapter, but in other places where we need them.

\paragraph{Remark} We assume that readers have some basic understanding of Haskell.
Readers who are familiar with Agda and dependently typed programming may skip this chapter.

\section{Some basics}

Agda is a \textit{dependently typed functional programming language} and also an \textit{interactive proof assistant}.
This language can serve both purposes because it is based on \textit{Martin-Löf type theory}\cite{martin1984intuitionistic}, hence the Curry-Howard correspondence\cite{sorensen2006lectures}, which states that ``propositions are types'' and ``proofs are programs.''
In other words, proving theorems and writing programs are essentially the same, and in Agda we are free to interchange between these two interpretations.
The current version (Agda2) is a complete rewrite by Ulf Norell during his Ph.D. at Chalmers University of Technology.

We say that Agda is interactive because theorem proving involves a lot of conversation between the programmer and the type checker.
Moreover, it is often difficult, if not impossible, to develop and prove a theorem at one stroke.
Just like programming, the process is incremental.
So Agda allows us to leave some ``holes'' in a program, refine them gradually, and complete the proofs ``hole by hole''.
Take this half-finished function definition for example.

\begin{lstlisting}
is-zero : ℕ → Bool
is-zero x = ?
\end{lstlisting}

We can leave out the right-hand side and ask: ``what's the type of the goal?'', ``what's the context of this case?'', etc.
Agda would reply with:

\begin{lstlisting}
GOAL : Bool
x : ℕ
\end{lstlisting}

Next, we may ask Agda to pattern match on {\lstinline|x|} and rewrite the program for us:

\begin{lstlisting}
is-zero : ℕ → Bool
is-zero zero    = ?
is-zero (suc x) = ?
\end{lstlisting}

We could fulfill these goals by giving an answer, or even ask Agda to solve the problem (by pure guessing) for us if it is not too difficult.

\begin{lstlisting}
is-zero : ℕ → Bool
is-zero zero    = true
is-zero (suc x) = false
\end{lstlisting}

After all of the goals have been accomplished and type-checked, we consider the program to be finished.
Often, there is not much point in running an Agda program, because it is mostly about compile-time static constructions.
This is what programming and proving things look like in Agda.

\section{Simply typed programming in Agda}

Since Agda was heavily influenced by Haskell, simply typed programming in Agda is similar to that in Haskell.

\paragraph{Datatypes} Unlike in other programming languages, there are no ``built-in'' datatypes such as \textit{Int}, \textit{String}, or \textit{Bool}.
The reason is that they can all be created out of thin air, so why bother having them predefined?
Datatypes are introduced with {\lstinline|data|} declarations.
Here is a classical example, the type of booleans.
\begin{lstlisting} data Bool : Set where true : Bool false : Bool \end{lstlisting} This declaration brings the name of the datatype ({\lstinline|Bool|}) and its constructors ({\lstinline|true|} and {\lstinline|false|}) into scope. The notation allow us to explicitly specify the types of these newly introduced entities. \begin{enumerate} \item {\lstinline|Bool|} has type {\lstinline|Set|}\footnote{{\lstinline|Set|} is the type of small types, and {\lstinline|Set₁|} is the type of {\lstinline|Set|}, and so on. They form a hierarchy of types.} \item {\lstinline|true|} has type {\lstinline|Bool|} \item {\lstinline|false|} has type {\lstinline|Bool|} \end{enumerate} \paragraph{Pattern matching} Similar to Haskell, datatypes are eliminated by pattern matching. Here is a function that pattern matches on {\lstinline|Bool|}. \begin{lstlisting} not : Bool → Bool not true = false not false = true \end{lstlisting} Agda is a \textit{total} language, which means that partial functions are not valid constructions. Programmers are obliged to convince Agda that a program terminates and does not crash on all possible inputs. The following example will not be accepted by the termination checker because the case {\lstinline|false|} is missing. \begin{lstlisting} not : Bool → Bool not true = false \end{lstlisting} \paragraph{Inductive datatype} Let us move on to a more interesting datatype with an inductive definition. Here is the type of natural numbers. \begin{lstlisting} data ℕ : Set where zero : ℕ suc : ℕ → ℕ \end{lstlisting} The decimal number ``4'' is represented as {\lstinline|suc (suc (suc (suc zero)))|}. Agda also accepts decimal literals if the datatype {\lstinline|ℕ|} complies with certain language pragma. Addition on {\lstinline|ℕ|} can be defined as a recursive function. \begin{lstlisting} _+_ : ℕ → ℕ → ℕ zero + y = y suc x + y = suc (x + y) \end{lstlisting} We define {\lstinline|_+_|} by pattern matching on the first argument, which results in two cases: the base case, and the inductive step. We are allowed to make recursive calls, as long as the type checker is convinced that the function would terminate. Those underlines surrounding {\lstinline|_+_|} act as placeholders for arguments, making it an infix function in this instance. \paragraph{Dependent functions and type arguments} Up till now, everything looks much the same as in Haskell, but a problem arises as we move on to defining something that needs more power of abstraction. Take identity functions for example: \begin{lstlisting} id-Bool : Bool → Bool id-Bool x = x id-ℕ : ℕ → ℕ id-ℕ x = x \end{lstlisting} In order to define a more general identity function, those concrete types need to be abstracted away. That is, we need \textit{parametric polymorphism}, and this is where dependent types come into play. A dependent type is a type whose definition may depend on a value. A dependent function is a function whose type may depend on a value of its arguments. In Agda, function types are denoted as: \begin{lstlisting} A → B \end{lstlisting} % where {\lstinline|A|} is the type of domain and {\lstinline|B|} is the type of codomain. To let {\lstinline|B|} depends on the value of {\lstinline|A|}, the value has to \textit{named}. In Agda we write: \begin{lstlisting} (x : A) → B x \end{lstlisting} The value of {\lstinline|A|} is named {\lstinline|x|} and then fed to {\lstinline|B|}. As a matter of fact, {\lstinline|A → B|} is just a syntax sugar for {\lstinline|(_ : A) → B|} with the name of the value being irrelevant. 
The underline {\lstinline|_|} here means ``I don't bother naming it''. Back to our identity function, if {\lstinline|A|} happens to be {\lstinline|Set|}, the type of all small types, and the result type happens to be solely {\lstinline|x|}: \begin{lstlisting} (x : Set) → x \end{lstlisting} Voila, we have polymorphism, and thus the identity function can now be defined as: \begin{lstlisting} id : (A : Set) → A → A id A x = x \end{lstlisting} {\lstinline|id|} now takes an extra argument, the type of the second argument. {\lstinline|id Bool true|} evaluates to {\lstinline|true|}. \paragraph{Implicit arguments} We have implemented an identity function and seen how polymorphism can be modeled with dependent types. However, the additional argument that the identity function takes is rather unnecessary, since its value can always be determined by looking at the type of the second argument. Fortunately, Agda supports \textit{implicit arguments}, a syntax sugar that could save us the trouble of having to spell them out. Implicit arguments are enclosed in curly brackets in the type expression. We are free to dispense with these arguments when their values are irrelevant to the definition. \begin{lstlisting} id : {A : Set} → A → A id x = x \end{lstlisting} Or when the type checker can figure them out on function application. \begin{lstlisting} val : Bool val = id true \end{lstlisting} Any arguments can be made implicit, but it does not imply that values of implicit arguments can always be inferred or derived from context. We can always make them implicit arguments explicit on application: \begin{lstlisting} val : Bool val = id {Bool} true \end{lstlisting} Or when they are relevant to the definition: \begin{lstlisting} silly-not : {_ : Bool} → Bool silly-not {true} = false silly-not {false} = true \end{lstlisting} \paragraph{More syntax sugars} We could skip arrows between arguments in parentheses or braces: \begin{lstlisting} id : {A : Set} (a : A) → A id {A} x = x \end{lstlisting} Also, there is a shorthand for merging names of arguments of the same type, implicit or not: \begin{lstlisting} const : {A B : Set} → A → B → A const a _ = a \end{lstlisting} Sometimes when the type of some value can be inferred, we could either replace the type with an underscore, say {\lstinline|(A : _)|}, or we could write it as {\lstinline|∀ A|}. For the implicit counterpart, {\lstinline|{A : _}|} can be written as {\lstinline|∀ {A}|}. \paragraph{Parameterized Datatypes} Just as functions can be polymorphic, datatypes can be parameterized by other types, too. The datatype of lists is defined as follows: \begin{lstlisting} data List (A : Set) : Set where [] : List A _∷_ : A → List A → List A \end{lstlisting} The scope of the parameters spreads over the entire declaration so that they can appear in the constructors. Here are the types of the datatype and its constructors. \begin{lstlisting} infixr 5 _∷_ [] : {A : Set} → List A _∷_ : {A : Set} → A → List A → List A List : Set → Set \end{lstlisting} % where {\lstinline|A|} can be anything, even {\lstinline|List (List (List Bool))|}, as long as it is of type {\lstinline|Set|}. {\lstinline|infixr|} specifies the precedence of the operator {\lstinline|_∷_|}. \paragraph{Indexed Datatypes} % Indexed datatypes, or inductive families, allow us to not only {\lstinline|Vec|} is a datatype that is similar to {\lstinline|List|}, but more powerful, in that it encodes not only the type of its element but also its length. 
\begin{lstlisting}
data Vec (A : Set) : ℕ → Set where
  []  : Vec A zero
  _∷_ : {n : ℕ} → A → Vec A n → Vec A (suc n)
\end{lstlisting}

{\lstinline|Vec A n|} is a vector of values of type {\lstinline|A|} that has length {\lstinline|n|}.
Here are some of its inhabitants:

\begin{lstlisting}
nil : Vec Bool zero
nil = []

vec : Vec Bool (suc (suc zero))
vec = true ∷ false ∷ []
\end{lstlisting}

We say that {\lstinline|Vec|} is \textit{parameterized} by a type of {\lstinline|Set|} and is \textit{indexed} by values of {\lstinline|ℕ|}.
We distinguish indices from parameters, although it is not obvious how they are different by looking at the declaration.
Parameters are \textit{parametric}, in the sense that they have no effect on the ``shape'' of a datatype.
The choice of parameters only affects which kinds of values are placed there.
Pattern matching on parameters does not reveal any insight about the constructors.
Because parameters are \textit{uniform} across all constructors, one can always replace the value of a parameter with another one of the same type.
On the other hand, indices may affect which inhabitants are allowed in the datatype.
Different constructors may have different indices.
In that case, pattern matching on indices may yield relevant information about their constructors.
For example, given a term whose type is {\lstinline|Vec Bool zero|}, we are certain that the constructor must be {\lstinline|[]|}, and if the type is {\lstinline|Vec Bool (suc n)|} for some {\lstinline|n|}, then the constructor must be {\lstinline|_∷_|}.
We could, for instance, define a {\lstinline|head|} function that cannot crash.

\begin{lstlisting}
head : ∀ {A n} → Vec A (suc n) → A
head (x ∷ xs) = x
\end{lstlisting}

As a side note, parameters can be thought of as a degenerate case of indices whose distribution of values is uniform across all constructors.

\paragraph{With abstraction} Say we want to define {\lstinline|filter|} on {\lstinline|List|}:

\begin{lstlisting}
filter : ∀ {A} → (A → Bool) → List A → List A
filter p []       = []
filter p (x ∷ xs) = ?
\end{lstlisting}

We are stuck here because the result of {\lstinline|p x|} is only available at runtime.
Fortunately, with abstraction allows us to pattern match on the result of an intermediate computation by adding the result as an extra argument on the left-hand side:

\begin{lstlisting}
filter : ∀ {A} → (A → Bool) → List A → List A
filter p []       = []
filter p (x ∷ xs) with p x
filter p (x ∷ xs) | true  = x ∷ filter p xs
filter p (x ∷ xs) | false = filter p xs
\end{lstlisting}

\paragraph{Absurd patterns} The \textit{unit type}, or \textit{top}, is a datatype inhabited by exactly one value, denoted {\lstinline|tt|}.

\begin{lstlisting}
data ⊤ : Set where
  tt : ⊤
\end{lstlisting}

The \textit{empty type}, or \textit{bottom}, on the other hand, is a datatype that is inhabited by nothing at all.

\begin{lstlisting}
data ⊥ : Set where
\end{lstlisting}

These types seem useless, and without constructors, it is impossible to construct an instance of {\lstinline|⊥|}.
What is a type that cannot be constructed good for?
Say we want to define a safe {\lstinline|head|} on {\lstinline|List|} that does not crash on any input.
Naturally, in a language like Haskell, we would come up with a predicate like this to filter out empty lists {\lstinline|[]|} before passing them to {\lstinline|head|}.

\begin{lstlisting}
non-empty : ∀ {A} → List A → Bool
non-empty []       = false
non-empty (x ∷ xs) = true
\end{lstlisting}

The predicate only works at runtime.
It is impossible for the type checker to determine whether the input is empty or not at compile time. However, things are quite different quite in Agda. With \textit{top} and \textit{bottom}, we could do some tricks on the predicate, making it returns a \textit{Set}, rather than a \textit{Bool}! \begin{lstlisting} non-empty : ∀ {A} → List A → Set non-empty [] = ⊥ non-empty (x ∷ xs) = ⊤ \end{lstlisting} Notice that now this predicate is returning a type. So we can use it in the type expression. {\lstinline|head|} can thus be defined as: \begin{lstlisting} head : ∀ {A} → (xs : List A) → non-empty xs → A head [] proof = ? head (x ∷ xs) proof = x \end{lstlisting} In the {\lstinline|(x ∷ xs)|} case, the argument {\lstinline|proof|} would have type {\lstinline|⊤|}, and the right-hand side is simply {\lstinline|x|}; in the {\lstinline|[]|} case, the argument {\lstinline|proof|} would have type {\lstinline|⊥|}, but what should be returned at the right-hand side? It turns out that, the right-hand side of the {\lstinline|[]|} case would be the least thing to worry about because it is completely impossible to have such a case. Recall that {\lstinline|⊥|} has no inhabitants, so if a case has an argument of that type, it is too good to be true. Type inhabitance is, in general, an undecidable problem. However, when pattern matching on a type that is obviously empty (such as {\lstinline|⊥|}), Agda allows us to drop the right-hand side and eliminate the argument with {\lstinline|()|}. \begin{lstlisting} head : ∀ {A} → (xs : List A) → non-empty xs → A head [] () head (x ∷ xs) proof = x \end{lstlisting} Whenever {\lstinline|head|} is applied to some list {\lstinline|xs|}, the programmer is obliged to convince Agda that {\lstinline|non-empty xs|} reduces to {\lstinline|⊤|}, which is only possible when {\lstinline|xs|} is not an empty list. On the other hand, applying an empty list to {\lstinline|head|} would result in a function of type {\lstinline|head [] : ⊥ → A|} which is impossible to be fulfilled. \paragraph{Propositions as types, proofs as programs} The previous paragraphs are mostly about the \textit{programming} aspect of the language, but there is another aspect to it. Recall the Curry–Howard correspondence, propositions are types and proofs are programs. A proof exists for a proposition the way that a value inhabits a type. {\lstinline|non-empty xs|} is a type, but it can also be thought of as a proposition stating that {\lstinline|xs|} is not empty. When {\lstinline|non-empty xs|} evaluates to {\lstinline|⊥|}, no value inhabits {\lstinline|⊥|}, which means no proof exists for the proposition {\lstinline|⊥|}; when {\lstinline|non-empty xs|} evaluates to {\lstinline|⊤|}, {\lstinline|tt|} inhabits {\lstinline|⊥|}, a trivial proof exists for the proposition {\lstinline|⊤|}. In intuitionistic logic, a proposition is considered to be ``true'' when it is inhabited by a proof, and considered to be ``false'' when there exists no proof. Contrary to classical logic, where every propositions are assigned one of two truth values. We can see that {\lstinline|⊤|} and {\lstinline|⊥|} corresponds to \textit{true} and \textit{false} in this sense. Negation is defined as a function from a proposition to {\lstinline|⊥|}. \begin{lstlisting} ¬ : Set → Set ¬ P = P → ⊥ \end{lstlisting} We could exploit {\lstinline|⊥|} further to deploy the principle of explosion of intuitionistic logic, which states that: ``from falsehood, anything (follows)'' (Latin: \textit{ex falso (sequitur) quodlibet}). 
\begin{lstlisting}
⊥-elim : ∀ {Whatever : Set} → ⊥ → Whatever
⊥-elim ()
\end{lstlisting}

\paragraph{Decidable propositions} A proposition is decidable when it can be proved \textit{or} disproved.
\footnote{The connective \textit{or} here is not a disjunction in the classical sense. Either way, a proof or a disproof has to be given.}

\begin{lstlisting}
data Dec (P : Set) : Set where
  yes :   P → Dec P
  no  : ¬ P → Dec P
\end{lstlisting}

{\lstinline|Dec|} is very similar to its two-valued cousin {\lstinline|Bool|}, but way more powerful, because it also explains (with a proof) why a proposition holds or why it does not.

Suppose we want to know if a natural number is even or odd.
We know that {\lstinline|zero|} is an even number, and if a number is even then its successor's successor is even as well.

\begin{lstlisting}
data Even : ℕ → Set where
  base : Even zero
  step : ∀ {n} → Even n → Even (suc (suc n))
\end{lstlisting}

We need the opposite of {\lstinline|step|} as a lemma as well.

\begin{lstlisting}
two-steps-back : ∀ {n} → ¬ (Even n) → ¬ (Even (suc (suc n)))
two-steps-back ¬p q = ?
\end{lstlisting}

{\lstinline|two-steps-back|} takes two arguments instead of one because the return type {\lstinline|¬ (Even (suc (suc n)))|} is actually a synonym of {\lstinline|Even (suc (suc n)) → ⊥|}.
Pattern matching on the second argument of type {\lstinline|Even (suc (suc n))|} further reveals that it could only have been constructed by {\lstinline|step|}.
By contradicting {\lstinline|¬p : ¬ (Even n)|} and {\lstinline|p : Even n|}, we complete the proof of this lemma.

\begin{lstlisting}
contradiction : ∀ {P Whatever : Set} → P → ¬ P → Whatever
contradiction p ¬p = ⊥-elim (¬p p)

two-steps-back : ∀ {n} → ¬ (Even n) → ¬ (Even (suc (suc n)))
two-steps-back ¬p (step p) = contradiction p ¬p
\end{lstlisting}

Finally, {\lstinline|Even?|} decides whether a number is even by induction on its predecessor's predecessor.
{\lstinline|step|} and {\lstinline|two-steps-back|} can be viewed as functions that transform proofs.

\begin{lstlisting}
Even? : (n : ℕ) → Dec (Even n)
Even? zero          = yes base
Even? (suc zero)    = no (λ ())
Even? (suc (suc n)) with Even? n
Even? (suc (suc n)) | yes p = yes (step p)
Even? (suc (suc n)) | no ¬p = no (two-steps-back ¬p)
\end{lstlisting}

The syntax {\lstinline|λ ()|} may look weird; it is an absurd lambda, obtained by eliminating the argument of a lambda expression {\lstinline|λ x → ?|} whose type ({\lstinline|Even (suc zero)|}) is obviously empty.
It is a convention to suffix a decidable function's name with {\lstinline|?|}.

\paragraph{Propositional equality} Saying that two things are ``equal'' is a notoriously intricate topic in type theory.
There are many different notions of equality \cite{equality}.
We will not go into each kind of equality in depth but only skim through those that exist in Agda.

\textit{Definitional equality}, or \textit{intensional equality}, is simply a synonym, a relation between linguistic expressions.
It is a primitive judgement of the system, stating that two things are the same to the type checker \textbf{by definition}.

\textit{Computational equality} is a slightly more powerful notion.
Two programs are considered equal if they compute (beta-reduce) to the same value.
For example, {\lstinline|1 + 1|} and {\lstinline|2|} are equal in Agda in this notion.
However, expressions such as {\lstinline|a + b|} and {\lstinline|b + a|} are not considered equal by Agda, neither \textit{definitionally} nor \textit{computationally}, because there are simply no rules in Agda saying so.
{\lstinline|a + b|} and {\lstinline|b + a|} are only \textit{extensionally equal} in the sense that, given \textbf{any} pair of numbers, say {\lstinline|1|} and {\lstinline|2|}, Agda can see that {\lstinline|1 + 2|} and {\lstinline|2 + 1|} are computationally equal. But when it comes to \textbf{every} pair of numbers, Agda fails to justify that. We could convince Agda about the fact that {\lstinline|a + b|} and {\lstinline|b + a|} are equal for every pair of {\lstinline|a|} and {\lstinline|b|} by encoding this theorem in a \textit{proposition} and then prove that the proposition holds. This kind of proposition can be expressed with \textit{identity types}. \begin{lstlisting} data _≡_ {A : Set} (x : A) : A → Set where refl : x ≡ x \end{lstlisting} This inductive datatype says that: for all {\lstinline|a b : A|}, if {\lstinline|a|} and {\lstinline|b|} are \textit{computationally equal}, that is, both computes to the same value, then {\lstinline|refl|} is a proof of {\lstinline|a ≡ b|}, and we say that {\lstinline|a|} and {\lstinline|b|} are \textit{propositionally equal}! {\lstinline|_≡_|} is an equivalence relation. It means that {\lstinline|_≡_|} is \textit{reflexive} (by definition), \textit{symmetric} and \textit{transitive}. \begin{lstlisting} sym : {A : Set} {a b : A} → a ≡ b → b ≡ a sym refl = refl trans : {A : Set} {a b c : A} → a ≡ b → b ≡ c → a ≡ c trans refl refl = refl \end{lstlisting} {\lstinline|_≡_|} is congruent, meaning that we could \textbf{substitute equals for equals}. \begin{lstlisting} cong : {A B : Set} {a b : A} → (f : A → B) → a ≡ b → f a ≡ f b cong f refl = refl \end{lstlisting} Although these {\lstinline|refl|}s look all the same at term level, they are proofs of different propositional equalities. \paragraph{Dotted patterns} Consider an alternative version of {\lstinline|sym|} on {\lstinline|ℕ|}. \begin{lstlisting} sym' : (a b : ℕ) → a ≡ b → b ≡ a sym' a b eq = ? \end{lstlisting} % where {\lstinline|eq|} has type {\lstinline|a ≡ b|}. If we pattern match on {\lstinline|eq|} then Agda would rewrite {\lstinline|b|} as {\lstinline|.a|} and the goal type becomes {\lstinline|a ≡ a|}. \begin{lstlisting} sym' : (a .a : ℕ) → a ≡ a → a ≡ a sym' a .a refl = ? \end{lstlisting} What happened under the hood is that {\lstinline|a|} and {\lstinline|b|} are \textit{unified} as the same thing. The second argument is dotted to signify that it is \textit{constrained} by the first argument {\lstinline|a|}. {\lstinline|a|} becomes the only argument available for further binding or pattern matching. \paragraph{Standard library} It would be inconvenient if we have to construct everything we need from scratch. Luckily, the community has maintained a standard library that comes with many useful and common constructions. The standard library is not ``chartered'' by the compiler or the type checker, there's simply nothing special about it. We may as well as roll our own library. \footnote{Some primitives that require special treatments, such as IO, are taken care of by language pragmas provided by Agda.} \end{document}
{ "alphanum_fraction": 0.7288128641, "avg_line_length": 36.4394618834, "ext": "tex", "hexsha": "8167e0223ba27e4d942d58871d533cc3d974ef5d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_forks_event_min_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "banacorn/numeral", "max_forks_repo_path": "Thesis/tex/agda.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "banacorn/numeral", "max_issues_repo_path": "Thesis/tex/agda.tex", "max_line_length": 145, "max_stars_count": 1, "max_stars_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "banacorn/numeral", "max_stars_repo_path": "Thesis/tex/agda.tex", "max_stars_repo_stars_event_max_datetime": "2015-04-23T15:58:28.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-23T15:58:28.000Z", "num_tokens": 6800, "size": 24378 }
\chapter{Introduction}
\section{Autonomous UAV}
Unmanned aerial vehicles (UAVs) are aircraft with no human on board; they are controlled remotely or automatically. UAVs are gaining popularity, both in terms of academic research and potential applications \cite{valavanis2015handbook}. UAVs fall into two major sub-classes: fixed-wing and rotary-wing. Rotary-wing UAVs have received growing attention in recent years thanks to improvements in embedded microprocessors and batteries. Surveillance \cite{semsch2009autonomous,puri2005survey}, disaster management \cite{maza2011experimental, birk2011safety}, and rescue missions \cite{alotaibi2019lsar} are only a few examples of the broad field of applications of rotary-wing UAVs. The majority of recent research has focused on quadcopters, which are rotary-wing aircraft with four rotors \cite{luukkonen2011modelling, gheorghictua2015quadcopter, wang2016dynamics, bashi2017unmanned}, thanks to their agility and ease of control. On the other hand, single-rotor helicopters have received less attention from researchers, mainly because they are intrinsically unstable, have highly coupled nonlinear dynamics, and are easily disturbed by wind gusts.

The helicopter is the principal representative of the rotary-wing family. The conventional helicopter layout has two engine-powered rotors: the main rotor and the tail rotor. The main rotor generates the thrust for the helicopter's elevation. The tail rotor offsets the main rotor torque and maintains the helicopter's orientation. A change in the body orientation of the helicopter results in an inclination of the main rotor and therefore generates the propulsive force for the helicopter's longitudinal/lateral movement. Small helicopters retain all the flying features and physical principles of their full-sized counterparts. Moreover, in comparison to full-scale helicopters, they are inherently more manoeuvrable and capable. Due to their satisfactory flying ability, size, and low cost, the UAV research community has engaged in developing low-cost and reliable autonomous navigation technologies for them.

Four control inputs are used for the helicopter: two cyclic controls, which handle the helicopter's longitudinal/lateral movement; a collective control for vertical movement; and, lastly, the pedal control for the helicopter's heading. Unconstrained helicopter movement is governed by an underactuated system, in which the number of control inputs (4) is less than the number of degrees of freedom to be controlled (6 DOF), making it difficult to use the traditional approach for controlling Euler–Lagrange systems (which is usually used in industrial automation). For these reasons, much research has concentrated on control methods for unmanned helicopters that ensure stability and robustness. These factors lead to a complex control problem for single-rotor small-scale helicopters. However, the payload capacity of these helicopters is superior to that of quadcopters, making them more suitable for transportation in emergency situations \cite{quan2017introduction}. As single-rotor small-scale helicopters have received less attention, this study will focus on this type of UAV.

The exact dynamics of the helicopter are unknown and, as in most engineering disciplines, are represented by reduced-order mathematical models that capture the mechanically relevant behaviour.
It should be emphasized that the estimated model is simply a "abstract concept" since a comprehensive description of the real dynamics of the helicopter is almost infeasible \cite{ren2012modeling}. As a single-rotor helicopter is unstable by nature, it requires a flight control system that operates the vehicle, which is like a human pilot in a large, scaled helicopter. As a result, the flight control can either accept remote control input from an operator or operate autonomously. Remote control of single rotor helicopters is not economically viable, so autonomous control is preferred for most commercial applications. Therefore, the autonomous control of unmanned aerial vehicles (UAVs) is the goal of this research. \section{Traditional Control Systems} Control of single rotor helicopters is studied through classic (continuous) or modern (digital) control approaches. Most helicopter systems are inherently non-linear, with non-linear differential equations specified for their dynamics. Researchers, however, generally construct linearized helicopter systems models for analytical purposes. In particular, if this system runs around an operational point and the signals involved are minor, a linear model that estimates a certain non-linear helicopter system may be produced. A large number of approaches have been suggested by researchers for the design and study of control systems for linear systems. Traditional flight control systems are primarily classified as linear or nonlinear. This categorization is often based on the rotor-craft model expression provided by the controller. Linearization designs are more application-focused and have been used on the majority of helicopter models. Their appeal derives from the ease of control, which reduces both computation cost and duration of the project. In general, most control systems are based on the broadly established idea of stabilization derivatives, utilizing a linear system of helicopter dynamics. However, a substantial study has been carried out in recent years on non-linear dynamic formulations in the context of helicopter control flight. The concepts of nonlinear controllers are mostly assessed for their conceptual framework to the problem of helicopter navigation. Their application remains a major issue, mostly because of the control system's increasing order and complex nature. Its contribution, however, is crucially important to understand the constraints and possibilities of helicopter navigation. A linear Multiple-Input Multiple-Output (MIMO) coupled helicopter model serves as the foundation for the linear controller architecture. The internal model method and integral control design are two common design strategies for dealing with the trajectory tracking of linear systems. The proposed control method has the drawback of being complicated to build, whereas integral control is limited to instances where the reference output is a continuous signal. The key principle underlying the linear controller design is to identify the desired state vector for each of these two subsystems, such that when the helicopter status variables converge with their intended state values, the tracking error asymptotically converges to zero. For each subsystem, the desired state vectors are components and higher derivatives of the reference output vectors. The linear H$\infty$ control theory is used for a linear helicopter model such as the one done. 
However, control laws based on linear helicopter dynamics is not globalized since it shows desired behavior just around a region of operation. This has led to a large number of studies using non-linear control approaches to implement dynamic helicopter models. The feedback linearity control for trajectory tracking was implemented based on a lower order component of the Lagrangian helicopter model \cite{vilchis2003nonlinear}. Because of its highly cross-coupling nature of single rotor small scale helicopters (SRSSH), usually, a MIMO approach is implemented \cite{koo1998output, mahony1999hover}. H$\infty$ method is also used in \cite{la2003integrated,civita2006design} using a 30-state nonlinear model by an inner loop and outer loop technique. Sliding mode controller is also used for control of SRSSH \cite{khaligh2014control}. Controller design approaches ignore the multivariate character of rotor-craft dynamics as well as the strong link between rotorcraft variables and control inputs. In this sort of framework, each control input is in charge of regulating a single rotorcraft outlet.  interconnections between rotorcraft outputs are ignored, and each control input is linked to a SISO feedback loop. The SISO feedback mechanisms associated with the control inputs are totally independent of one another. The SISO feedback mechanisms are built using standard looping platforms \cite{walker1996advanced}. The amplitude and gain tolerances of a feedback loop determine the other's stability. These tolerances define the amount of amplitude and timing that the controller may inject to keep the feedback cycle dynamics constant. However, in the case of multivariable systems, these tolerances can readily lead to erroneous findings. An 11 state linear model was developed to examine the feedback controller features of the PID technique \cite{mettler1999system}. Based on the prediction error technique, a time-domain identification procedure was used to identify the set of parameters. The PID design proved unable to reduce the mutual coupling among helicopter's lateral and longitudinal movements, and the aircraft control system was confined to standstill flying. The obtained findings revealed that SISO strategies have mediocre reliability and that multidimensional procedures are essential to minimize the helicopter dynamics' intrinsic strongly coupled impact. Because of the lag time between the helicopter's translational and attitude subsystems, most linear control schemes employ a multi-loop control method \cite{kim2003flight, johnson2005adaptive, marconi2007robust}. Each input controls one helicopter output via a single-input single-output (SISO) feedback system, and the helicopter's attitude equations are separated from translational motion using two primary control loops. The slower outer-loop regulates the helicopter's heaving, longitudinal, and lateral movements by computing the needed collective input and attitude angles to guide the aircraft along its intended route. The basis inputs to the inner feedback loop are then these desirable attitude angles. The inner-loop is used to regulate the helicopter's attitude, which moves at a considerably quicker rate than the translational motion. A linearized model of the helicopter dynamics is used in the multi-loop approach and the cross-couplings between different DOFs are neglected. Since the cross-coupling dynamics are important, this often results in poor performance of the controller. 
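As an illustration only of the multi-loop idea described above (the gains, error signals, and the mappings $f_{out}$ and $f_{in}$ below are generic placeholders introduced for this sketch, not quantities taken from the cited works), the slower outer loop maps position errors to attitude and collective references, while the faster inner loop maps attitude errors to cyclic and pedal inputs:
\[
(\phi_{des},\ \theta_{des},\ \delta_{col}) = f_{out}\big(K_{p}\, e_{p} + K_{v}\, \dot{e}_{p}\big),
\qquad
(\delta_{lat},\ \delta_{lon},\ \delta_{ped}) = f_{in}\big(K_{\Theta}\, e_{\Theta} + K_{\omega}\, \dot{e}_{\Theta}\big),
\]
where $e_{p}$ denotes the position tracking error and $e_{\Theta}$ the attitude error with respect to the references $(\phi_{des}, \theta_{des}, \psi_{des})$ produced by the outer loop.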
To account for the cross-couplings that exist between different DOFs of the helicopter, a multi-input multi-output (MIMO) control approach has been used in recent years \cite{koo1998output,raptis2009system}. Koo et al. use the input-output feedback linearization technique to provide a MIMO solution for the control of small-scale helicopters. The helicopter dynamics are not linearized by the accurate input-output linear system, resulting in instability zero dynamics. The zero dynamics are then stabilized in the simulated world by ignoring the connections between moments and forces and utilizing approximate input-output linearization to obtain limited tracking. Instead of controllable inputs like the collective, cyclic, and pedal inputs, unrealistic control inputs like the gradients of the main and tail rotor thrust and the flapping angles are employed to describe the system \cite{koo1998output}. The influence of thrust force components associated with the primary rotor disc displacement is ignored by most nonlinear dynamic systems. These parasite forces have a minimum impact on movement dynamics. This is standard procedure. This approximation leads to several mathematical models with a response form appropriate for backstep control designs laid forth in \cite{krstic1995nonlinear} and numerous researchers used this procedure \cite{fantoni2002non,azzam2010quad,mahony2004robust}.  Mahony et al. described a MIMO strategy for controlling small-scale aircraft in hover using a backstepping mechanism \cite{mahony1999hover}. To do this, the flapping behaviors and friction forces are ignored, and the control design is based on a mathematical model of the helicopter dynamics around hover. In a study done by Raptis et al., a time-dependent backstepping approach is used to create a MIMO control scheme for a small-scale helicopter \cite{raptis2009system}. Simplifying hypotheses are used to generate the helicopter's dynamic model in a cascading design appropriate for the backstepping control scheme. For instance, in all aviation phases, induced velocity is considered to be constant and the impacts on the thrust computations of the vehicle velocity are disregarded so that main and tail rotors are respectively proportionate in proportion to the input of collectives and pedals. The main and tail rotors' drag torque is also disregarded. Another non-linear control scheme is given in a work by Godbolt et al. \cite{godbolt2013experimental} employing a cascade method. In order to unite attitude and movement dynamics, the internal loop control mechanism is utilized. The control design uses simplification principles. For example, due to the rigidity of the main rotor shaft, the contributions of the rolling and pitching moments to the fuselage dynamic attitude are ignored. Also, because of the rotor blowing in the translational dynamics, it neglects the influence of smaller body forces. A nonlinear control technique is then taken into account to offset the tail rotor's impacts to small friction forces. An H$\infty$ controller's usual construction consists of two components. The first element consists of Proportional Integral compensators and low pass filters in a manner similar to the traditional approaches of single input single output systems. The Proportional Integral compensators enhance the system's low-frequency gain, reducing disturbances, and attenuating steady-state error. The low pass filters are generally employed for noise reduction. 
The second element of the control is the H1 synthesis component, which is determined by a constant signal gain for stabilizing multi-functional dynamic response, as well as being appropriate for a performance criterion \cite{kim2003flight,khalil1996robust}. A single value loop forming process based on two degrees of H$\infty$ freedom was created in the research done by Walker et al. \cite{walker1996advanced} which is an observation basis multivariate controller. The controls were to build a complete autopilot system for a helicopter. The flying system is incorporated with piloted aviation operations, as opposed to automated flight technologies. The aim of the remote control is for the helicopter to monitor the pilot's control input and speed control. The control scheme is designed to eliminate the connection between axes of helicopter dynamics, therefore lowering the burden of the pilot. The pilot is alone responsible for generating the benchmark and high-speed controls that are required to move the aircraft. An innovative architecture of static H$\infty$ output controls was given to stabilize an autonomous helicopter in a hovering cite{gadewadikar2009h}. The optimum control technology makes it possible to devise multivariate feedback systems that enhance the rank of the control unit utilizing fewer states. The structure of the controller feedback loops coincided with the actual flight experience of the helicopter such that the controller's design was acceptable. The H$\infty$ control system form decreases the influence on high-frequency Helicopters of un-modeled dynamics. In a research by Kendoul et al. \cite{kendoul2007real} the control design for a Yamaha R-50 helicopter using H$\infty$ loop forming technology is provided. The control design is composed of non 30-state model of helicopter dynamics in an internal loop approach that is linearized by various operational positions in the desired trajectory envelope. Then an H$\infty$ loop-fitting controller is built to cover this required flying area based on the acquired linear models. The UAV control scheme is studied in \cite{kim2002nonlinear} for a non-linear trajectory tracking control. The non-linear model of helicopter dynamics is discretized and the tracking control issue is then formulated to reduce costs using a quickly converged steepest descent approach. The primary problems of application are the coordination of the cost weight matrix and the constants in the probability density. In the majority of situations, three nonlinear matrix expressions are required to solve the final loop control issue. In \cite{gadewadikar2008structured}, the H$\infty$ synthesis portion of the controller was resolved by solving just two paired matrix formulas that do not need the information of the initial stabilization gain. There are two principal loops in the control system framework. The first loop is capable of stabilizing the dynamic behavior of the arrangement, and the second loop is for position monitoring. A 13-state linear model of the coupling fuselage and rotor dynamic is the architecture of the control unit. The sequence and structure of the model were adopted in \cite{mettler2013identification}. In another study, Riccati Equation concept is provided \cite{bogdanov2007state}. The complicated dynamics of the helicopter are modified to a pseudo linear, state-dependent (SDC) coefficient and a feedback-optimum matrix is produced at all times by solving the LQR equation. 
Because there are many non-parametric terms in SDC form and the fact that the helicopter model is not aligned in terms of the control system, it is ignored to achieve a control-affine SDC helicopter dynamics framework necessary for SDRE control designs in certain non-linearity models. A non-linear compensation is then built to increase the control signal to roughly cancel ignored nonlinear effects. Owing to its resilience with boundary parameter uncertainties, the sliding mode controller can be another non-linear, small-scale, unmanned helicopter management MIMO method. A robust, nonlinear, sliding mode controller flight control is given in \cite{ifassiouen2007robust} for a compact, standalone hover helicopter. The dynamics of the nonlinear helicopter are initially oversimplified by disregarding the drag torque of the rotors and the rear and the connections of the aerodynamic forces and momentum. Then the linearized model is transformed for a squared model into a linear system. For input refined systems, untrue control inputs such as the rolling, pitch, and yaw moments are considered instead of the actual control inputs, and the gradient is considered to be the primary rotor thrust. The Translational Rate Control (a technique for a UAV is detailed in a study by Pieper et al. \cite{pieper1995application}) for another sliding mode controller approach in hovers. A fundamental, linearized model of the hovering helicopter and a Sliding mode controller is built to comply with the operating quality requirements for the Translational Rate Control hover control system. A reference model sliding mode controller design is detailed in a study by Wang et al. \cite{wang2008model} and a multi-loop control method is employed to regulate the hover of a UAV. The non-linear helicopter model is modeled linearly around the hover and the coupling movement of the helicopter is ignored, to treat every DOF as a self-contained SISO system. The PID technique is then developed for each of the longitudinal and the lateral controller designs and heavily loaded loops. Another sliding mode controller technique is presented in a research for controlling a UAV \cite{fu2012chattering}. In this method, the DOFs of helicopter movement are decoupled in these three principal feedback loops: position, speed, and orientation loop. To get an appropriate form for the sliding mode controller method, the Equations of each loop are simplified. For instance, for the Euler angles in the speed cycle, the small-angle presumption is utilized to linearize the equations and get an input-affined shape. A sliding mode controller for each loop is then designed. A small-scale autonomous helicopter group control is presented adopting a sliding mode controller approach \cite{fahimi2008full}. In order to produce arbitrary tri-dimensional formations, a sliding mode controller is established for each technique, and the training will be maintained by two leaders/follower controllers. The rotor's flapping complexities are ignored and unrealistic control inputs including the main and tail rotor thrust and pitch and roller moments are applied to describe the system in an input-affine manner instead of actual controlled inputs. The square shape is then exploited to get the control design using a reference points technique. The aerodynamics of the helicopter is separated into three components with slow, medium, and rapid modes with a multiple time control based on the technique of the slider mode controller \cite{xu2010multi}. 
In the latter multiple-time-scale design, the nonlinearities of the main and tail rotor inflow are neglected in all flight regimes and the induced velocity is assumed to be constant. A nonlinear controller is constructed by designing a sliding mode controller for each mode, and simulation results are provided. Nevertheless, the slow-mode controller requires an iterative procedure for the control inputs, which may yield a non-unique solution. It is vital that the control architecture be sufficiently robust, since the helicopter is subject to considerable uncertainty. A design that guarantees bounded tracking in the presence of parametric and model uncertainty is given in \cite{isidori2003robust}. The suggested control scheme combines stabilizing strategies for feedback systems with input saturation and adaptive nonlinear output-feedback control techniques. In other studies, the helicopter model consists of the rigid-body equations of motion augmented by a model of the aerodynamic force and torque generation. This nonlinear model of the helicopter dynamics is presented in \cite{koo1999differential} and has been used in most studies on nonlinear helicopter controller design. Exact input-output linearization fails for this helicopter model, as it leads to unstable zero dynamics. It has also been shown that an approximate model, which neglects the thrust forces created by the main rotor flapping motion, can be fully linearized. In \cite{koo1998output}, approximate input-output linearization was used to obtain a helicopter system that is dynamically linear, has no zero dynamics, and possesses the required well-defined relative degree. The problem of landing a helicopter on an oscillating ship deck has been successfully controlled using such a model representation \cite{isidori2012robust}. In \cite{kadmiry2004fuzzy}, the design of a hovering flight controller for the unmanned APID-MK3 helicopter is described using a distinct, fuzzy-control approach. In the literature, the majority of control schemes, both multi-loop and MIMO, rely on linear models of the helicopter obtained at various trim conditions instead of using the nonlinear model directly. This confines the validity of each linear model to a neighborhood of its trim condition. Several linear models are therefore necessary to cover a variety of flight regimes, and gain-scheduled controllers are required across these regimes to control the helicopter \cite{downs2007control}. Aerodynamic forces and moments fluctuate substantially across different flight conditions owing to the complicated aerodynamics of helicopter thrust generation, so approximations based on linearization and/or the rejection of nonlinear components are not desirable for managing an autonomous helicopter over a broad range of flight phases \cite{raptis2011linear}. The issue with optimal control methods is that they all require knowledge of the vehicle dynamics, which in turn requires system identification and model derivation for each UAV. Depending on the task, this can become tedious, if not impossible. Notably, the final control system is a one-of-a-kind solution to a specialized problem. Such strategies may be insufficient to deal with changing conditions, unanticipated events, and stochastic environments \cite{zhou2019vision}. Earlier approaches to nonlinear control using neural networks and nonlinear inversion were published in \cite{johnson2005adaptive}.
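The idea behind such neural-network-augmented inversion schemes can be sketched as follows; this is a schematic form only, not the exact law used in the works cited above. An approximate model $\dot{x} = \hat{f}(x,u)$ is inverted to compute the actual input from a pseudo-control $\nu$,
\[
u = \hat{f}^{-1}(x,\nu), \qquad \nu = \nu_{\mathrm{rm}} + \nu_{\mathrm{pd}} - \nu_{\mathrm{ad}},
\]
where $\nu_{\mathrm{rm}}$ is the reference-model term, $\nu_{\mathrm{pd}}$ is a linear tracking-error term, and $\nu_{\mathrm{ad}}$ is the output of a neural network trained online to cancel the inversion error $\Delta = f(x,u) - \hat{f}(x,u)$ between the true and approximate dynamics.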
Other nonlinear control approaches have also been presented. In all of these cases, the requirements of nonlinear inversion and the addition of a neural network raise the controller order substantially, so that deriving the controller directly from the helicopter's nonlinear governing equations becomes impractical. Consequently, these works base the developed controllers on linearized helicopter dynamics. In the research of Hovakimyan et al. \cite{hovakimyan2001adaptive}, the reduced model captures only the heave and longitudinal motion of the helicopter, which restricts it further. To obtain adequate performance, the control strategies presented in the studies above require accurate knowledge of the dynamic models involved. The question is how to handle unforeseen deviations from the nominal model during helicopter operations. Such deviations typically involve parametric and structural uncertainty, unmodelled dynamics, and environmental disturbances. The existence of uncertainties and external disturbances can disrupt the feedback controller's operation and lead to significant performance degradation. Approximation approaches using artificial neural networks (NN) have been suggested to address the presence of model uncertainty. In \cite{kim2004adaptive}, NN-augmented approximate dynamic inversion was presented, while in \cite{enns2003helicopter} neural dynamic programming was shown to be beneficial for the tracking and trim control of the helicopter. On this basis, the following question is posed: what if the vehicle could teach itself how to perform a task optimally without using a model? This leads to the next section on reinforcement learning.

\section{The Use of Reinforcement Learning as an Optimal Control Method}
\label{intro_rl}
Artificial intelligence (AI) has recently driven breakthroughs in various industries worldwide, ranging from engineering to medical services. Recent advancements in computer technology and data storage, along with AI's learning capacities, have propelled AI to the forefront of numerous applications, such as object recognition and natural language processing. AI is expected to contribute more than 15 trillion USD to the global economy while increasing GDP by 26\% by 2030. Overall, artificial intelligence is a powerful tool that underpins many of today's scientific achievements \cite{anand2019s}. Machine learning (ML) is arguably the most significant branch of AI. It is described as the ability of computer systems to learn without the need for continuous human control \cite{pandey2021machine}. The area of machine learning may be divided further into supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. The term ``supervised learning'' refers to the setting in which the ``experience,'' i.e. the training examples, provides essential information that is absent from the unknown ``test examples'' to which the learned knowledge is to be applied. This additional information is provided by an expert. The learner tries to generalize across experiences and then applies this knowledge to predict labels for the test examples \cite{shalev2014understanding}. Since the agent tries to mimic the expert, it will not reproduce exactly the same responses as the expert; this error is referred to as the Bayes error rate \cite{ng2017machine}. In unsupervised learning, there is no distinction between training data and test data. A typical example of such a task is clustering a data collection into subgroups of related objects.
Semi-supervised learning is a combination of supervised and unsupervised learning: during training it mixes a small quantity of labeled data with a large amount of unlabeled data, which can improve learning accuracy. Ideally, supervised or semi-supervised learning can completely replicate the supervisor; however, it cannot outperform the supervisor in terms of outcomes. Reinforcement learning (RL) attempts to resolve this dilemma by changing the learning process substantially. Ultimately, the objective of RL is to enable machines to outperform existing approaches: the RL agent tries to achieve a better result than the currently feasible ones by learning the best mapping of states to actions, using a reward signal as the criterion. RL methods allow a vehicle to discover an optimal behavior on its own through trial-and-error interactions with its surroundings. This is based on the commonsense idea that if an action results in a satisfactory or better situation, the tendency to perform that action in the initial situation is \textit{reinforced}. From an engineering standpoint, RL closely resembles classical optimal control theory \cite{sutton2018reinforcement}. Both frameworks deal with the problem of determining an input (the optimal controller in control theory, the optimal policy in RL) that solves an optimization problem. Furthermore, both rely on the system being described by an underlying set of states, actions, and a model that captures the transitions from one state to another. RL can therefore tackle the same problem that optimal control does \cite{nian2020review, powell2012ai}. However, because the agent does not have access to the state dynamics, it must learn the repercussions of its actions via trial and error while interacting with the environment. Although there have been some recent achievements in model-based RL \cite{kaiser2019model}, most RL algorithms are model-free. They attempt to control without knowledge of a dynamic model; in other words, the agent only receives the current state\footnote{in the fully observable Markov decision process (FOMDP); in the partially observable Markov decision process, a history of states is required in each step.} and a reward from the environment (the helicopter in this case) in each step. This framework has received much attention in recent years, with promising outcomes in a range of domains, including outperforming human specialists on Atari games \cite{mnih2013playing} and Go \cite{silver2017mastering}, and replicating complex helicopter maneuvers \cite{abbeel2007application, ng2006autonomous, ng2003autonomous}. A remarkable range of robotics challenges can be conveniently formulated as reinforcement learning problems, dating back to 1992, when the OBELIX robot was trained to push objects \cite{mahadevan1992automatic}; later, a model-free policy gradient technique was used to teach a Zebra Zero robot arm how to perform a peg-in-hole insertion task \cite{gullapalli1994acquiring}. Recently, RL-based UAV control has received a lot of interest. The initial research relied on an engineered reward function: a model of the robot dynamics was developed through demonstration and then employed in simulation to predict the robot state, while RL was used to optimize a NN controller for autonomous helicopter flight \cite{bagnell2001autonomous} or an inverted helicopter maneuver \cite{ng2006autonomous}. However, defining the reward function can be an arduous task.
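The model-free interaction described above can be summarized with a short sketch. The toy one-dimensional environment and the random policy below are placeholders assumed purely for illustration (a Gym-style reset/step interface); they are not the helicopter simulator or the learning algorithm used in this thesis.
\begin{verbatim}
import random

class ToyHoverEnv:
    """1-D toy problem: the state is an altitude error to be driven to zero."""
    def __init__(self):
        self.actions = [-1.0, 0.0, 1.0]          # crude collective commands

    def reset(self):
        self.error, self.t = 5.0, 0
        return self.error

    def step(self, action):
        # simple first-order response plus noise; the agent never sees this model
        self.error += -0.1 * action + random.gauss(0.0, 0.05)
        self.t += 1
        reward = -abs(self.error)                # only a reward signal is exposed
        done = self.t >= 200
        return self.error, reward, done

class RandomAgent:
    """Stands in for any policy; a real agent would learn from the transitions."""
    def __init__(self, actions):
        self.actions = actions

    def act(self, state):
        return random.choice(self.actions)

    def learn(self, state, action, reward, next_state, done):
        pass                                      # policy/value update goes here

env = ToyHoverEnv()
agent = RandomAgent(env.actions)
state, done, episode_return = env.reset(), False, 0.0
while not done:                                   # trial-and-error interaction
    action = agent.act(state)
    next_state, reward, done = env.step(action)
    agent.learn(state, action, reward, next_state, done)
    episode_return += reward
    state = next_state
print("episode return:", round(episode_return, 1))
\end{verbatim}
A real agent would replace the random action choice and the empty learning step with an actual policy update; hand-specifying the reward line in such a loop is precisely the difficulty noted above.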
One solution is to utilize an expert and reward the helicopter for emulating the expert's behavior; Abbeel et al. used this approach to perform aerobatic helicopter flight \cite{abbeel2007application}. In recent years, deep learning has been shown to advance the RL field \cite{li2017deep}. Deep learning relies on the powerful function approximation properties of neural networks, which can automatically find compact low-dimensional representations of high-dimensional data (e.g., images). This has enabled reinforcement learning methods to scale up to previously unreachable problems. Deep reinforcement learning has also gained attention recently in UAV control. Koch et al. \cite{koch2019reinforcement} compared the Deep Deterministic Policy Gradient (DDPG) \cite{lillicrap2015continuous}, Trust Region Policy Optimization (TRPO) \cite{schulman2015trust} and Proximal Policy Optimization (PPO) \cite{schulman2017proximal} algorithms on the Iris quadcopter and then compared the results to a PID controller. Although TRPO and DDPG failed to reach stability, they showed that PPO performs well enough to be comparable to a PID controller. Barros and Colombini \cite{barros2020using} also showed that the Soft Actor-Critic (SAC) method \cite{haarnoja2018soft} can perform low-level control of a commercial quad-rotor, the Parrot AR Drone 2.0. However, there is still a lack of research on small-scale single-rotor helicopters.

\section{Simulation Environment for RL}
In RL, the amount of trial and error required to learn beneficial actions is usually high. As a result, sampling the environment is the primary challenge of reinforcement learning. One way to approach this is to run several similar real-world environments in parallel \cite{levine2018learning}. In the case of a UAV, however, failure means the loss of the vehicle and is hence costly. This problem is exacerbated by several real-world factors that make UAVs a problematic domain for RL \cite{kober2013reinforcement}. UAVs are dangerous and costly to run during the initial training, since the aircraft will fail several times until it reaches a satisfactory performance; this incurs high maintenance costs in addition to the original hardware expenditure. Moreover, robots have continuous, high-dimensional state and action spaces and, finally, require a fast online response. As a result, the use of a simulation environment seems necessary for the initial learning procedure of an RL algorithm. To compensate for the expense of real-world interactions, the UAV must first learn the behavior in simulation and then transfer it to the real vehicle. Using a simulator provides an affordable way to generate samples. In a simulation, it is possible to crash the UAV as many times as needed; in addition, no safety measures must be taken, and there is no lag in the process due to maintenance or other real-world issues. Simulations are also more reproducible; for example, wind gusts are not easy to reproduce in the real world, while in simulation the random wind-gust model can be saved and reused elsewhere. The issue with using a simulation environment is that none of them can completely capture real-world complexity. When a policy is trained in simulation, it usually is not optimal to use in the real world \cite{zhao2020sim}.
One possible solution is to initially train the policy in simulation and then perform tuning in the real world \cite{tran2015reinforcement, tzeng2015simultaneous}.\\

\section{Thesis Objective and Outline}
In this research, we wish to expand on recent research in RL, especially Deep Reinforcement Learning (DRL), to control a SRSSH. More precisely, low-level control laws are learnt directly from the UAV simulation. Notably, the purpose of this thesis is only to train the DRL technique in a simulated setting and to provide evidence that the aforementioned method is capable of stabilizing the unmanned small-scale helicopter in an acceptable way, leaving the examination of transfer to the real world and the production of more complicated maneuvers to future work. The outline of this thesis is given in the following.

\subsection{Chapter 2: Reinforcement Learning Background}
A wrong choice of RL method or of its hyper-parameters can make it time-consuming or even impossible to reach good stability of the UAV, because extracting acceptable policies generally necessitates an extensive exploration of the state space. The second chapter therefore reviews reinforcement learning methods. By providing a mathematical framework and describing the essential components, this chapter gives a formal introduction to RL. Following that, the chapter provides an overview of value-based and policy-based methods. Finally, the chapter introduces the DRL algorithm SAC, which will subsequently be used for UAV control.

\subsection{Chapter 3: Simulation environment}
This chapter introduces the simulated environment used for interaction with the RL method. First, the helicopter dynamics are discussed, including the forces applied to the UAV, such as the fuselage and main rotor forces, and their effect on the six-degrees-of-freedom (DOF) UAV. In addition, a traditional control approach, more specifically a sliding mode controller, is introduced in order to compare the result of the RL policy with an optimal-control-theory method. Finally, the environment setup is discussed, including the actions and rewards in the RL platform.

\subsection{Chapter 4: Result and discussion}
This chapter contains the results of applying the RL algorithm in the simulated environment, as well as a discussion comparing the obtained policy with the one generated by the sliding mode controller. In addition, the effect of disturbances on the controller is discussed.

\subsection{Chapter 5: Conclusion and future work}
The conclusion and recommendations for future work are given in the final chapter.
\section{Introduction}
Finite element simulations of heterogeneous materials constructed using classical finite elements frequently have difficulty capturing the internal stress-strain behavior, which can be a primary driving force in the overall material response. Of particular interest are the responses of highly heterogeneous materials such as polymer-bonded crystalline materials, which are comprised of a hard crystalline component bound in a polymeric matrix. These materials can be found in applications as varied as asphalt or plastic explosives. Several approaches have been developed to address these concerns, but few approach the problem with the generality of Eringen and Suhubi (\cite{bib:eringen64} and \cite{bib:eringen64_2}) in their theory of ``micromorphic'' continuum mechanics. This approach holds the promise of passing information from microscale simulations to the macroscale response through a straightforward coupling, as well as the ability to develop constitutive models which can capture this response directly using the micromorphic theory alone. The framework, however, requires specialized finite elements which are capable of handling its multi-field nature. Such a finite element is proposed to be developed here, and it will prove foundational for research into the development and calibration of higher-order continuum models. While these models are beyond the scope of the efforts detailed here, the framework to utilize them is a significant part of the overall effort. A robust, highly verified, and documented element would provide a very useful tool in the research effort.
\documentclass[11pt, english]{article}
\usepackage{graphicx}
\usepackage[colorlinks=true, linkcolor=blue]{hyperref}
\usepackage[english]{babel}
\selectlanguage{english}
\usepackage[utf8]{inputenc}
\usepackage[svgnames]{xcolor}
\usepackage{listings}
\usepackage{afterpage}
\pagestyle{plain}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
%\lstset{language=R,
%    basicstyle=\small\ttfamily,
%    stringstyle=\color{DarkGreen},
%    otherkeywords={0,1,2,3,4,5,6,7,8,9},
%    morekeywords={TRUE,FALSE},
%    deletekeywords={data,frame,length,as,character},
%    keywordstyle=\color{blue},
%    commentstyle=\color{DarkGreen},
%}
\lstset{frame=tb,
  language=R,
  aboveskip=3mm,
  belowskip=3mm,
  showstringspaces=false,
  columns=flexible,
  numbers=none,
  keywordstyle=\color{blue},
  numberstyle=\tiny\color{gray},
  commentstyle=\color{dkgreen},
  stringstyle=\color{mauve},
  breaklines=true,
  breakatwhitespace=true,
  tabsize=3
}
\usepackage{here}
\textheight=21cm
\textwidth=17cm
%\topmargin=-1cm
\oddsidemargin=0cm
\parindent=0mm
\pagestyle{plain}
%%%%%%%%%%%%%%%%%%%%%%%%%%
% The following instructions set the year automatically %
%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{color}
\usepackage{ragged2e}
\global\let\date\relax
\newcounter{unomenos}
\setcounter{unomenos}{\number\year}
\addtocounter{unomenos}{-1}
\stepcounter{unomenos}
\begin{document}
\begin{titlepage}
\begin{center}
\vspace*{-1in}
\begin{figure}[htb]
\begin{center}
\end{center}
\end{figure}
\begin{large}
\textbf{LOG ME}\\
\end{large}
\vspace*{0.2in}
\begin{Large}
\end{Large}
\vspace*{0.3in}
\begin{large}
\\
\end{large}
\vspace*{0.3in}
\rule{80mm}{0.1mm}\\
\vspace*{0.1in}
\begin{large}
Made by: \\
Athira Nair K\\
\end{large}
\end{center}
\end{titlepage}
\newcommand{\CC}{C\nolinebreak\hspace{-.05em}\raisebox{.4ex}{\tiny\bf +}\nolinebreak\hspace{-.10em}\raisebox{.4ex}{\tiny\bf +}}
\def\CC{{C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}}
\tableofcontents
\newpage

\section{Introduction}
This paper gives an overview of Log-Me, a login application that lets a user log in to his/her private page, or register if he/she is a new user. Special consideration is given to security: the entered password is checked against the one stored in the database, and only if the two match is the user allowed to log in. Afterwards, a page displaying ``Hello'' followed by the username is shown.

\subsection{Scope}
Log-Me is a mini web application that lets a user log in if he/she is an existing user, or register if he/she is a new user. Registration is made more secure by enforcing password rules: common or easily typed passwords such as ``qwerty'' or ``123'', as well as passwords identical to the user name, are not accepted. An email is sent to the user via a mail server before he/she can log in.

If these steps succeed, the user is directed to a page displaying ``Hello'' with the username. Log-Me can be used as a platform by most applications to create their login feature, as it follows a secure format for user login.

\subsection{Definitions}
\begin{itemize}
\item Services: functionalities offered by the application.
\item Validation: checks the validity of the user.
\item Update: adding new information and changing previous information.
\item Push notifications: messages that are received in the mail.
\end{itemize}

\subsection{Acronyms}
\begin{itemize}
\item DD: Design Documentation
\item API: Application Programming Interface
\item DBMS: Database Management System
\end{itemize}
\subsection{Document Structure}
\begin{itemize}
\item Chapter 1: Introduction
\item Chapter 2: Architectural Design
\item Chapter 3: User Interface Design
\item Chapter 4: Requirement Traceability
\item Chapter 5: Implementation and Test Plan
\item Chapter 6: Effort Spent
\item Chapter 7: References
\end{itemize}

\section{Architectural Design}
\subsection{Overview}
The whole project is divided into five classes:
\begin{itemize}
\item \textbf{Admin:} handles all administrative work and has the power to add or remove users.
\item \textbf{Login class:}
  \begin{itemize}
  \item The user is asked to enter a username and password to log in; hence the two attributes used here are the user ID and the password.
  \item If the entered details are incorrect (checked by the validate-login method), the user has the option to reset the password.
  \item Sign-up is meant for registering a new user.
  \item Another option allows resetting the password, where the user is asked to enter a new password; methods such as delete-changed-password and add-new-password are included. The sign-up and reset-password pages are linked from the login page.
  \end{itemize}
\item \textbf{Sign up:}
  \begin{itemize}
  \item Attributes such as the username and password are used here.
  \item A register() method connects to the database and creates a new token. When a user is registered, the email\_confirmed column in the database is set to False.
  \item A confirmation email contains a unique URL, and each URL carries a token.
  \end{itemize}
\item \textbf{Reset password:}
  \begin{itemize}
  \item A reset request is sent to the given account's email address.
  \item When the user clicks the link, the application checks which email to confirm and sets the email\_confirmed column to True. This is the functionality of valid\_mail().
  \item The user's password is then updated in the password column and the old one is deleted.
  \item API calls are used here to deliver the login link to the user's Gmail/Outlook mailbox.
  \end{itemize}
\item \textbf{Display:} displays ``Hello'' with the username via the gethello() method.
\end{itemize}

\subsection{Class Diagram}
\includegraphics[width=200mm,scale=5.0]{ClassDiagram.png}

\subsection{Flow Chart}
\begin{center}
\includegraphics[width=100mm,scale=7.0]{Blank Diagram.png}
\end{center}

\end{document}
\chapter{Technology - Wider Context}
\section{Considerations on the challenges when building distributed systems}
When creating distributed systems, different challenges had to be overcome by developers in order to develop such systems:
\begin{itemize}
    \item Inherent complexities: inter-node communication needs different mechanisms, policies and protocols than intra-node communication. Synchronization and coordination are more complicated, and problems such as latency, jitter, transient failures and overload also have to be considered.
    \item Accidental complexities: these happen because of limitations of software tools and development techniques. Most of them result from choices made at the time the system was built.
    \item Inadequate methods and techniques: for a long time, software development studies and practices have focused on single-process, single-threaded applications, which has led to a lack of expertise in software techniques for distributed systems.
    \item Continuous re-invention and re-discovery of core concepts and techniques: for a long time, effort has been spent solving the same problems over and over again instead of developing techniques that would facilitate the reuse of code across different platforms, which left less time available to invest in innovation.
\end{itemize}

\section{Different Distributed Computing technologies}
\subsection{Ad hoc Network Programming}
Ad hoc network programming was one of the first approaches that allowed the development of distributed computing systems. It used Interprocess Communication (IPC) mechanisms such as shared memory, pipes and sockets, which led to tight coupling between the application code and the socket API, as well as to paradigm mismatches, since local communication uses object-oriented techniques while remote communication uses function-oriented techniques.

\subsection{Structured communication}
This technology was an improvement over ad hoc programming, since it does not couple application code to low-level IPC mechanisms; it offers higher-level communication to distributed systems, encapsulates machine-level details, and embodies types and a communication style closer to the application domain.

One particular example of this technology are Remote Procedure Call (RPC) platforms. They have the following characteristics:
\begin{itemize}
    \item Allow distributed applications to cooperate with one another by:
    \begin{itemize}
        \item Invoking functions on each other
        \item Passing parameters along with each invocation
        \item Receiving results from the function that was invoked
    \end{itemize}
\end{itemize}
With this technique, however, components are still aware of their peers' remoteness, and it therefore does not fulfil the following distributed systems requirements:
\begin{itemize}
    \item Location-independence of components
    \item Flexible component deployment
\end{itemize}

\subsection{Middleware}
Middleware is distribution infrastructure software that acts as a bridge between the remote system and the application. It resides between an application and the operating system, network or database. Different types of middleware have appeared over time; they are listed below.

\subsubsection{Distributed Object Computing}
Also known as DOC, it represents the confluence of two major information technologies: RPC-based distributed computing systems and object-oriented design and programming. CORBA 2.x and Java RMI are examples of DOC.
They focus on interfaces, which are contracts between clients and servers that define a location-independent means for clients to view and access object services provided by a server. Such technologies also define communication protocols and object information models to enable interoperability.

Their key limitations are:
\begin{itemize}
    \item Lack of functional boundaries: this type of middleware treats all interfaces as client/server contracts and does not provide standard assembly mechanisms to decouple dependencies among collaborating object implementations. This forces applications to explicitly discover and connect to objects that are merely dependencies of the object the application is actually trying to reach.
    \item Lack of software deployment and configuration standards: this results in systems that are harder to maintain and software component implementations that are much harder to reuse.
\end{itemize}

\subsubsection{Component middleware}
This technology emerged to address the following limitations of DOC:
\begin{itemize}
    \item Lack of functional boundaries: component middleware allows a group of cohesive component objects to interact with each other through multiple provided and required interfaces, and defines standard runtime mechanisms needed to execute these component objects in generic application servers.
    \item Lack of standard deployment and configuration mechanisms: component middleware specifies the infrastructure to package components, which makes it easier to customize, assemble and disseminate them throughout a distributed system.
\end{itemize}
Examples of this technology are Enterprise JavaBeans and the CORBA Component Model. The technology is built on the following definitions:
\begin{itemize}
    \item A component is an implementation entity that exposes a set of named interfaces and connection points that components use to communicate with each other.
    \item Containers provide the server runtime environment for component implementations. A container contains various pre-defined hooks and operations that give components access to strategies and services such as persistence, event notification, transactions, replication, load balancing and security. Each container is also responsible for initializing the managed components and providing their runtime contexts.
\end{itemize}
Component middleware also automates aspects of various stages of the application development lifecycle, such as component implementation, packaging, assembly and deployment. This approach enables developers to create applications more rapidly and robustly than with DOC.

\subsubsection{Publish/Subscribe and Message-Oriented Services}
RPC, DOC and component middleware are all based on request/response communication.
Aspects of this type of communication are:
\begin{itemize}
    \item Synchronous communication: the client waits for a response and blocks the rest of its processing
    \item Designated communication: the client must know the identity of the server, which leads to tight coupling between the application code and the API
    \item Point-to-point communication: the client talks to just one server at a time
\end{itemize}
To fix the problems that this type of communication created for some systems, the following alternatives were created:
\begin{itemize}
    \item Message-Oriented Middleware: applied in technologies such as IBM MQSeries, BEA MessageQ and TIBCO Rendezvous
    \item Publish/Subscribe middleware: applied in technologies such as the Java Message Service (JMS), the Data Distribution Service (DDS) and WS-Notification
\end{itemize}
Message-oriented middleware provides:
\begin{itemize}
    \item Support for asynchronous communication
    \item Transactional properties
\end{itemize}
On top of that, publish/subscribe middleware also allows:
\begin{itemize}
    \item Anonymous communication (loose coupling)
    \item Group communication
\end{itemize}
In the publish/subscribe paradigm:
\begin{itemize}
    \item Publishers are the sources of events. They may sometimes need to describe the type of events they generate
    \item Subscribers are the event sinks; they consume data on topics of interest to them and may sometimes need to declare filtering information
    \item Event channels are the components in the system that propagate events. They can perform services such as filtering and routing, Quality of Service enforcement and fault management
\end{itemize}
In these types of systems, events can be represented in many ways, and the interfaces can be generic or specialized.

\subsubsection{Service-Oriented Architectures and Web Services}
Service-Oriented Architecture (SOA) is a style of organizing and utilizing distributed capabilities that may be controlled by different organizations and owners. It provides a uniform means to offer, discover, interact with and use the capabilities of loosely coupled and interoperable software services to support the requirements of business processes and application users.

With the emergence of this technology and the increasing popularity of the World Wide Web, SOAP was created. It is a protocol for exchanging XML-based messages over a network, normally using HTTP. SOAP spawned a popular new variant of SOA called Web Services. It allows developers to package application logic into services whose interfaces are described with the Web Services Description Language (WSDL). WSDL-based services are often accessed using higher-level Internet protocols. They can also be used to build an Enterprise Service Bus (ESB), which is a distributed computing architecture that simplifies interworking between disparate systems.

Web services have established themselves as the technology of choice for most enterprise business applications. They complement earlier successful middleware technologies and provide standard mechanisms for interoperability. Examples of Web Services technologies are:
\begin{itemize}
    \item Microsoft Windows Communication Foundation (WCF)
    \item Service Component Architecture (SCA)
\end{itemize}
Web services combine aspects of component-based development and Web technologies. They provide black-box functionality that can be described and reused without concern for how a service is implemented, and they are not accessed using object-model-specific protocols, but rather using Web protocols and data formats.
Web service technologies focus on middleware integration and allow component reuse across an organization's entire application set, regardless of the technologies those applications are implemented in. In general, web services make it relatively easy to reuse and share common logic with clients as diverse as mobile, desktop and web applications.

The broad reach of web services is possible because they rely on open standards that are ubiquitous, interoperable across different computing platforms and independent of the underlying execution technologies. All web services use HTTP and leverage data-interchange standards like XML and JSON, as well as common media types, but they differ in the way they use HTTP:
\begin{itemize}
    \item as an application protocol that defines standard service behaviours
    \item as a pure transport mechanism to convey data
\end{itemize}
In web services, there are two prominent design models:
\begin{itemize}
    \item SOAP
    \begin{itemize}
        \item Language, platform and transport independent
        \item Works well in distributed enterprise environments
        \item Provides significant pre-built extensibility in the form of the WS* standards
        \item Built-in error handling
        \item XML message format only
        \item Automation when used with certain language products
        \item Uses WSDL as a description language
    \end{itemize}
    \item REST
    \begin{itemize}
        \item Uses easy-to-understand standards like Swagger and the OpenAPI Specification 3.0
        \item Can use different message formats, such as JSON
        \item Fast (no extensive processing required)
        \item Close to other Web technologies in design philosophy
        \item Uses WADL as a description language
    \end{itemize}
\end{itemize}
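To make the REST side of this comparison concrete, the sketch below retrieves a JSON representation of a resource over plain HTTP. The endpoint URL is hypothetical and only the Python standard library is used; it illustrates the style rather than the API of any specific product mentioned above.
\begin{verbatim}
# Minimal sketch of a REST-style request: a resource identified by a URL,
# a standard HTTP verb, and a JSON representation in the response.
# The endpoint is hypothetical; only the Python standard library is used.
import json
import urllib.request

url = "https://api.example.com/v1/orders/42"        # hypothetical resource URL

request = urllib.request.Request(
    url,
    headers={"Accept": "application/json"},         # ask for a JSON media type
    method="GET",                                    # HTTP verb as the contract
)

with urllib.request.urlopen(request) as response:
    order = json.loads(response.read().decode("utf-8"))

print(order)
\end{verbatim}
An equivalent SOAP interaction would instead wrap the request in an XML envelope described by a WSDL contract, which illustrates the extra structure (and extensibility) that SOAP brings compared to the lighter REST style.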
% !TeX root = RJwrapper.tex \title{Capitalized Title Here} \author{by Author One, Author Two and Author Three} \maketitle \abstract{ An abstract of less than 150 words. } \section{Section title in sentence case} Introductory section which may include references in parentheses \citep{R}, or cite a reference such as \citet{R} in the text. \section{Another section} This section may contain a figure such as Figure~\ref{figure:rlogo}. \begin{figure}[htbp] \centering \includegraphics{Rlogo-5} \caption{The logo of R.} \label{figure:rlogo} \end{figure} \section{Another section} There will likely be several sections, perhaps including code snippets, such as: \begin{example} x <- 1:10 result <- myFunction(x) \end{example} \section{Summary} This file is only a basic article template. For full details of \emph{The R Journal} style and information on how to prepare your article for submission, see the \href{https://journal.r-project.org/share/author-guide.pdf}{Instructions for Authors}. \bibliography{RJreferences} \address{Author One\\ Affiliation\\ Address\\ Country\\ (ORCiD if desired)\\ \email{author1@work}} \address{Author Two\\ Affiliation\\ Address\\ Country\\ (ORCiD if desired)\\ \email{author2@work}} \address{Author Three\\ Affiliation\\ Address\\ Country\\ (ORCiD if desired)\\ \email{author3@work}}
\chapter{\label{cpt:conclusion}Conclusion}

During this internship, I created a simple framework for Internet discussion analysis and visualization. Along with the framework, I provide three modules that illustrate the possibilities within the framework. Both the modules and the framework are extensible or replaceable in various ways.

The current visualization has a strong emphasis on stories as entities. To get a visualization of all activity on the Internet, a sample should be taken, and the orbiting particles will no longer be entities on their own; rather, they turn into classes of entities, which probably calls for a more cloud-like visualization. One of the ideas is to change Ambient Earth into Ambient Sun, where the Sun's corona (the brightly shining part which is visible during a total solar eclipse) takes over the role of the particles orbiting Earth.

\section{Unimplemented parts}

It is not yet possible to ``travel through the past'', as was an initial idea. This feature would enable a user to set a certain time in the past and see the situation at that point. A very simple approach would be to create screenshots of the display for every day and then display the desired one.

\section{Points for expansion}

The nice thing about using a message and whiteboarding system like Psyclone is that every component can be replaced fairly easily. Currently, there is only one analysis module, which cannot make very intelligent decisions. The fact that this is a project within the AI department makes it likely that someone will implement a smarter analysis module which can run in place of or alongside the current one.

It is currently only possible to get stories from RSS feeds, but another module could be created which gets information out of newsgroups. A lot of AI discussion is going on on Usenet, so this might also be an interesting expansion of the system. If the applet is to be used in a forum, a module with direct (read-only) database access could be created. The applet could also be used in software development, displaying for instance build status, bug reports assigned to programmers, server load, etc.

Finally, the visualization itself can be replaced by another one; one of the ideas is to make it look more like the Sun's corona, expanding it in those directions where much discussion is going on and shrinking it where discussion is quiet.
% -*- ess-noweb-default-code-mode: haskell-mode; -*-% ===> this file was generated automatically by noweave --- better not edit it \documentclass[nobib]{tufte-handout} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[american]{babel} \usepackage{blindtext} \usepackage[style=alphabetic,backend=biber]{biblatex} \usepackage{csquotes} \addbibresource{nix-info.bib} \usepackage{noweb} \usepackage{color} % https://commons.wikimedia.org/wiki/File:Erlang_logo.svg \definecolor{ErlangRed}{HTML}{A90533} \usepackage{hyperref} \hypersetup{ bookmarks=true, pdffitwindow=true, pdfstartview={FitH}, pdftitle={nix-info} pdfauthor={Eric Bailey <[email protected]>}, pdfsubject={brew info clone for Nix}, pdfkeywords={nix, nixpkgs, metadata, command-line-tool, haskell, literate programming, noweb}, colorlinks=true, linkcolor=ErlangRed, urlcolor=ErlangRed } \usepackage{amsmath} \usepackage{amssymb} \usepackage[outputdir=tex]{minted} % NOTE: Use Tufte instead of noweb page style. % \pagestyle{noweb} % NOTE: Use shift option for wide code. % \noweboptions{smallcode,shortxref,webnumbering,english} \noweboptions{shift,smallcode,shortxref,webnumbering,english,noidentxref} \title{ nix-info \thanks{a \tt{brew info} clone for \href{https://nixos.org/nix/}{Nix}.} } \author{Eric Bailey} % \date{March 18, 2017} % \newcommand{\stylehook}{\marginpar{\raggedright\sl style hook}} \usepackage{todonotes} \newmintinline[hsk]{haskell}{} \newmintinline[bash]{bash}{} \usepackage{tikz} \usetikzlibrary{cd} % \newcommand{\fnhref}[2]{% % \href{#1}{#2}\footnote{\url{#1}}% % } \begin{document} \maketitle \begin{abstract} \todo[inline]{\blindtext} \end{abstract} % \tableofcontents % \newpage \newthought{The motivation for using Haskell} to write \tt{nix-info} is its strong, static typing, etc, etc... 
\todo{fixme: obviously} \todo[inline]{\blindtext} \newpage \section{Data Types} \begin{margintable}[4.75em] \begin{tabular}{rl} {\Tt{}\nwlinkedidentq{Meta}{NW3lV6pB-3bcEWn-1}\nwendquote} & ``standard meta-attributes'' \cite{nixpkgs-manual} \\[1.75em] {\Tt{}\nwlinkedidentq{PackageInfo}{NW3lV6pB-jCvHb-1}\nwendquote} & {\Tt{}\nwlinkedidentq{name}{NW3lV6pB-jCvHb-1}\nwendquote}, {\Tt{}\nwlinkedidentq{system}{NW3lV6pB-jCvHb-1}\nwendquote} and {\Tt{}\nwlinkedidentq{meta}{NW3lV6pB-jCvHb-1}\nwendquote} \\[1.75em] {\Tt{}\nwlinkedidentq{Package}{NW3lV6pB-JwV0T-1}\nwendquote} & {\Tt{}\nwlinkedidentq{path}{NW3lV6pB-JwV0T-1}\nwendquote} and {\Tt{}\nwlinkedidentq{info}{NW3lV6pB-JwV0T-1}\nwendquote} \\[1.75em] {\Tt{}\nwlinkedidentq{PackageList}{NW3lV6pB-3Pyhm3-1}\nwendquote} & \hsk{[}{\Tt{}\nwlinkedidentq{Package}{NW3lV6pB-JwV0T-1}\nwendquote}\hsk{]} \\[1.75em] {\Tt{}\nwlinkedidentq{NixURL}{NW3lV6pB-39G3ut-1}\nwendquote} & {\Tt{}\nwlinkedidentq{URL}{NW3lV6pB-33WSsm-1}\nwendquote} \end{tabular} \end{margintable} \nwfilename{nix-info.nw}\nwbegincode{1}\sublabel{NW3lV6pB-17s3Rd-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-17s3Rd-1}}}\moddef{Data Types~{\nwtagstyle{}\subpageref{NW3lV6pB-17s3Rd-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \LA{}Meta~{\nwtagstyle{}\subpageref{NW3lV6pB-3bcEWn-1}}\RA{} \LA{}PackageInfo~{\nwtagstyle{}\subpageref{NW3lV6pB-jCvHb-1}}\RA{} \LA{}Package~{\nwtagstyle{}\subpageref{NW3lV6pB-JwV0T-1}}\RA{} \LA{}PackageList~{\nwtagstyle{}\subpageref{NW3lV6pB-3Pyhm3-1}}\RA{} \LA{}NixURL~{\nwtagstyle{}\subpageref{NW3lV6pB-39G3ut-1}}\RA{} \nwused{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwendcode{}\nwbegindocs{2}\nwdocspar The standard meta-attributes are documented in the Nixpkgs Contributors Guide \parencite{nixpkgs-manual}. \bash{nix-env}, which is called by \bash{nix-info} in {\Tt{}\LA{}nixQuery~{\nwtagstyle{}\subpageref{NW3lV6pB-1oSFZX-1}}\RA{}\nwendquote}, returns a nested \href{http://hackage.haskell.org/package/aeson-1.1.1.0/docs/Data-Aeson-Types.html\#t:Object}{\hsk{Object}}, with relationships as described by the following diagram. 
\begin{figure}[ht] \begin{tikzcd} {\Tt{}\nwlinkedidentq{PackageList}{NW3lV6pB-3Pyhm3-1}\nwendquote} \dar[maps to, two heads] & {\Tt{}\nwlinkedidentq{PackageInfo}{NW3lV6pB-jCvHb-1}\nwendquote} \ar[dd, "..."] \drar["{\Tt{}\nwlinkedidentq{meta}{NW3lV6pB-jCvHb-1}\nwendquote}"] & Bool \\ {\Tt{}\nwlinkedidentq{Package}{NW3lV6pB-JwV0T-1}\nwendquote} \urar["{\Tt{}\nwlinkedidentq{info}{NW3lV6pB-JwV0T-1}\nwendquote}"] \drar["{\Tt{}\nwlinkedidentq{path}{NW3lV6pB-JwV0T-1}\nwendquote}"] & & {\Tt{}\nwlinkedidentq{Meta}{NW3lV6pB-3bcEWn-1}\nwendquote} \dlar["..."] \ar[d, "..."] \rar["int"] \ar[u, "bool"] \drar["..."] & Int \\ & {\Tt{}\nwlinkedidentq{Text}{NW3lV6pB-33WSsm-1}\nwendquote} & \hsk{[}{\Tt{}\nwlinkedidentq{Text}{NW3lV6pB-33WSsm-1}\nwendquote}\hsk{]} & {\Tt{}\nwlinkedidentq{NixURL}{NW3lV6pB-39G3ut-1}\nwendquote} \end{tikzcd} \caption{} \end{figure} \todo[inline]{Flesh this out.} \todo{use better types than just \hsk{Text} everywhere \ldots} \nwenddocs{}\nwbegincode{3}\sublabel{NW3lV6pB-3bcEWn-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-3bcEWn-1}}}\moddef{Meta~{\nwtagstyle{}\subpageref{NW3lV6pB-3bcEWn-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-17s3Rd-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{Meta}}{Meta}{NW3lV6pB-3bcEWn-1}data \nwlinkedidentc{Meta}{NW3lV6pB-3bcEWn-1} = \nwlinkedidentc{Meta}{NW3lV6pB-3bcEWn-1} \nwindexdefn{\nwixident{description}}{description}{NW3lV6pB-3bcEWn-1} \{ \nwlinkedidentc{description}{NW3lV6pB-3bcEWn-1} :: Maybe \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{longDescription}}{longDescription}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{longDescription}{NW3lV6pB-3bcEWn-1} :: Maybe \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{branch}}{branch}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{branch}{NW3lV6pB-3bcEWn-1} :: Maybe \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{homepage}}{homepage}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{homepage}{NW3lV6pB-3bcEWn-1} :: Maybe \nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} \nwindexdefn{\nwixident{downloadPage}}{downloadPage}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{downloadPage}{NW3lV6pB-3bcEWn-1} :: Maybe \nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} \nwindexdefn{\nwixident{maintainers}}{maintainers}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{maintainers}{NW3lV6pB-3bcEWn-1} :: Maybe [\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}] \nwindexdefn{\nwixident{priority}}{priority}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{priority}{NW3lV6pB-3bcEWn-1} :: Maybe Int \nwindexdefn{\nwixident{platforms}}{platforms}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{platforms}{NW3lV6pB-3bcEWn-1} :: Maybe [\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}] \nwindexdefn{\nwixident{hydraPlatforms}}{hydraPlatforms}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{hydraPlatforms}{NW3lV6pB-3bcEWn-1} :: Maybe [\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}] \nwindexdefn{\nwixident{broken}}{broken}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{broken}{NW3lV6pB-3bcEWn-1} :: Maybe Bool \nwindexdefn{\nwixident{updateWalker}}{updateWalker}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{updateWalker}{NW3lV6pB-3bcEWn-1} :: Maybe Bool \nwindexdefn{\nwixident{outputsToInstall}}{outputsToInstall}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{outputsToInstall}{NW3lV6pB-3bcEWn-1} :: Maybe [\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}] \nwindexdefn{\nwixident{position}}{position}{NW3lV6pB-3bcEWn-1} , \nwlinkedidentc{position}{NW3lV6pB-3bcEWn-1} :: Maybe \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} \} deriving (Show) 
\nwused{\\{NW3lV6pB-17s3Rd-1}}\nwidentdefs{\\{{\nwixident{branch}}{branch}}\\{{\nwixident{broken}}{broken}}\\{{\nwixident{description}}{description}}\\{{\nwixident{downloadPage}}{downloadPage}}\\{{\nwixident{homepage}}{homepage}}\\{{\nwixident{hydraPlatforms}}{hydraPlatforms}}\\{{\nwixident{longDescription}}{longDescription}}\\{{\nwixident{maintainers}}{maintainers}}\\{{\nwixident{Meta}}{Meta}}\\{{\nwixident{outputsToInstall}}{outputsToInstall}}\\{{\nwixident{platforms}}{platforms}}\\{{\nwixident{position}}{position}}\\{{\nwixident{priority}}{priority}}\\{{\nwixident{updateWalker}}{updateWalker}}}\nwidentuses{\\{{\nwixident{NixURL}}{NixURL}}\\{{\nwixident{Text}}{Text}}}\nwindexuse{\nwixident{NixURL}}{NixURL}{NW3lV6pB-3bcEWn-1}\nwindexuse{\nwixident{Text}}{Text}{NW3lV6pB-3bcEWn-1}\nwendcode{}\nwbegindocs{4}\nwdocspar \todo{describe this} \nwenddocs{}\nwbegincode{5}\sublabel{NW3lV6pB-jCvHb-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-jCvHb-1}}}\moddef{PackageInfo~{\nwtagstyle{}\subpageref{NW3lV6pB-jCvHb-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-17s3Rd-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{PackageInfo}}{PackageInfo}{NW3lV6pB-jCvHb-1}data \nwlinkedidentc{PackageInfo}{NW3lV6pB-jCvHb-1} = \nwlinkedidentc{PackageInfo}{NW3lV6pB-jCvHb-1} \nwindexdefn{\nwixident{name}}{name}{NW3lV6pB-jCvHb-1} \{ \nwlinkedidentc{name}{NW3lV6pB-jCvHb-1} :: \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{system}}{system}{NW3lV6pB-jCvHb-1} , \nwlinkedidentc{system}{NW3lV6pB-jCvHb-1} :: \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{meta}}{meta}{NW3lV6pB-jCvHb-1} , \nwlinkedidentc{meta}{NW3lV6pB-jCvHb-1} :: \nwlinkedidentc{Meta}{NW3lV6pB-3bcEWn-1} \} deriving (Show) \nwused{\\{NW3lV6pB-17s3Rd-1}}\nwidentdefs{\\{{\nwixident{meta}}{meta}}\\{{\nwixident{name}}{name}}\\{{\nwixident{PackageInfo}}{PackageInfo}}\\{{\nwixident{system}}{system}}}\nwidentuses{\\{{\nwixident{Meta}}{Meta}}\\{{\nwixident{Text}}{Text}}}\nwindexuse{\nwixident{Meta}}{Meta}{NW3lV6pB-jCvHb-1}\nwindexuse{\nwixident{Text}}{Text}{NW3lV6pB-jCvHb-1}\nwendcode{}\nwbegindocs{6}\nwdocspar \todo{describe this} \nwenddocs{}\nwbegincode{7}\sublabel{NW3lV6pB-JwV0T-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-JwV0T-1}}}\moddef{Package~{\nwtagstyle{}\subpageref{NW3lV6pB-JwV0T-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-17s3Rd-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{Package}}{Package}{NW3lV6pB-JwV0T-1}data \nwlinkedidentc{Package}{NW3lV6pB-JwV0T-1} = \nwlinkedidentc{Package}{NW3lV6pB-JwV0T-1} \nwindexdefn{\nwixident{path}}{path}{NW3lV6pB-JwV0T-1} \{ \nwlinkedidentc{path}{NW3lV6pB-JwV0T-1} :: \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{info}}{info}{NW3lV6pB-JwV0T-1} , \nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} :: \nwlinkedidentc{PackageInfo}{NW3lV6pB-jCvHb-1} \} deriving (Show) \nwused{\\{NW3lV6pB-17s3Rd-1}}\nwidentdefs{\\{{\nwixident{info}}{info}}\\{{\nwixident{Package}}{Package}}\\{{\nwixident{path}}{path}}}\nwidentuses{\\{{\nwixident{PackageInfo}}{PackageInfo}}\\{{\nwixident{Text}}{Text}}}\nwindexuse{\nwixident{PackageInfo}}{PackageInfo}{NW3lV6pB-JwV0T-1}\nwindexuse{\nwixident{Text}}{Text}{NW3lV6pB-JwV0T-1}\nwendcode{}\nwbegindocs{8}\nwdocspar This \hsk{newtype} is a cheap trick to avoid using \hsk{FlexibleInstances} for the automagically derived {\Tt{}\LA{}FromJSON Instances~{\nwtagstyle{}\subpageref{NW3lV6pB-43l1LQ-1}}\RA{}\nwendquote}. 
\todo{describe why} \nwenddocs{}\nwbegincode{9}\sublabel{NW3lV6pB-3Pyhm3-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-3Pyhm3-1}}}\moddef{PackageList~{\nwtagstyle{}\subpageref{NW3lV6pB-3Pyhm3-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-17s3Rd-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{PackageList}}{PackageList}{NW3lV6pB-3Pyhm3-1}newtype \nwlinkedidentc{PackageList}{NW3lV6pB-3Pyhm3-1} = \nwlinkedidentc{PackageList}{NW3lV6pB-3Pyhm3-1} [\nwlinkedidentc{Package}{NW3lV6pB-JwV0T-1}] \nwused{\\{NW3lV6pB-17s3Rd-1}}\nwidentdefs{\\{{\nwixident{PackageList}}{PackageList}}}\nwidentuses{\\{{\nwixident{Package}}{Package}}}\nwindexuse{\nwixident{Package}}{Package}{NW3lV6pB-3Pyhm3-1}\nwendcode{}\nwbegindocs{10}\nwdocspar \todo{Mention the avoidance of the orphan instance.} \nwenddocs{}\nwbegincode{11}\sublabel{NW3lV6pB-39G3ut-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-39G3ut-1}}}\moddef{NixURL~{\nwtagstyle{}\subpageref{NW3lV6pB-39G3ut-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-17s3Rd-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{NixURL}}{NixURL}{NW3lV6pB-39G3ut-1}newtype \nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} = \nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} \nwlinkedidentc{URL}{NW3lV6pB-33WSsm-1} deriving (Show) \nwused{\\{NW3lV6pB-17s3Rd-1}}\nwidentdefs{\\{{\nwixident{NixURL}}{NixURL}}}\nwidentuses{\\{{\nwixident{URL}}{URL}}}\nwindexuse{\nwixident{URL}}{URL}{NW3lV6pB-39G3ut-1}\nwendcode{}\nwbegindocs{12}\nwdocspar \todo{describe this} \nwenddocs{}\nwbegincode{13}\sublabel{NW3lV6pB-4ICBye-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-4ICBye-1}}}\moddef{magically derive ToJSON and FromJSON instances~{\nwtagstyle{}\subpageref{NW3lV6pB-4ICBye-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-43l1LQ-1}}\nwenddeflinemarkup $(\nwlinkedidentc{deriveJSON}{NW3lV6pB-47ZaoI-1} \nwlinkedidentc{defaultOptions}{NW3lV6pB-47ZaoI-1} ''Meta) $(\nwlinkedidentc{deriveJSON}{NW3lV6pB-47ZaoI-1} \nwlinkedidentc{defaultOptions}{NW3lV6pB-47ZaoI-1} ''PackageInfo) \nwused{\\{NW3lV6pB-43l1LQ-1}}\nwidentuses{\\{{\nwixident{defaultOptions}}{defaultOptions}}\\{{\nwixident{deriveJSON}}{deriveJSON}}}\nwindexuse{\nwixident{defaultOptions}}{defaultOptions}{NW3lV6pB-4ICBye-1}\nwindexuse{\nwixident{deriveJSON}}{deriveJSON}{NW3lV6pB-4ICBye-1}\nwendcode{}\nwbegindocs{14}\nwdocspar \todo{describe this} \nwenddocs{}\nwbegincode{15}\sublabel{NW3lV6pB-43l1LQ-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-43l1LQ-1}}}\moddef{FromJSON Instances~{\nwtagstyle{}\subpageref{NW3lV6pB-43l1LQ-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \LA{}magically derive ToJSON and FromJSON instances~{\nwtagstyle{}\subpageref{NW3lV6pB-4ICBye-1}}\RA{} instance \nwlinkedidentc{FromJSON}{NW3lV6pB-1ClFUp-1} \nwlinkedidentc{PackageList}{NW3lV6pB-3Pyhm3-1} where parseJSON (Object v) = \nwlinkedidentc{PackageList}{NW3lV6pB-3Pyhm3-1} <$> traverse (\\(p,y) -> \nwlinkedidentc{Package}{NW3lV6pB-JwV0T-1} p <$> parseJSON y) (\nwlinkedidentc{HM}{NW3lV6pB-33WSsm-1}.toList v) parseJSON _ = fail "non-object" instance \nwlinkedidentc{FromJSON}{NW3lV6pB-1ClFUp-1} \nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} where parseJSON (String t) = case \nwlinkedidentc{importURL}{NW3lV6pB-33WSsm-1} (\nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.unpack t) of Just url -> pure $ \nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} url Nothing -> fail "no parse" parseJSON _ = fail "non-string" instance ToJSON \nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} where toJSON 
(\nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} url) = String (\nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.pack (\nwlinkedidentc{exportURL}{NW3lV6pB-3KgweQ-1} url)) toEncoding (\nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} url) = \nwlinkedidentc{text}{NW3lV6pB-2l8uu8-1} (\nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.pack (\nwlinkedidentc{exportURL}{NW3lV6pB-3KgweQ-1} url)) \nwused{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentuses{\\{{\nwixident{exportURL}}{exportURL}}\\{{\nwixident{FromJSON}}{FromJSON}}\\{{\nwixident{HM}}{HM}}\\{{\nwixident{importURL}}{importURL}}\\{{\nwixident{NixURL}}{NixURL}}\\{{\nwixident{Package}}{Package}}\\{{\nwixident{PackageList}}{PackageList}}\\{{\nwixident{T}}{T}}\\{{\nwixident{text}}{text}}}\nwindexuse{\nwixident{exportURL}}{exportURL}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{FromJSON}}{FromJSON}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{HM}}{HM}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{importURL}}{importURL}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{NixURL}}{NixURL}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{Package}}{Package}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{PackageList}}{PackageList}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{T}}{T}{NW3lV6pB-43l1LQ-1}\nwindexuse{\nwixident{text}}{text}{NW3lV6pB-43l1LQ-1}\nwendcode{}\nwbegindocs{16}\nwdocspar \nwenddocs{}\nwbegincode{17}\sublabel{NW3lV6pB-1ClFUp-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-1ClFUp-1}}}\moddef{src/NixInfo/Types.hs~{\nwtagstyle{}\subpageref{NW3lV6pB-1ClFUp-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup -- | -- Module : \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1}\nwlinkedidentc{.Types}{NW3lV6pB-1ClFUp-1} -- Copyright : (c) 2017, Eric Bailey -- License : BSD-style (see LICENSE) -- -- Maintainer : [email protected] -- Stability : experimental -- Portability : portable -- -- Data types and JSON parsers for nix-\nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} \LA{}OverloadedStrings~{\nwtagstyle{}\subpageref{NW3lV6pB-2Z5QZX-1}}\RA{} \LA{}TemplateHaskell~{\nwtagstyle{}\subpageref{NW3lV6pB-2PXGaB-1}}\RA{} module \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1}\nwlinkedidentc{.Types}{NW3lV6pB-1ClFUp-1} where \LA{}NixInfo.Types Imports~{\nwtagstyle{}\subpageref{NW3lV6pB-33WSsm-1}}\RA{} \LA{}Data Types~{\nwtagstyle{}\subpageref{NW3lV6pB-17s3Rd-1}}\RA{} \LA{}FromJSON Instances~{\nwtagstyle{}\subpageref{NW3lV6pB-43l1LQ-1}}\RA{} \nwindexdefn{\nwixident{NixInfo.Types}}{NixInfo.Types}{NW3lV6pB-1ClFUp-1}\eatline \nwindexdefn{\nwixident{FromJSON}}{FromJSON}{NW3lV6pB-1ClFUp-1}\eatline \nwnotused{src/NixInfo/Types.hs}\nwidentdefs{\\{{\nwixident{FromJSON}}{FromJSON}}\\{{\nwixident{NixInfo.Types}}{NixInfo.Types}}}\nwidentuses{\\{{\nwixident{info}}{info}}\\{{\nwixident{NixInfo}}{NixInfo}}}\nwindexuse{\nwixident{info}}{info}{NW3lV6pB-1ClFUp-1}\nwindexuse{\nwixident{NixInfo}}{NixInfo}{NW3lV6pB-1ClFUp-1}\nwendcode{}\nwbegindocs{18}\nwdocspar \section{Helper Functions} \nwenddocs{}\nwbegincode{19}\sublabel{NW3lV6pB-esBWi-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-esBWi-1}}}\moddef{printPackage~{\nwtagstyle{}\subpageref{NW3lV6pB-esBWi-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup -- printPackage :: MonadIO io => \nwlinkedidentc{Package}{NW3lV6pB-JwV0T-1} -> io () printPackage :: \nwlinkedidentc{Package}{NW3lV6pB-JwV0T-1} -> IO () printPackage (\nwlinkedidentc{Package}{NW3lV6pB-JwV0T-1} pkgPath (\nwlinkedidentc{PackageInfo}{NW3lV6pB-jCvHb-1} pkgName _pkgSystem pkgMeta)) = \nwlinkedidentc{traverse_}{NW3lV6pB-3lPV08-1} \nwlinkedidentc{putStrLn}{NW3lV6pB-WlLXm-1} $ 
\nwlinkedidentc{catMaybes}{NW3lV6pB-3lPV08-1} [ Just pkgName -- , Just pkgSystem , \nwlinkedidentc{description}{NW3lV6pB-3bcEWn-1} pkgMeta , \nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.pack . \nwlinkedidentc{exportURL}{NW3lV6pB-3KgweQ-1} . (\\(\nwlinkedidentc{NixURL}{NW3lV6pB-39G3ut-1} url) -> url) <$> \nwlinkedidentc{homepage}{NW3lV6pB-3bcEWn-1} pkgMeta -- , \nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.unwords . \nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.words <$> \nwlinkedidentc{longDescription}{NW3lV6pB-3bcEWn-1} pkgMeta , \nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.unwords <$> \nwlinkedidentc{maintainers}{NW3lV6pB-3bcEWn-1} pkgMeta -- , \nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.unwords <$> \nwlinkedidentc{outputsToInstall}{NW3lV6pB-3bcEWn-1} pkgMeta -- , \nwlinkedidentc{T}{NW3lV6pB-33WSsm-1}.unwords <$> \nwlinkedidentc{platforms}{NW3lV6pB-3bcEWn-1} pkgMeta , Just pkgPath , \nwlinkedidentc{position}{NW3lV6pB-3bcEWn-1} pkgMeta ] \nwused{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentuses{\\{{\nwixident{catMaybes}}{catMaybes}}\\{{\nwixident{description}}{description}}\\{{\nwixident{exportURL}}{exportURL}}\\{{\nwixident{homepage}}{homepage}}\\{{\nwixident{longDescription}}{longDescription}}\\{{\nwixident{maintainers}}{maintainers}}\\{{\nwixident{NixURL}}{NixURL}}\\{{\nwixident{outputsToInstall}}{outputsToInstall}}\\{{\nwixident{Package}}{Package}}\\{{\nwixident{PackageInfo}}{PackageInfo}}\\{{\nwixident{platforms}}{platforms}}\\{{\nwixident{position}}{position}}\\{{\nwixident{putStrLn}}{putStrLn}}\\{{\nwixident{T}}{T}}\\{{\nwixident{traverse{\_}}}{traverse:un}}}\nwindexuse{\nwixident{catMaybes}}{catMaybes}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{description}}{description}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{exportURL}}{exportURL}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{homepage}}{homepage}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{longDescription}}{longDescription}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{maintainers}}{maintainers}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{NixURL}}{NixURL}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{outputsToInstall}}{outputsToInstall}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{Package}}{Package}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{PackageInfo}}{PackageInfo}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{platforms}}{platforms}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{position}}{position}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{putStrLn}}{putStrLn}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{T}}{T}{NW3lV6pB-esBWi-1}\nwindexuse{\nwixident{traverse{\_}}}{traverse:un}{NW3lV6pB-esBWi-1}\nwendcode{}\nwbegindocs{20}\nwdocspar \nwenddocs{}\nwbegincode{21}\sublabel{NW3lV6pB-3KgweQ-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-3KgweQ-1}}}\moddef{src/NixInfo.hs~{\nwtagstyle{}\subpageref{NW3lV6pB-3KgweQ-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup -- | -- Module : \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1} -- Copyright : (c) 2017, Eric Bailey -- License : BSD-style (see LICENSE) -- -- Maintainer : [email protected] -- Stability : experimental -- Portability : portable -- -- brew \nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} clone for Nix module \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1} (printPackage) where import \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1}\nwlinkedidentc{.Types}{NW3lV6pB-1ClFUp-1} \LA{}hide Prelude.putStrLn~{\nwtagstyle{}\subpageref{NW3lV6pB-3agbDv-1}}\RA{} \LA{}import traverse\_, catMaybes~{\nwtagstyle{}\subpageref{NW3lV6pB-3lPV08-1}}\RA{} import qualified Data.\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{ as }{NW3lV6pB-33WSsm-1}\nwlinkedidentc{T}{NW3lV6pB-33WSsm-1} 
\LA{}import Data.Text.IO~{\nwtagstyle{}\subpageref{NW3lV6pB-WlLXm-1}}\RA{} \nwindexdefn{\nwixident{exportURL}}{exportURL}{NW3lV6pB-3KgweQ-1}import Network.\nwlinkedidentc{URL}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{ (}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{exportURL}{NW3lV6pB-3KgweQ-1}) \LA{}printPackage~{\nwtagstyle{}\subpageref{NW3lV6pB-esBWi-1}}\RA{} \nwindexdefn{\nwixident{NixInfo}}{NixInfo}{NW3lV6pB-3KgweQ-1}\eatline \nwnotused{src/NixInfo.hs}\nwidentdefs{\\{{\nwixident{exportURL}}{exportURL}}\\{{\nwixident{NixInfo}}{NixInfo}}}\nwidentuses{\\{{\nwixident{Data.Text}}{Data.Text}}\\{{\nwixident{info}}{info}}\\{{\nwixident{Network.URL}}{Network.URL}}\\{{\nwixident{NixInfo.Types}}{NixInfo.Types}}\\{{\nwixident{T}}{T}}\\{{\nwixident{Text}}{Text}}\\{{\nwixident{URL}}{URL}}}\nwindexuse{\nwixident{Data.Text}}{Data.Text}{NW3lV6pB-3KgweQ-1}\nwindexuse{\nwixident{info}}{info}{NW3lV6pB-3KgweQ-1}\nwindexuse{\nwixident{Network.URL}}{Network.URL}{NW3lV6pB-3KgweQ-1}\nwindexuse{\nwixident{NixInfo.Types}}{NixInfo.Types}{NW3lV6pB-3KgweQ-1}\nwindexuse{\nwixident{T}}{T}{NW3lV6pB-3KgweQ-1}\nwindexuse{\nwixident{Text}}{Text}{NW3lV6pB-3KgweQ-1}\nwindexuse{\nwixident{URL}}{URL}{NW3lV6pB-3KgweQ-1}\nwendcode{}\nwbegindocs{22}\nwdocspar \section{Main Executable} \nwenddocs{}\nwbegincode{23}\sublabel{NW3lV6pB-1oSFZX-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-1oSFZX-1}}}\moddef{nixQuery~{\nwtagstyle{}\subpageref{NW3lV6pB-1oSFZX-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \nwlinkedidentc{nixQuery}{NW3lV6pB-1oSFZX-1} :: \nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1} -> \nwlinkedidentc{Shell}{NW3lV6pB-kiOo-1} (Maybe \nwlinkedidentc{PackageList}{NW3lV6pB-3Pyhm3-1}) \nwindexdefn{\nwixident{nixQuery}}{nixQuery}{NW3lV6pB-1oSFZX-1}\nwlinkedidentc{nixQuery}{NW3lV6pB-1oSFZX-1} arg = \nwlinkedidentc{procStrict}{NW3lV6pB-kiOo-1} "nix-env" ["-qa", arg, "--json" ] \nwlinkedidentc{empty}{NW3lV6pB-kiOo-1} >>= \\case (ExitSuccess,txt) -> pure $ decode (\nwlinkedidentc{cs}{NW3lV6pB-4NdpI2-1} txt) (status,_) -> \nwlinkedidentc{exit}{NW3lV6pB-kiOo-1} status \nwused{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentdefs{\\{{\nwixident{nixQuery}}{nixQuery}}}\nwidentuses{\\{{\nwixident{cs}}{cs}}\\{{\nwixident{empty}}{empty}}\\{{\nwixident{exit}}{exit}}\\{{\nwixident{PackageList}}{PackageList}}\\{{\nwixident{procStrict}}{procStrict}}\\{{\nwixident{Shell}}{Shell}}\\{{\nwixident{Text}}{Text}}}\nwindexuse{\nwixident{cs}}{cs}{NW3lV6pB-1oSFZX-1}\nwindexuse{\nwixident{empty}}{empty}{NW3lV6pB-1oSFZX-1}\nwindexuse{\nwixident{exit}}{exit}{NW3lV6pB-1oSFZX-1}\nwindexuse{\nwixident{PackageList}}{PackageList}{NW3lV6pB-1oSFZX-1}\nwindexuse{\nwixident{procStrict}}{procStrict}{NW3lV6pB-1oSFZX-1}\nwindexuse{\nwixident{Shell}}{Shell}{NW3lV6pB-1oSFZX-1}\nwindexuse{\nwixident{Text}}{Text}{NW3lV6pB-1oSFZX-1}\nwendcode{}\nwbegindocs{24}\nwdocspar \nwenddocs{}\nwbegincode{25}\sublabel{NW3lV6pB-23uoZy-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-23uoZy-1}}}\moddef{main~{\nwtagstyle{}\subpageref{NW3lV6pB-23uoZy-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \nwlinkedidentc{main}{NW3lV6pB-23uoZy-1} :: IO () \nwindexdefn{\nwixident{main}}{main}{NW3lV6pB-23uoZy-1}\nwlinkedidentc{main}{NW3lV6pB-23uoZy-1} = \nwlinkedidentc{sh}{NW3lV6pB-kiOo-1} $ \nwlinkedidentc{arguments}{NW3lV6pB-kiOo-1} >>= \\case [arg] -> \nwlinkedidentc{nixQuery}{NW3lV6pB-1oSFZX-1} arg >>= \\case Just (\nwlinkedidentc{PackageList}{NW3lV6pB-3Pyhm3-1} pkgs) -> 
\nwlinkedidentc{liftIO}{NW3lV6pB-kiOo-1} $ \nwlinkedidentc{traverse_}{NW3lV6pB-3lPV08-1} printPackage pkgs Nothing -> \nwlinkedidentc{exit}{NW3lV6pB-kiOo-1} $ ExitFailure 1 _ -> do \nwlinkedidentc{echo}{NW3lV6pB-kiOo-1} "TODO: usage" \nwlinkedidentc{exit}{NW3lV6pB-kiOo-1} $ ExitFailure 1 \nwused{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentdefs{\\{{\nwixident{main}}{main}}}\nwidentuses{\\{{\nwixident{arguments}}{arguments}}\\{{\nwixident{echo}}{echo}}\\{{\nwixident{exit}}{exit}}\\{{\nwixident{liftIO}}{liftIO}}\\{{\nwixident{nixQuery}}{nixQuery}}\\{{\nwixident{PackageList}}{PackageList}}\\{{\nwixident{sh}}{sh}}\\{{\nwixident{traverse{\_}}}{traverse:un}}}\nwindexuse{\nwixident{arguments}}{arguments}{NW3lV6pB-23uoZy-1}\nwindexuse{\nwixident{echo}}{echo}{NW3lV6pB-23uoZy-1}\nwindexuse{\nwixident{exit}}{exit}{NW3lV6pB-23uoZy-1}\nwindexuse{\nwixident{liftIO}}{liftIO}{NW3lV6pB-23uoZy-1}\nwindexuse{\nwixident{nixQuery}}{nixQuery}{NW3lV6pB-23uoZy-1}\nwindexuse{\nwixident{PackageList}}{PackageList}{NW3lV6pB-23uoZy-1}\nwindexuse{\nwixident{sh}}{sh}{NW3lV6pB-23uoZy-1}\nwindexuse{\nwixident{traverse{\_}}}{traverse:un}{NW3lV6pB-23uoZy-1}\nwendcode{}\nwbegindocs{26}\nwdocspar \nwenddocs{}\nwbegincode{27}\sublabel{NW3lV6pB-zkSPe-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-zkSPe-1}}}\moddef{app/Main.hs~{\nwtagstyle{}\subpageref{NW3lV6pB-zkSPe-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup -- | -- Module : \nwlinkedidentc{Main}{NW3lV6pB-4NdpI2-1} -- Copyright : (c) 2017, Eric Bailey -- License : BSD-style (see LICENSE) -- -- Maintainer : [email protected] -- Stability : experimental -- Portability : portable -- -- \nwlinkedidentc{Main}{NW3lV6pB-4NdpI2-1} executable for nix-\nwlinkedidentc{info}{NW3lV6pB-JwV0T-1}. \LA{}LambdaCase~{\nwtagstyle{}\subpageref{NW3lV6pB-TcIAL-1}}\RA{} \LA{}OverloadedStrings~{\nwtagstyle{}\subpageref{NW3lV6pB-2Z5QZX-1}}\RA{} module \nwlinkedidentc{Main}{NW3lV6pB-4NdpI2-1} (\nwlinkedidentc{main}{NW3lV6pB-23uoZy-1}) where import \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1} (printPackage) import \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1}\nwlinkedidentc{.Types}{NW3lV6pB-1ClFUp-1} \LA{}import Data.Aeson~{\nwtagstyle{}\subpageref{NW3lV6pB-2HZloV-1}}\RA{} import \nwlinkedidentc{Data.Foldable}{NW3lV6pB-3lPV08-1} (\nwlinkedidentc{traverse_}{NW3lV6pB-3lPV08-1}) import Data.String.Conversions (\nwlinkedidentc{cs}{NW3lV6pB-4NdpI2-1}) import Data.\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{ (}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}) \LA{}import Turtle~{\nwtagstyle{}\subpageref{NW3lV6pB-kiOo-1}}\RA{} \LA{}nixQuery~{\nwtagstyle{}\subpageref{NW3lV6pB-1oSFZX-1}}\RA{} \LA{}main~{\nwtagstyle{}\subpageref{NW3lV6pB-23uoZy-1}}\RA{} 
\nwnotused{app/Main.hs}\nwidentuses{\\{{\nwixident{cs}}{cs}}\\{{\nwixident{Data.Foldable}}{Data.Foldable}}\\{{\nwixident{Data.Text}}{Data.Text}}\\{{\nwixident{info}}{info}}\\{{\nwixident{Main}}{Main}}\\{{\nwixident{main}}{main}}\\{{\nwixident{NixInfo}}{NixInfo}}\\{{\nwixident{NixInfo.Types}}{NixInfo.Types}}\\{{\nwixident{Text}}{Text}}\\{{\nwixident{traverse{\_}}}{traverse:un}}}\nwindexuse{\nwixident{cs}}{cs}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{Data.Foldable}}{Data.Foldable}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{Data.Text}}{Data.Text}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{info}}{info}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{Main}}{Main}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{main}}{main}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{NixInfo}}{NixInfo}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{NixInfo.Types}}{NixInfo.Types}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{Text}}{Text}{NW3lV6pB-zkSPe-1}\nwindexuse{\nwixident{traverse{\_}}}{traverse:un}{NW3lV6pB-zkSPe-1}\nwendcode{}\nwbegindocs{28}\nwdocspar \section{As a Script} \nwenddocs{}\nwbegincode{29}\sublabel{NW3lV6pB-3YxVZb-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-3YxVZb-1}}}\moddef{shebang~{\nwtagstyle{}\subpageref{NW3lV6pB-3YxVZb-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup #! /usr/bin/env nix-shell #! nix-shell -i runhaskell -p "haskellPackages.ghcWithPackages (h: [ h.turtle h.aeson h.string-conversions h.url ])" \nwused{\\{NW3lV6pB-4NdpI2-1}}\nwendcode{}\nwbegindocs{30}\nwdocspar \nwenddocs{}\nwbegincode{31}\sublabel{NW3lV6pB-4NdpI2-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-4NdpI2-1}}}\moddef{script/nix-info~{\nwtagstyle{}\subpageref{NW3lV6pB-4NdpI2-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup \LA{}shebang~{\nwtagstyle{}\subpageref{NW3lV6pB-3YxVZb-1}}\RA{} \LA{}LambdaCase~{\nwtagstyle{}\subpageref{NW3lV6pB-TcIAL-1}}\RA{} \LA{}OverloadedStrings~{\nwtagstyle{}\subpageref{NW3lV6pB-2Z5QZX-1}}\RA{} \LA{}TemplateHaskell~{\nwtagstyle{}\subpageref{NW3lV6pB-2PXGaB-1}}\RA{} \nwindexdefn{\nwixident{Main}}{Main}{NW3lV6pB-4NdpI2-1}\nwindexdefn{\nwixident{main}}{main}{NW3lV6pB-4NdpI2-1}module \nwlinkedidentc{Main}{NW3lV6pB-4NdpI2-1} (\nwlinkedidentc{main}{NW3lV6pB-23uoZy-1}) where \LA{}hide Prelude.putStrLn~{\nwtagstyle{}\subpageref{NW3lV6pB-3agbDv-1}}\RA{} \LA{}NixInfo.Types Imports~{\nwtagstyle{}\subpageref{NW3lV6pB-33WSsm-1}}\RA{} \LA{}import traverse\_, catMaybes~{\nwtagstyle{}\subpageref{NW3lV6pB-3lPV08-1}}\RA{} \nwindexdefn{\nwixident{cs}}{cs}{NW3lV6pB-4NdpI2-1}import Data.String.Conversions (\nwlinkedidentc{cs}{NW3lV6pB-4NdpI2-1}) \LA{}import Data.Text.IO~{\nwtagstyle{}\subpageref{NW3lV6pB-WlLXm-1}}\RA{} \LA{}import Turtle~{\nwtagstyle{}\subpageref{NW3lV6pB-kiOo-1}}\RA{} \LA{}Data Types~{\nwtagstyle{}\subpageref{NW3lV6pB-17s3Rd-1}}\RA{} \LA{}FromJSON Instances~{\nwtagstyle{}\subpageref{NW3lV6pB-43l1LQ-1}}\RA{} \LA{}printPackage~{\nwtagstyle{}\subpageref{NW3lV6pB-esBWi-1}}\RA{} \LA{}nixQuery~{\nwtagstyle{}\subpageref{NW3lV6pB-1oSFZX-1}}\RA{} \LA{}main~{\nwtagstyle{}\subpageref{NW3lV6pB-23uoZy-1}}\RA{} \nwnotused{script/nix-info}\nwidentdefs{\\{{\nwixident{cs}}{cs}}\\{{\nwixident{Main}}{Main}}\\{{\nwixident{main}}{main}}}\nwendcode{}\nwbegindocs{32}\nwdocspar \section{Language Extensions} For brevity: 
\nwenddocs{}\nwbegincode{33}\sublabel{NW3lV6pB-TcIAL-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-TcIAL-1}}}\moddef{LambdaCase~{\nwtagstyle{}\subpageref{NW3lV6pB-TcIAL-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \{-# LANGUAGE LambdaCase #-\} \nwused{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwendcode{}\nwbegindocs{34}\nwdocspar To manage juggling \href{https://hackage.haskell.org/package/text/docs/Data-Text.html\#t:Text}{\hsk{Text}}, (lazy) \href{https://hackage.haskell.org/package/bytestring/docs/Data-ByteString.html\#t:ByteString}{\hsk{ByteString}}, and \href{https://hackage.haskell.org/package/turtle-1.3.2/docs/Turtle-Line.html\#t:Line}{\hsk{Line}} values, use the {\Tt{}\LA{}OverloadedStrings~{\nwtagstyle{}\subpageref{NW3lV6pB-2Z5QZX-1}}\RA{}\nwendquote} language extension \parencite{Charles2014}. \nwenddocs{}\nwbegincode{35}\sublabel{NW3lV6pB-2Z5QZX-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-2Z5QZX-1}}}\moddef{OverloadedStrings~{\nwtagstyle{}\subpageref{NW3lV6pB-2Z5QZX-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \{-# LANGUAGE OverloadedStrings #-\} \nwused{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwendcode{}\nwbegindocs{36}\nwdocspar Enable the {\Tt{}\LA{}TemplateHaskell~{\nwtagstyle{}\subpageref{NW3lV6pB-2PXGaB-1}}\RA{}\nwendquote} language extension \parencite{Westfall2014} to {\Tt{}\LA{}magically derive ToJSON and FromJSON instances~{\nwtagstyle{}\subpageref{NW3lV6pB-4ICBye-1}}\RA{}\nwendquote} from record definitions via \href{https://hackage.haskell.org/package/aeson-1.1.1.0/docs/Data-Aeson-TH.html}{\hsk{Data.Aeson.TH}} \nwenddocs{}\nwbegincode{37}\sublabel{NW3lV6pB-2PXGaB-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-2PXGaB-1}}}\moddef{TemplateHaskell~{\nwtagstyle{}\subpageref{NW3lV6pB-2PXGaB-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \{-# LANGUAGE TemplateHaskell #-\} \nwused{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwendcode{}\nwbegindocs{38}\nwdocspar \section{Imports} Hide \href{https://hackage.haskell.org/package/base/docs/Prelude.html\#v:putStrLn}{\hsk{Prelude.putStrLn}}, so we can {\Tt{}\LA{}import Data.Text.IO~{\nwtagstyle{}\subpageref{NW3lV6pB-WlLXm-1}}\RA{}\nwendquote} \href{https://hackage.haskell.org/package/text/docs/Data-Text-IO.html\#v:putStrLn}{\hsk{(putStrLn)}}. 
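The type signatures explain the swap: \hsk{Prelude.putStrLn} accepts a \hsk{String}, while the fields produced by the parser are \hsk{Text}, so using the \hsk{putStrLn} from \hsk{Data.Text.IO} (which accepts \hsk{Text}) avoids an explicit \hsk{T.unpack} at every call site.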
\nwenddocs{}\nwbegincode{39}\sublabel{NW3lV6pB-3agbDv-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-3agbDv-1}}}\moddef{hide Prelude.putStrLn~{\nwtagstyle{}\subpageref{NW3lV6pB-3agbDv-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup import Prelude hiding (\nwlinkedidentc{putStrLn}{NW3lV6pB-WlLXm-1}) \nwused{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentuses{\\{{\nwixident{putStrLn}}{putStrLn}}}\nwindexuse{\nwixident{putStrLn}}{putStrLn}{NW3lV6pB-3agbDv-1}\nwendcode{}\nwbegindocs{40}\nwdocspar \nwenddocs{}\nwbegincode{41}\sublabel{NW3lV6pB-2HZloV-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-2HZloV-1}}}\moddef{import Data.Aeson~{\nwtagstyle{}\subpageref{NW3lV6pB-2HZloV-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-33WSsm-1}}\nwenddeflinemarkup import \nwlinkedidentc{Data.Aeson}{NW3lV6pB-2HZloV-1} \nwindexdefn{\nwixident{Data.Aeson}}{Data.Aeson}{NW3lV6pB-2HZloV-1}\eatline \nwused{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-33WSsm-1}}\nwidentdefs{\\{{\nwixident{Data.Aeson}}{Data.Aeson}}}\nwendcode{}\nwbegindocs{42}\nwdocspar \nwenddocs{}\nwbegincode{43}\sublabel{NW3lV6pB-2l8uu8-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-2l8uu8-1}}}\moddef{import Data.Aeson.Encoding~{\nwtagstyle{}\subpageref{NW3lV6pB-2l8uu8-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-33WSsm-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{text}}{text}{NW3lV6pB-2l8uu8-1}import \nwlinkedidentc{Data.Aeson}{NW3lV6pB-2HZloV-1}\nwlinkedidentc{.Encoding}{NW3lV6pB-2l8uu8-1} (\nwlinkedidentc{text}{NW3lV6pB-2l8uu8-1}) \nwindexdefn{\nwixident{Data.Aeson.Encoding}}{Data.Aeson.Encoding}{NW3lV6pB-2l8uu8-1}\eatline \nwused{\\{NW3lV6pB-33WSsm-1}}\nwidentdefs{\\{{\nwixident{Data.Aeson.Encoding}}{Data.Aeson.Encoding}}\\{{\nwixident{text}}{text}}}\nwidentuses{\\{{\nwixident{Data.Aeson}}{Data.Aeson}}}\nwindexuse{\nwixident{Data.Aeson}}{Data.Aeson}{NW3lV6pB-2l8uu8-1}\nwendcode{}\nwbegindocs{44}\nwdocspar \nwenddocs{}\nwbegincode{45}\sublabel{NW3lV6pB-47ZaoI-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-47ZaoI-1}}}\moddef{import Data.Aeson.TH~{\nwtagstyle{}\subpageref{NW3lV6pB-47ZaoI-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-33WSsm-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{defaultOptions}}{defaultOptions}{NW3lV6pB-47ZaoI-1}\nwindexdefn{\nwixident{deriveJSON}}{deriveJSON}{NW3lV6pB-47ZaoI-1}import \nwlinkedidentc{Data.Aeson}{NW3lV6pB-2HZloV-1}\nwlinkedidentc{.TH}{NW3lV6pB-47ZaoI-1} (\nwlinkedidentc{defaultOptions}{NW3lV6pB-47ZaoI-1}, \nwlinkedidentc{deriveJSON}{NW3lV6pB-47ZaoI-1}) \nwindexdefn{\nwixident{Data.Aeson.TH}}{Data.Aeson.TH}{NW3lV6pB-47ZaoI-1}\eatline \nwused{\\{NW3lV6pB-33WSsm-1}}\nwidentdefs{\\{{\nwixident{Data.Aeson.TH}}{Data.Aeson.TH}}\\{{\nwixident{defaultOptions}}{defaultOptions}}\\{{\nwixident{deriveJSON}}{deriveJSON}}}\nwidentuses{\\{{\nwixident{Data.Aeson}}{Data.Aeson}}}\nwindexuse{\nwixident{Data.Aeson}}{Data.Aeson}{NW3lV6pB-47ZaoI-1}\nwendcode{}\nwbegindocs{46}\nwdocspar \nwenddocs{}\nwbegincode{47}\sublabel{NW3lV6pB-33WSsm-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-33WSsm-1}}}\moddef{NixInfo.Types Imports~{\nwtagstyle{}\subpageref{NW3lV6pB-33WSsm-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \LA{}import Data.Aeson~{\nwtagstyle{}\subpageref{NW3lV6pB-2HZloV-1}}\RA{} \LA{}import Data.Aeson.Encoding~{\nwtagstyle{}\subpageref{NW3lV6pB-2l8uu8-1}}\RA{} \LA{}import 
Data.Aeson.TH~{\nwtagstyle{}\subpageref{NW3lV6pB-47ZaoI-1}}\RA{} \nwindexdefn{\nwixident{HM}}{HM}{NW3lV6pB-33WSsm-1}import qualified \nwlinkedidentc{Data.HashMap.Lazy}{NW3lV6pB-33WSsm-1} as \nwlinkedidentc{HM}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{Text}}{Text}{NW3lV6pB-33WSsm-1}import Data.\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{ (}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}) \nwindexdefn{\nwixident{T}}{T}{NW3lV6pB-33WSsm-1}import qualified Data.\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{ as }{NW3lV6pB-33WSsm-1}\nwlinkedidentc{T}{NW3lV6pB-33WSsm-1} \nwindexdefn{\nwixident{URL}}{URL}{NW3lV6pB-33WSsm-1}\nwindexdefn{\nwixident{exportURL}}{exportURL}{NW3lV6pB-33WSsm-1}\nwindexdefn{\nwixident{importURL}}{importURL}{NW3lV6pB-33WSsm-1}import Network.\nwlinkedidentc{URL}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{ (}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{URL}{NW3lV6pB-33WSsm-1}, \nwlinkedidentc{exportURL}{NW3lV6pB-3KgweQ-1}, \nwlinkedidentc{importURL}{NW3lV6pB-33WSsm-1}) \nwindexdefn{\nwixident{Data.HashMap.Lazy}}{Data.HashMap.Lazy}{NW3lV6pB-33WSsm-1}\eatline \nwindexdefn{\nwixident{Data.Text}}{Data.Text}{NW3lV6pB-33WSsm-1}\eatline \nwindexdefn{\nwixident{Network.URL}}{Network.URL}{NW3lV6pB-33WSsm-1}\eatline \nwused{\\{NW3lV6pB-1ClFUp-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentdefs{\\{{\nwixident{Data.HashMap.Lazy}}{Data.HashMap.Lazy}}\\{{\nwixident{Data.Text}}{Data.Text}}\\{{\nwixident{exportURL}}{exportURL}}\\{{\nwixident{HM}}{HM}}\\{{\nwixident{importURL}}{importURL}}\\{{\nwixident{Network.URL}}{Network.URL}}\\{{\nwixident{T}}{T}}\\{{\nwixident{Text}}{Text}}\\{{\nwixident{URL}}{URL}}}\nwendcode{}\nwbegindocs{48}\nwdocspar \nwenddocs{}\nwbegincode{49}\sublabel{NW3lV6pB-WlLXm-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-WlLXm-1}}}\moddef{import Data.Text.IO~{\nwtagstyle{}\subpageref{NW3lV6pB-WlLXm-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{putStrLn}}{putStrLn}{NW3lV6pB-WlLXm-1}import Data.\nwlinkedidentc{Text}{NW3lV6pB-33WSsm-1}\nwlinkedidentc{.IO}{NW3lV6pB-WlLXm-1} (\nwlinkedidentc{putStrLn}{NW3lV6pB-WlLXm-1}) \nwindexdefn{\nwixident{Data.Text.IO}}{Data.Text.IO}{NW3lV6pB-WlLXm-1}\eatline \nwused{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentdefs{\\{{\nwixident{Data.Text.IO}}{Data.Text.IO}}\\{{\nwixident{putStrLn}}{putStrLn}}}\nwidentuses{\\{{\nwixident{Data.Text}}{Data.Text}}\\{{\nwixident{Text}}{Text}}}\nwindexuse{\nwixident{Data.Text}}{Data.Text}{NW3lV6pB-WlLXm-1}\nwindexuse{\nwixident{Text}}{Text}{NW3lV6pB-WlLXm-1}\nwendcode{}\nwbegindocs{50}\nwdocspar \nwenddocs{}\nwbegincode{51}\sublabel{NW3lV6pB-3lPV08-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-3lPV08-1}}}\moddef{import traverse\_, catMaybes~{\nwtagstyle{}\subpageref{NW3lV6pB-3lPV08-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{traverse{\_}}}{traverse:un}{NW3lV6pB-3lPV08-1}import \nwlinkedidentc{Data.Foldable}{NW3lV6pB-3lPV08-1} (\nwlinkedidentc{traverse_}{NW3lV6pB-3lPV08-1}) \nwindexdefn{\nwixident{catMaybes}}{catMaybes}{NW3lV6pB-3lPV08-1}import \nwlinkedidentc{Data.Maybe}{NW3lV6pB-3lPV08-1} (\nwlinkedidentc{catMaybes}{NW3lV6pB-3lPV08-1}) \nwindexdefn{\nwixident{Data.Foldable}}{Data.Foldable}{NW3lV6pB-3lPV08-1}\eatline \nwindexdefn{\nwixident{Data.Maybe}}{Data.Maybe}{NW3lV6pB-3lPV08-1}\eatline 
\nwused{\\{NW3lV6pB-3KgweQ-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentdefs{\\{{\nwixident{catMaybes}}{catMaybes}}\\{{\nwixident{Data.Foldable}}{Data.Foldable}}\\{{\nwixident{Data.Maybe}}{Data.Maybe}}\\{{\nwixident{traverse{\_}}}{traverse:un}}}\nwendcode{}\nwbegindocs{52}\nwdocspar \nwenddocs{}\nwbegincode{53}\sublabel{NW3lV6pB-kiOo-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-kiOo-1}}}\moddef{import Turtle~{\nwtagstyle{}\subpageref{NW3lV6pB-kiOo-1}}}\endmoddef\nwstartdeflinemarkup\nwusesondefline{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwenddeflinemarkup \nwindexdefn{\nwixident{ExitCode}}{ExitCode}{NW3lV6pB-kiOo-1}\nwindexdefn{\nwixident{Shell}}{Shell}{NW3lV6pB-kiOo-1}\nwindexdefn{\nwixident{arguments}}{arguments}{NW3lV6pB-kiOo-1}import \nwlinkedidentc{Turtle}{NW3lV6pB-kiOo-1} (\nwlinkedidentc{ExitCode}{NW3lV6pB-kiOo-1} (..), \nwlinkedidentc{Shell}{NW3lV6pB-kiOo-1}, \nwlinkedidentc{arguments}{NW3lV6pB-kiOo-1}, \nwindexdefn{\nwixident{echo}}{echo}{NW3lV6pB-kiOo-1}\nwindexdefn{\nwixident{empty}}{empty}{NW3lV6pB-kiOo-1}\nwindexdefn{\nwixident{exit}}{exit}{NW3lV6pB-kiOo-1}\nwindexdefn{\nwixident{liftIO}}{liftIO}{NW3lV6pB-kiOo-1} \nwlinkedidentc{echo}{NW3lV6pB-kiOo-1}, \nwlinkedidentc{empty}{NW3lV6pB-kiOo-1}, \nwlinkedidentc{exit}{NW3lV6pB-kiOo-1}, \nwlinkedidentc{liftIO}{NW3lV6pB-kiOo-1}, \nwindexdefn{\nwixident{procStrict}}{procStrict}{NW3lV6pB-kiOo-1}\nwindexdefn{\nwixident{sh}}{sh}{NW3lV6pB-kiOo-1} \nwlinkedidentc{procStrict}{NW3lV6pB-kiOo-1}, \nwlinkedidentc{sh}{NW3lV6pB-kiOo-1}) \nwindexdefn{\nwixident{Turtle}}{Turtle}{NW3lV6pB-kiOo-1}\eatline \nwused{\\{NW3lV6pB-zkSPe-1}\\{NW3lV6pB-4NdpI2-1}}\nwidentdefs{\\{{\nwixident{arguments}}{arguments}}\\{{\nwixident{echo}}{echo}}\\{{\nwixident{empty}}{empty}}\\{{\nwixident{exit}}{exit}}\\{{\nwixident{ExitCode}}{ExitCode}}\\{{\nwixident{liftIO}}{liftIO}}\\{{\nwixident{procStrict}}{procStrict}}\\{{\nwixident{sh}}{sh}}\\{{\nwixident{Shell}}{Shell}}\\{{\nwixident{Turtle}}{Turtle}}}\nwendcode{}\nwbegindocs{54}\nwdocspar \section{Package Setup} \nwenddocs{}\nwbegincode{55}\sublabel{NW3lV6pB-sQOAe-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-sQOAe-1}}}\moddef{package.yaml~{\nwtagstyle{}\subpageref{NW3lV6pB-sQOAe-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup \nwlinkedidentc{name}{NW3lV6pB-jCvHb-1}: nix-\nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} version: '0.1.0.0' synopsis: brew \nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} clone for Nix \nwlinkedidentc{description}{NW3lV6pB-3bcEWn-1}: See README at <https://github.com/nix-hackers/nix-info#readme> category: Development stability: experimental \nwlinkedidentc{homepage}{NW3lV6pB-3bcEWn-1}: https://github.com/nix-hackers/nix-\nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} github: nix-hackers/nix-\nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} author: Eric Bailey maintainer: [email protected] license: BSD3 extra-source-files: - ChangeLog.md ghc-options: -Wall dependencies: - base >=4.9 && <4.10 - aeson >=1.0 && <1.2 - string-conversions >=0.4 && <0.5 - \nwlinkedidentc{text}{NW3lV6pB-2l8uu8-1} >=1.2 && <1.3 - turtle >=1.3 && <1.4 - unordered-containers >=0.2 && <0.3 - url >=2.1 && <2.2 library: source-dirs: src exposed-modules: - \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1} - \nwlinkedidentc{NixInfo}{NW3lV6pB-3KgweQ-1}\nwlinkedidentc{.Types}{NW3lV6pB-1ClFUp-1} executables: nix-\nwlinkedidentc{info}{NW3lV6pB-JwV0T-1}: \nwlinkedidentc{main}{NW3lV6pB-23uoZy-1}: \nwlinkedidentc{Main}{NW3lV6pB-4NdpI2-1}.hs source-dirs: app dependencies: - nix-\nwlinkedidentc{info}{NW3lV6pB-JwV0T-1} 
\nwnotused{package.yaml}\nwidentuses{\\{{\nwixident{description}}{description}}\\{{\nwixident{homepage}}{homepage}}\\{{\nwixident{info}}{info}}\\{{\nwixident{Main}}{Main}}\\{{\nwixident{main}}{main}}\\{{\nwixident{name}}{name}}\\{{\nwixident{NixInfo}}{NixInfo}}\\{{\nwixident{NixInfo.Types}}{NixInfo.Types}}\\{{\nwixident{text}}{text}}}\nwindexuse{\nwixident{description}}{description}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{homepage}}{homepage}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{info}}{info}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{Main}}{Main}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{main}}{main}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{name}}{name}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{NixInfo}}{NixInfo}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{NixInfo.Types}}{NixInfo.Types}{NW3lV6pB-sQOAe-1}\nwindexuse{\nwixident{text}}{text}{NW3lV6pB-sQOAe-1}\nwendcode{}\nwbegindocs{56}\nwdocspar \nwenddocs{}\nwbegincode{57}\sublabel{NW3lV6pB-2OJOfk-1}\nwmargintag{{\nwtagstyle{}\subpageref{NW3lV6pB-2OJOfk-1}}}\moddef{Setup.hs~{\nwtagstyle{}\subpageref{NW3lV6pB-2OJOfk-1}}}\endmoddef\nwstartdeflinemarkup\nwenddeflinemarkup import Distribution.Simple \nwlinkedidentc{main}{NW3lV6pB-23uoZy-1} :: IO () \nwlinkedidentc{main}{NW3lV6pB-23uoZy-1} = defaultMain \nwnotused{Setup.hs}\nwidentuses{\\{{\nwixident{main}}{main}}}\nwindexuse{\nwixident{main}}{main}{NW3lV6pB-2OJOfk-1}\nwendcode{} \nwixlogsorted{c}{{app/Main.hs}{NW3lV6pB-zkSPe-1}{\nwixd{NW3lV6pB-zkSPe-1}}}% \nwixlogsorted{c}{{Data Types}{NW3lV6pB-17s3Rd-1}{\nwixd{NW3lV6pB-17s3Rd-1}\nwixu{NW3lV6pB-1ClFUp-1}\nwixu{NW3lV6pB-4NdpI2-1}}}% \nwixlogsorted{c}{{FromJSON Instances}{NW3lV6pB-43l1LQ-1}{\nwixd{NW3lV6pB-43l1LQ-1}\nwixu{NW3lV6pB-1ClFUp-1}\nwixu{NW3lV6pB-4NdpI2-1}}}% \nwixlogsorted{c}{{hide Prelude.putStrLn}{NW3lV6pB-3agbDv-1}{\nwixu{NW3lV6pB-3KgweQ-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-3agbDv-1}}}% \nwixlogsorted{c}{{import Data.Aeson}{NW3lV6pB-2HZloV-1}{\nwixu{NW3lV6pB-zkSPe-1}\nwixd{NW3lV6pB-2HZloV-1}\nwixu{NW3lV6pB-33WSsm-1}}}% \nwixlogsorted{c}{{import Data.Aeson.Encoding}{NW3lV6pB-2l8uu8-1}{\nwixd{NW3lV6pB-2l8uu8-1}\nwixu{NW3lV6pB-33WSsm-1}}}% \nwixlogsorted{c}{{import Data.Aeson.TH}{NW3lV6pB-47ZaoI-1}{\nwixd{NW3lV6pB-47ZaoI-1}\nwixu{NW3lV6pB-33WSsm-1}}}% \nwixlogsorted{c}{{import Data.Text.IO}{NW3lV6pB-WlLXm-1}{\nwixu{NW3lV6pB-3KgweQ-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-WlLXm-1}}}% \nwixlogsorted{c}{{import traverse\_, catMaybes}{NW3lV6pB-3lPV08-1}{\nwixu{NW3lV6pB-3KgweQ-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-3lPV08-1}}}% \nwixlogsorted{c}{{import Turtle}{NW3lV6pB-kiOo-1}{\nwixu{NW3lV6pB-zkSPe-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-kiOo-1}}}% \nwixlogsorted{c}{{LambdaCase}{NW3lV6pB-TcIAL-1}{\nwixu{NW3lV6pB-zkSPe-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-TcIAL-1}}}% \nwixlogsorted{c}{{magically derive ToJSON and FromJSON instances}{NW3lV6pB-4ICBye-1}{\nwixd{NW3lV6pB-4ICBye-1}\nwixu{NW3lV6pB-43l1LQ-1}}}% \nwixlogsorted{c}{{main}{NW3lV6pB-23uoZy-1}{\nwixd{NW3lV6pB-23uoZy-1}\nwixu{NW3lV6pB-zkSPe-1}\nwixu{NW3lV6pB-4NdpI2-1}}}% \nwixlogsorted{c}{{Meta}{NW3lV6pB-3bcEWn-1}{\nwixu{NW3lV6pB-17s3Rd-1}\nwixd{NW3lV6pB-3bcEWn-1}}}% \nwixlogsorted{c}{{NixInfo.Types Imports}{NW3lV6pB-33WSsm-1}{\nwixu{NW3lV6pB-1ClFUp-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-33WSsm-1}}}% \nwixlogsorted{c}{{nixQuery}{NW3lV6pB-1oSFZX-1}{\nwixd{NW3lV6pB-1oSFZX-1}\nwixu{NW3lV6pB-zkSPe-1}\nwixu{NW3lV6pB-4NdpI2-1}}}% \nwixlogsorted{c}{{NixURL}{NW3lV6pB-39G3ut-1}{\nwixu{NW3lV6pB-17s3Rd-1}\nwixd{NW3lV6pB-39G3ut-1}}}% 
\nwixlogsorted{c}{{OverloadedStrings}{NW3lV6pB-2Z5QZX-1}{\nwixu{NW3lV6pB-1ClFUp-1}\nwixu{NW3lV6pB-zkSPe-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-2Z5QZX-1}}}% \nwixlogsorted{c}{{Package}{NW3lV6pB-JwV0T-1}{\nwixu{NW3lV6pB-17s3Rd-1}\nwixd{NW3lV6pB-JwV0T-1}}}% \nwixlogsorted{c}{{package.yaml}{NW3lV6pB-sQOAe-1}{\nwixd{NW3lV6pB-sQOAe-1}}}% \nwixlogsorted{c}{{PackageInfo}{NW3lV6pB-jCvHb-1}{\nwixu{NW3lV6pB-17s3Rd-1}\nwixd{NW3lV6pB-jCvHb-1}}}% \nwixlogsorted{c}{{PackageList}{NW3lV6pB-3Pyhm3-1}{\nwixu{NW3lV6pB-17s3Rd-1}\nwixd{NW3lV6pB-3Pyhm3-1}}}% \nwixlogsorted{c}{{printPackage}{NW3lV6pB-esBWi-1}{\nwixd{NW3lV6pB-esBWi-1}\nwixu{NW3lV6pB-3KgweQ-1}\nwixu{NW3lV6pB-4NdpI2-1}}}% \nwixlogsorted{c}{{script/nix-info}{NW3lV6pB-4NdpI2-1}{\nwixd{NW3lV6pB-4NdpI2-1}}}% \nwixlogsorted{c}{{Setup.hs}{NW3lV6pB-2OJOfk-1}{\nwixd{NW3lV6pB-2OJOfk-1}}}% \nwixlogsorted{c}{{shebang}{NW3lV6pB-3YxVZb-1}{\nwixd{NW3lV6pB-3YxVZb-1}\nwixu{NW3lV6pB-4NdpI2-1}}}% \nwixlogsorted{c}{{src/NixInfo.hs}{NW3lV6pB-3KgweQ-1}{\nwixd{NW3lV6pB-3KgweQ-1}}}% \nwixlogsorted{c}{{src/NixInfo/Types.hs}{NW3lV6pB-1ClFUp-1}{\nwixd{NW3lV6pB-1ClFUp-1}}}% \nwixlogsorted{c}{{TemplateHaskell}{NW3lV6pB-2PXGaB-1}{\nwixu{NW3lV6pB-1ClFUp-1}\nwixu{NW3lV6pB-4NdpI2-1}\nwixd{NW3lV6pB-2PXGaB-1}}}% \nwixlogsorted{i}{{\nwixident{arguments}}{arguments}}% \nwixlogsorted{i}{{\nwixident{branch}}{branch}}% \nwixlogsorted{i}{{\nwixident{broken}}{broken}}% \nwixlogsorted{i}{{\nwixident{catMaybes}}{catMaybes}}% \nwixlogsorted{i}{{\nwixident{cs}}{cs}}% \nwixlogsorted{i}{{\nwixident{Data.Aeson}}{Data.Aeson}}% \nwixlogsorted{i}{{\nwixident{Data.Aeson.Encoding}}{Data.Aeson.Encoding}}% \nwixlogsorted{i}{{\nwixident{Data.Aeson.TH}}{Data.Aeson.TH}}% \nwixlogsorted{i}{{\nwixident{Data.Foldable}}{Data.Foldable}}% \nwixlogsorted{i}{{\nwixident{Data.HashMap.Lazy}}{Data.HashMap.Lazy}}% \nwixlogsorted{i}{{\nwixident{Data.Maybe}}{Data.Maybe}}% \nwixlogsorted{i}{{\nwixident{Data.Text}}{Data.Text}}% \nwixlogsorted{i}{{\nwixident{Data.Text.IO}}{Data.Text.IO}}% \nwixlogsorted{i}{{\nwixident{defaultOptions}}{defaultOptions}}% \nwixlogsorted{i}{{\nwixident{deriveJSON}}{deriveJSON}}% \nwixlogsorted{i}{{\nwixident{description}}{description}}% \nwixlogsorted{i}{{\nwixident{downloadPage}}{downloadPage}}% \nwixlogsorted{i}{{\nwixident{echo}}{echo}}% \nwixlogsorted{i}{{\nwixident{empty}}{empty}}% \nwixlogsorted{i}{{\nwixident{exit}}{exit}}% \nwixlogsorted{i}{{\nwixident{ExitCode}}{ExitCode}}% \nwixlogsorted{i}{{\nwixident{exportURL}}{exportURL}}% \nwixlogsorted{i}{{\nwixident{FromJSON}}{FromJSON}}% \nwixlogsorted{i}{{\nwixident{HM}}{HM}}% \nwixlogsorted{i}{{\nwixident{homepage}}{homepage}}% \nwixlogsorted{i}{{\nwixident{hydraPlatforms}}{hydraPlatforms}}% \nwixlogsorted{i}{{\nwixident{importURL}}{importURL}}% \nwixlogsorted{i}{{\nwixident{info}}{info}}% \nwixlogsorted{i}{{\nwixident{liftIO}}{liftIO}}% \nwixlogsorted{i}{{\nwixident{longDescription}}{longDescription}}% \nwixlogsorted{i}{{\nwixident{Main}}{Main}}% \nwixlogsorted{i}{{\nwixident{main}}{main}}% \nwixlogsorted{i}{{\nwixident{maintainers}}{maintainers}}% \nwixlogsorted{i}{{\nwixident{Meta}}{Meta}}% \nwixlogsorted{i}{{\nwixident{meta}}{meta}}% \nwixlogsorted{i}{{\nwixident{name}}{name}}% \nwixlogsorted{i}{{\nwixident{Network.URL}}{Network.URL}}% \nwixlogsorted{i}{{\nwixident{NixInfo}}{NixInfo}}% \nwixlogsorted{i}{{\nwixident{NixInfo.Types}}{NixInfo.Types}}% \nwixlogsorted{i}{{\nwixident{nixQuery}}{nixQuery}}% \nwixlogsorted{i}{{\nwixident{NixURL}}{NixURL}}% \nwixlogsorted{i}{{\nwixident{outputsToInstall}}{outputsToInstall}}% 
\nwixlogsorted{i}{{\nwixident{Package}}{Package}}%
\nwixlogsorted{i}{{\nwixident{PackageInfo}}{PackageInfo}}%
\nwixlogsorted{i}{{\nwixident{PackageList}}{PackageList}}%
\nwixlogsorted{i}{{\nwixident{path}}{path}}%
\nwixlogsorted{i}{{\nwixident{platforms}}{platforms}}%
\nwixlogsorted{i}{{\nwixident{position}}{position}}%
\nwixlogsorted{i}{{\nwixident{priority}}{priority}}%
\nwixlogsorted{i}{{\nwixident{procStrict}}{procStrict}}%
\nwixlogsorted{i}{{\nwixident{putStrLn}}{putStrLn}}%
\nwixlogsorted{i}{{\nwixident{sh}}{sh}}%
\nwixlogsorted{i}{{\nwixident{Shell}}{Shell}}%
\nwixlogsorted{i}{{\nwixident{system}}{system}}%
\nwixlogsorted{i}{{\nwixident{T}}{T}}%
\nwixlogsorted{i}{{\nwixident{Text}}{Text}}%
\nwixlogsorted{i}{{\nwixident{text}}{text}}%
\nwixlogsorted{i}{{\nwixident{traverse{\_}}}{traverse:un}}%
\nwixlogsorted{i}{{\nwixident{Turtle}}{Turtle}}%
\nwixlogsorted{i}{{\nwixident{updateWalker}}{updateWalker}}%
\nwixlogsorted{i}{{\nwixident{URL}}{URL}}%
\nwbegindocs{58}\nwdocspar

\section{Chunks}
\nowebchunks

\section{Index}
\nowebindex

\newpage

\printbibliography

\end{document}
\nwenddocs{}
{ "alphanum_fraction": 0.7465435187, "avg_line_length": 85.3306581059, "ext": "tex", "hexsha": "a38286ef692638140aabaa5157fe62df0c2360ad", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a6bea0b208adce753e071595ccb3e4120ba8f799", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "nix-hackers/nix-info", "max_forks_repo_path": "tex/nix-info.tex", "max_issues_count": 15, "max_issues_repo_head_hexsha": "a6bea0b208adce753e071595ccb3e4120ba8f799", "max_issues_repo_issues_event_max_datetime": "2017-03-24T06:27:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-18T15:17:57.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "nix-hackers/nix-info", "max_issues_repo_path": "tex/nix-info.tex", "max_line_length": 1640, "max_stars_count": 4, "max_stars_repo_head_hexsha": "a6bea0b208adce753e071595ccb3e4120ba8f799", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "nix-hackers/nix-info", "max_stars_repo_path": "tex/nix-info.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-15T13:25:06.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-23T06:13:07.000Z", "num_tokens": 24590, "size": 53161 }
\section{\scshape Work plan}\label{sec:workplan}

\subsection{Tasks}
\begin{frame}{Tasks}
\begin{footnotesize}
\begin{itemize}
\item Definition of use cases (1 month)
\item Review of the state of the art (1 month)
\item Evaluation and selection of hardware for the testing platforms (2 months)
\item Creation of perception and learning datasets (2 months)
\item Definition of the software architecture (3 months)
\item Knowledge / skill representation (3 months)
\item Extraction of assembly knowledge from SOPs (3 months)
\item Assembly operations from structured knowledge / assembly skills (3 months)
\item Learning of new assembly operations from human demonstration (5 months)
\item Immersive human-robot cooperation using projection mapping (2 months)
\item Validation of the assembly system in industrial conditions (4 months)
\item Writing of the thesis (7 months)
\end{itemize}
\end{footnotesize}
\end{frame}

\subsection{Gantt chart}
\begin{frame}{Gantt chart}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{gantt-chart}
\caption{Gantt chart}
\end{figure}
\end{frame}
{ "alphanum_fraction": 0.7520143241, "avg_line_length": 36.0322580645, "ext": "tex", "hexsha": "ec395b1626aad20db75c91c0fd606c7d5a4627ee", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c144ec287e2d4ed934586b031485cdbda5495d1e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "carlosmccosta/prodei-research-planning-presentation", "max_forks_repo_path": "tex/sections/workplan.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c144ec287e2d4ed934586b031485cdbda5495d1e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "carlosmccosta/prodei-research-planning-presentation", "max_issues_repo_path": "tex/sections/workplan.tex", "max_line_length": 83, "max_stars_count": null, "max_stars_repo_head_hexsha": "c144ec287e2d4ed934586b031485cdbda5495d1e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "carlosmccosta/prodei-research-planning-presentation", "max_stars_repo_path": "tex/sections/workplan.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 295, "size": 1117 }
\documentclass[11pt]{amsart}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{tikz}
\usepackage{fp} % Prevents issues with arithmetic overflow.
\usepackage{pgfplots}
\usepackage{xcolor}
\usepackage[hidelinks]{hyperref}
\usepackage[section]{placeins} % Prevents figure placement outside of section.
\usetikzlibrary{arrows, fixedpointarithmetic}

\newcommand{\shaft}{\mathrm{shaft}}

\definecolor{matlab1}{rgb}{0, 0.4470, 0.7410}
\definecolor{matlab2}{rgb}{0.8500, 0.3250, 0.0980}
\definecolor{matlab3}{rgb}{0.9290, 0.6940, 0.1250}
\definecolor{matlab4}{rgb}{0.4940, 0.1840, 0.5560}
\definecolor{matlab5}{rgb}{0.4660, 0.6740, 0.1880}
\definecolor{matlab6}{rgb}{0.3010, 0.7450, 0.9330}
\definecolor{matlab7}{rgb}{0.6350, 0.0780, 0.1840}

\title{MEKF}
\author{Makani Technologies LLC}
\date{October 2016\; (DRAFT)}

\begin{document}

\maketitle

\section{States}

\begin{equation}
  C_g^b \approx \hat C_g^b (I + [\psi_{gb}^g]_{\times})
\end{equation}

\begin{equation}
  \vec{b} \approx \hat{b} + \delta \vec{b}
\end{equation}

\section{Propagate}

\begin{equation}
  \delta \vec{\theta} = (\vec{\omega} - \vec{b}) \cdot \Delta t
\end{equation}

\begin{equation}
  \delta q = q(\delta \vec{\theta})
\end{equation}

\begin{equation}
  {q_g^b}_{k|k-1} = {q_g^b}_{k-1|k-1} \star \delta q
\end{equation}

\begin{equation}
  \delta \vec{x} = [\delta \vec{\theta}, \delta \vec{b}]^T
\end{equation}

\begin{equation}
  \mathbf{F} =
  \begin{bmatrix}
    \mathbf{C}(\delta q) & \Delta t \cdot \mathbf{I} \\
    \mathbf{0} & \mathbf{I} \\
  \end{bmatrix}
\end{equation}

\begin{equation}
  \mathbf{\hat{x}}_{k+1|k} = \mathbf{F} \mathbf{\hat{x}}_{k|k} + \mathbf{B} \mathbf{w}_{k}
\end{equation}

\begin{equation}
  \mathbf{P} = \mathbf{U} \mathbf{D} \mathbf{U}^T
\end{equation}

\begin{equation}
  \mathbf{F}_k = \frac{\partial \mathbf{f}}{\partial \mathbf{x}} \bigg|_{\mathbf{\hat{x}}_{k-1|k-1}}
\end{equation}

\begin{align}
  P_{k+1|k} &= F_k P_{k|k} F_k^T + B Q_k B^T \\
            &= F_k U_{k|k} D_{k|k} U_{k|k}^T F_k^T + B Q_k B^T \\
            &= W D_w W^T
\end{align}

\begin{equation}
  \mathbf{D}_w =
  \begin{bmatrix}
    \mathbf{D}_{k|k} & \mathbf{0} \\
    \mathbf{0} & \mathbf{D}_q \\
  \end{bmatrix}
\end{equation}

\begin{equation}
  \mathbf{W} = [\mathbf{F} \mathbf{U}, \mathbf{B}]
\end{equation}

\section{Correct}

\begin{equation}
  h(\mathbf{x}) \equiv C_g^b \vec{v}_g \approx \hat{C}_g^b (\mathbf{I} + [\delta \vec{\theta}]_{\times}) \vec{v}_g
\end{equation}

\begin{equation}
  \frac{\partial h}{\partial \delta \vec{\theta}} = -[\hat{C}_g^b \vec{v}_g]_{\times}
\end{equation}

\end{document}
{ "alphanum_fraction": 0.6322052808, "avg_line_length": 24.4454545455, "ext": "tex", "hexsha": "db3cd051bf1278a1f3ac78cdafd0ee66ce6c4adb", "lang": "TeX", "max_forks_count": 107, "max_forks_repo_forks_event_max_datetime": "2022-03-18T09:00:14.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-10T17:29:30.000Z", "max_forks_repo_head_hexsha": "c94d5c2b600b98002f932e80a313a06b9285cc1b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "leozz37/makani", "max_forks_repo_path": "documentation/control/estimator/mekf.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c94d5c2b600b98002f932e80a313a06b9285cc1b", "max_issues_repo_issues_event_max_datetime": "2020-05-22T05:22:35.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-22T05:22:35.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "leozz37/makani", "max_issues_repo_path": "documentation/control/estimator/mekf.tex", "max_line_length": 100, "max_stars_count": 1178, "max_stars_repo_head_hexsha": "c94d5c2b600b98002f932e80a313a06b9285cc1b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "leozz37/makani", "max_stars_repo_path": "documentation/control/estimator/mekf.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T14:59:35.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-10T17:15:42.000Z", "num_tokens": 1141, "size": 2689 }
\begin{refsection}

\chapter{The Soft Massive Spring}

\section*{Objectives}
\begin{enumerate}
\item To determine the spring constant and the mass correction factor for the given soft massive spring by the static (equilibrium extension) method.
\item To determine the spring constant and the mass correction factor for the given soft massive spring by the dynamic (spring-mass oscillations) method.
\item To determine the frequency of oscillations of the spring with one end fixed and the other end free, i.e.\ with zero mass attached.
\item To study the longitudinal stationary waves and to determine the fundamental frequency of oscillations of the spring with both ends fixed.
\end{enumerate}

\section*{Introduction}

Springs are familiar objects with many everyday applications, ranging from retractable ballpoint pens to weighing scales and metro handles. Let us consider an ideal spring: such a spring has an equilibrium length -- that is, a length when no external force is applied -- and resists any change to this length through a restoring force.

%\todo[inline]{Diagram showing forces and springs.}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.4\textwidth]{figs/springForce.png}
\caption{The restoring force experienced by the mass due to the spring is proportional to the amount the spring is stretched from its equilibrium position.}
\label{fig:springForces}
\end{figure}

In such ideal springs, the force required to stretch (or compress) the spring is assumed to be proportional to the amount by which it is stretched (with compression being negative stretching). When one end is fixed, this stretch is just the displacement of the free end. From now on we will assume that one end is fixed (which implies that we must use the formulas carefully when both ends are free). The restoring force is therefore proportional to this displacement, i.e.\ $F \propto x$. By defining a constant of proportionality $k$, known as the spring constant, we can write down \textit{Hooke's Law}:
\begin{equation}
F = - k x,
\label{hooke}
\end{equation}
where the negative sign shows that the force is in the direction opposite to the displacement of the end, i.e.\ it is a restoring force: when the spring is stretched, this force tends to compress it, and vice versa. Experiment reveals that for relatively small displacements, almost all \textit{real} springs obey Equation (\ref{hooke}).

Let us assume that the spring is placed horizontally on a frictionless surface and attached to a wall on one end and an object of some mass $M$ on the other, which we can pull or push. If we displace the object by some $x$ and let go, it will experience a restoring force due to the spring, which will change as the object begins moving. Combining Newton's Second Law and Equation (\ref{hooke}), we can see that the acceleration of the object is given by
\begin{equation}
\begin{aligned}
a &= \frac{F}{M} \\
\dv[2]{x}{t}&=-\frac{k}{M} x
\end{aligned}
\label{diffEqnSimpleSpring}
\end{equation}
where we have chosen $x = 0$ as the equilibrium position of the mass (in which the spring has its unstretched, or equilibrium, length).

%\todo[inline]{Diagram showing horizontal oscillations.}

It is not difficult to see that any function whose second derivative is proportional to itself, with a negative constant of proportionality, is a solution to the differential equation above. Both $\sin (\omega t)$ and $\cos (\omega t)$ are such functions. From the theory of differential equations, we know that the general solution to this differential equation is a linear combination of two linearly independent solutions.
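As a quick check that these are indeed solutions, substituting $x(t) = \sin(\omega t)$ into Equation (\ref{diffEqnSimpleSpring}) gives
\begin{equation*}
\dv[2]{x}{t} = -\omega^{2} \sin(\omega t) = -\omega^{2}\, x,
\end{equation*}
which matches $-(k/M)\, x$ provided $\omega^{2} = k/M$; the same holds for $\cos(\omega t)$.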
Thus we can write
\begin{equation*}
x(t) = A \sin(\omega t) + B \cos(\omega t),
\end{equation*}
where $\omega$ -- known as the angular frequency of vibration -- must be equal to $\sqrt{k/M}$ for the left and right sides to match, and $A$ and $B$ are constants determined by the initial conditions (initial position and initial velocity).

Observing the solution above, it should be clear to you (since $\sin$ and $\cos$ are periodic functions) that there must be some time $T$ after which $x(t)$ comes back to itself, i.e.\ $x(t+T) = x(t)$. The motion is thus periodic, with period equal to that of the sinusoidal functions $\sin(\omega t)$ and $\cos(\omega t)$, which is
\begin{equation}
T = \frac{2 \pi}{\omega} = 2\pi \sqrt{\frac{M}{k}}.
\label{masslessTime}
\end{equation}
The mass thus oscillates about its equilibrium position with a time period $T$ that is -- at least for small amplitudes -- independent of amplitude.

%\todo[inline]{Diagram showing twice the mass and twice the displacement under gravity.}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.3\textwidth]{figs/springMassVertical.png}
\caption{Hanging vertically, if a mass $m$ causes the spring to extend by $y_0$, a mass of $2m$ will cause the same spring to extend by $2 y_0$.}
\label{fig:springMassVertical}
\end{figure}

We could now imagine hanging the spring and mass vertically. While the spring is assumed to be massless (as it is ideal), the massive object experiences a downward force due to gravity of $Mg$. At static equilibrium, the spring would have stretched enough to balance this force. Thus,
\begin{equation}
y_0 = \frac{Mg}{k}.
\label{masslessExt}
\end{equation}
We could similarly displace the mass gently from this position, and we would again see oscillations, only this time the oscillations will be about the point $y_0$, which is the new equilibrium point.

\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{figs/springOscillations.png}
\caption{If the vertically hanging spring is displaced slightly, it will begin to oscillate with a time period $T$ about the new equilibrium point $y_0$, as shown.}
\label{fig:springOscillations}
\end{figure}

\begin{question}
\paragraph{Question:} Write out Newton's Law as a differential equation in this case (i.e.\ when gravity is included). Show that it can be written as:
\begin{equation*}
\dv[2]{y}{t} = -\omega^2 (y - y_0)
\end{equation*}
\paragraph{Question:} By making a suitable substitution and comparing it with Equation (\ref{diffEqnSimpleSpring}), show that the solutions have the same form in a shifted variable.
\paragraph{Question:} Show that the time period $T$ is the same as in the preceding case (of ``static'' equilibrium).
\end{question}

\subsection*{Massive springs}

So far, we have assumed our spring to be ``massless'', in that its mass may be neglected. Whether or not this is a reasonable assumption does not depend on the spring alone, but on external factors. On inspection, it should be clear to you that the top and the bottom of a spring hung vertically do not experience the same downward or restoring forces, even at static equilibrium.\footnote{In general, a spring is not a ``rigid'' body.} If this ``droop'' (caused by the mass of the spring) is negligible compared to the stretch caused by external masses, then the spring may be effectively considered as massless. Thus it depends roughly on the ratio of the mass of the spring to the mass attached to it.
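To get a feel for the numbers, consider some purely illustrative values (not measurements of the springs used in this experiment): a spring of mass $m_s = 40\ \mathrm{g}$ carrying a load of $M = 400\ \mathrm{g}$. Anticipating the results of the next two subsections, the mass of the spring enters the static and dynamic formulas only through the corrections $m_s/2$ and $m_s/3$ added to $M$, so the corresponding fractional changes are about
\begin{equation*}
\frac{m_s/2}{M} = \frac{20\ \mathrm{g}}{400\ \mathrm{g}} = 5\% \qquad \text{and} \qquad \sqrt{1 + \frac{m_s/3}{M}} - 1 \approx 1.7\%,
\end{equation*}
and the massless-spring formulas are off by only a few per cent. For a load of $M = 40\ \mathrm{g}$, however, the same corrections grow to roughly $50\%$ and $15\%$, and the mass of the spring can clearly no longer be neglected.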
\subsubsection*{The static case} Let us consider a point very close to the top of the spring: it will experience a (small) restoring force from the small amount of spring above it, and a much larger downward force from the mass of spring under it. Similarly, consider a point very close to the free end of the spring: it will experience a larger restoring force (as there is more ``spring'' above it) than downward gravitational force as there is very little mass under it. As a result, the spring does not stretch uniformly; the coils near the top are further separated than those near the bottom. We would thus expect some form of ``correction factor'' to the mass term in Equation (\ref{masslessExt}). Consider a spring with some mass $m_s$. Let $L_0$ be the length of the spring when it is kept horizontally with no forces extending it, and $L_M$ be the length of spring when it is hung vertically with a mass $M$ attached to its lower end. We can define the ``equilibrium extension'' $S_M$ as \begin{equation} S_M = L_M - L_0. \end{equation} It is possible to solve the problem theoretically and show that the massive spring acts effectively like an ideal spring with a mass $m_s/2$ attached to its end. In other words, if a massive spring is hung vertically, and an ideal spring with mass $m_s/2$ is attached to its end, they will both extend to the same distance. Thus, Equation (\ref{masslessExt}) is modified to \begin{equation} S_M = \left( M + \frac{m_s}{2} \right)\left(\frac{g}{k} \right) \label{Sm} \end{equation} where the factor $m_s/2$ is known as the \textbf{static} mass correction factor. \subsubsection*{The dynamic case} Consider now the case of an ideal spring oscillating with a mass attached at its end. The oscillating mass $M$ will have a kinetic energy \begin{equation*} K_\text{mass} = \frac{1}{2} M v^2 \end{equation*} However, if the spring itself is massive, since different parts of it move at different velocities, it will also have some kinetic energy associated with it. To find the total kinetic energy of the spring, one could imagine an infinitesimal mass element on the spring moving at some velocity and integrate its contribution to the kinetic energy to get the total energy:\footnote{Though this is slightly advanced, it is not too difficult to do at your level. We urge you to attempt to show this theoretically.} \begin{equation} K_\text{spring} = \frac{1}{6} m_s v^2 \end{equation} \begin{imp} This also explains why the spring oscillates even when it has no external mass attached. \end{imp} \begin{question} \paragraph{Question:} Show, from the earlier expressions for kinetic energy, that the spring behaves like an ideal spring with a mass of $m_s/3$ attached to it. \end{question} Thus, when an external mass $M$ is attached to the spring, it behaves as if it is effectively an ideal spring with a mass $M+ m_s/3$ attached to its end. Thus, Equation (\ref{masslessTime}) will need to be modified. The resulting time period $T$ for the oscillations of a massive spring is given by \begin{equation} T = 2\pi \sqrt{\cfrac{\left(M + \cfrac{m_s}{3}\right)}{k}}. \label{Tm} \end{equation} The factor $m_s/3$ is the \textbf{dynamic} mass correction factor. \begin{imp} Note that this factor is different from the mass correction factor in the previous (static) case. The reason for this difference is that they arise from two different processes. 
In the first case, the effective mass comes from the fact that the different mass-elements comprising the spring experience different forces, which affects the effective \textit{extension}. In the dynamic case where these different mass-elements are \textit{moving}, the term $m_s/3$ comes from the fact that they have different velocities, which affects the effective \textit{kinetic energy}. %\todo[inline]{Could still use some work.} \end{imp} \begin{question} \paragraph{Question:} Show that if the attached mass $M$ is much larger than $m_s$, Equations (\ref{Sm}) and (\ref{Tm}) give the same results as for an ideal spring. \end{question} When no additional mass is attached to the spring (i.e.\ $M=0$), we can define a corresponding frequency for the massive spring \begin{equation} f_0 = \frac{1}{T} = \frac{1}{2\pi} \sqrt{\frac{3k}{m_s}}. \end{equation} \subsubsection*{Standing waves on a massive spring} If we stretch the spring between two fixed points, it can be considered as a system with a uniform mass density (say, a certain number of coils per centimetre). When vibrated with some periodic forcing, this system has its own natural frequencies, very much like sound waves in a hollow pipe closed at both ends.\footnote{The pipe being ``closed'' implies that the amplitude of the waves at both ends is zero. In this case, the top end of the spring is fixed, and the bottom end moves with an amplitude so small compared to the length of the spring that it can effectively be considered to be fixed.} \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.8\textwidth]{figs/wavesInAHalfOpenPipe.png} \caption{Waves in a pipe with one closed and one open end.} \label{fig:wavesInAHalfOpenPipe} \end{subfigure}\hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.8\textwidth]{figs/wavesInAClosedPipe.png} \caption{Waves in a pipe with both ends closed.} \label{fig:wavesInAClosedPipe} \end{subfigure} \caption{Variation of amplitude in an air column: Sound waves are characterised by a series of alternate compressions and rarefactions. When an end is closed, the air particles there do not move, and thus that point is a node. Conversely, when the end is open, the particles have the maximum amplitude, and so that point is an antinode.} \label{fig:wavesInAPipe} \end{figure} When this system is vibrated at one of these ``natural'' frequencies, standing (longitudinal) waves are seen to form, with clear nodes -- i.e.\ points on the spring that remain completely stationary. Two important points can be noticed about the nodes in this case: \begin{enumerate}[label=(\alph*)] \item They divide the length of the spring into equal parts. \item Their number increases by one as you move from one natural frequency to the next. \end{enumerate} \begin{question} \paragraph{Question:} Drawing out a diagram like Figure (\ref{fig:wavesInAPipe}), show that the wavelength of a wave with $n$ nodes is given by \begin{equation} \lambda_n = \left(\frac{2}{n+1}\right) L , \quad \quad n=0,1,2\hdots \end{equation} where $L$ is the length of the spring and $n$ the number of nodes (excluding the fixed endpoints). \end{question} We know that waves satisfy the following equation that relates their wavelengths and frequencies, \begin{equation} v = f_n \lambda_n \end{equation} where $v$ is the velocity of the wave.
As this is a constant for a given configuration, it follows that \begin{equation} f_n = \frac{v}{\lambda_n} = \frac{(n+1)\,v}{2L}, \qquad n=0,1,2\hdots \end{equation} Labelling the modes instead by the number of loops, $m = n+1$, the natural frequencies form a harmonic series, \begin{equation*} f_1 = \frac{v}{2L}, \quad f_2 = \frac{v}{L}, \quad \hdots, \quad f_m = m f_1, \end{equation*} where $f_1$ is the fundamental frequency. \section*{Experimental Setup} \subsection*{Apparatus} \begin{enumerate}[label=\arabic*)] \itemsep0em \item A set of soft massive springs \item A long and heavy retort stand with a clamp at the top end \item A set of masses with hooks \item A signal generator (\textit{Equip-tronics QT-210}) \item A dual output power amplifier with the connecting cords \item A mechanical vibrator \item A digital multimeter (\textit{Victor VC97}) \item A digital stopwatch (\textit{Racer}) \item Measuring tapes \item A set of measuring scales (1.0 m, 0.6 m and 0.3 m) \end{enumerate} \subsection*{Description} \begin{description} \item[Digital Multimeter (\textit{Victor VC97})] A multimeter is an instrument used to measure multiple parameters like voltage, current, and resistance. You will be using the multimeter to measure the frequency of the signal. You will have to use the input sockets marked COM and V/$\Omega$ to do this. Note that the two input sockets marked mA and 10A are for current measurement. You will not be using those. Connect the banana cables to COM and V/$\Omega$, and select the Hz setting on the multimeter. \item[Signal Generator] A signal generator is used to generate simple repetitive waveforms in the form of an alternating electrical wave. Typically, it will produce simple waveforms like sine, square, and triangular waves, and will allow you to adjust the frequency and amplitude of these signals. The instrument given to you generates sine, sawtooth, and square waveforms. The output may be taken from the respective output sockets through banana cables. The frequency can be adjusted by turning the frequency dial and setting the Range knob to the appropriate multiplier. For example, turning the dial to 3 and selecting the 100X setting on the Range knob would provide an output waveform with a frequency of 300 Hz. Similarly, the amplitude knob varies the amplitude of the output waveform. \begin{tip} The DC Offset button makes the signal oscillate about a constant non-zero DC voltage (instead of zero). Normally, this shouldn't affect the working of the setup. However, when a multimeter is connected to measure the frequency, it will not be able to do so, as it is calibrated to measure signals alternating about zero. \end{tip} \item[Mechanical Vibrator] The mechanical vibrator converts the electrical signals from the signal generator into mechanical vibrations, similar to how a speaker works. When using the spring as a uniform mass distribution, its upper end should be clamped to the top of the stand, and its lower end should be clamped to the crocodile clip fixed on the vibrator. This end will be subjected to an up and down harmonic motion which will constitute the ``forcing'' of the spring. It must be ensured that the amplitude of this motion is small enough that both ends can be considered to be fixed. \item[Power Amplifier] The Power Amplifier is used to amplify the signals from the signal generator before they are sent to the mechanical vibrator. The reason for this is two-fold: the signal itself is not sufficiently strong, and even if it were, a signal with too large an amplitude could damage the mechanical vibrator. In order to prevent this, the amplifier has been equipped with two fuses to protect the vibrator.
\end{description} \subsection*{Precautions} \begin{itemize} \item Don't overload the spring, or you will stretch it beyond its elastic limit and damage it. \item Keep the amplitude of oscillations of the spring-mass system just sufficient to get the required number of oscillations. \item The amplitude of vibrations should be carefully adjusted to the required level using the amplitude knob of the signal generator so as to not blow the fuse in the power amplifier. The brighter and more frequently the indicator LEDs flash, the closer the fuse is to blowing. \end{itemize} \section*{Procedure} \subsection*{Part A} In this part, you will use the static method to determine the spring constant of a massive spring, by measuring the equilibrium extension of a given spring for different attached masses. \begin{enumerate} \item Measure the length $L_0$ of the spring, keeping it horizontal on a table in an unstretched (all the coils touching each other) position. \item Hang the spring from the clamp fixed to the top end of the retort stand. The spring extends under its own weight. \item Take appropriate masses and attach them to the lower end of the spring. \item Measure the length $L_M$ of the spring in each case. (For better results you may repeat each measurement two or three times.) Thus determine the equilibrium extension $S_M$ for each value of the attached mass. \item Plot an appropriate graph and determine the spring constant $k$ and the mass of the spring $m_s$. \item Weigh the spring and compare its measured mass with the value of $m_s$ obtained from the graph. \end{enumerate} \begin{question} \paragraph{Question:} State and justify the selection of variables plotted on the $x$ and $y$ axes. Explain the observed behaviour and interpret the $x$ and $y$ intercepts. \end{question} \subsection*{Part B} In this part you will use the dynamic method to find the time period of a massive spring with different masses attached. The frequency of oscillations of the spring with the upper end fixed and the lower end free (i.e. with no attached mass) will be determined graphically through extrapolation. \begin{enumerate} \item Keep the spring clamped to the retort stand. \item Set the spring into oscillation without any mass attached; you will observe that the spring oscillates under the influence of its own weight. \item Attach different masses to the lower end of the spring and measure the time period of oscillations of the spring-mass system for each value of the mass attached. You may measure the time for a number of oscillations to determine the average time period. \item Perform the necessary data analysis and determine the spring constant $k$ and the mass of the spring $m_s$ using the above data. Compare these values to those obtained in Part A. \item Also determine, from the graph, the frequency $f_0$ corresponding to zero mass attached to the spring. \end{enumerate} \subsection*{Part C} In this part, you will use a mechanical vibrator to force oscillations on the spring and excite its different normal modes of vibration. Longitudinal stationary waves will thus be set up on the spring, and their frequencies will be measured. The fundamental frequency $f_1$ in this case will be compared with $f_0$ obtained in \textbf{Part B}. \begin{enumerate} \item Keep the spring clamped to the long retort stand. \item Clamp the lower end of the spring to the crocodile clip attached to the vibrator. \item Connect the output of the signal generator to the input of the mechanical vibrator through the power amplifier, using a BNC cable.
\item Connect the multimeter to the signal generator and set it to measure frequency. \begin{tip} While it might seem sensible to connect the multimeter to the mechanical vibrator, it is found that the amplification of the signal makes it difficult for the multimeter to detect its frequency. Thus, it is better to connect it directly to the signal generator. \end{tip} \item Starting from zero, slowly increase the frequency of the sinusoidal signal generated by the signal generator. At some particular frequency you will observe the formation of nodes: points on the spring which stand out clearly because they are not moving. \item Increase the frequency further and observe higher harmonics, identifying them on the basis of the number of loops you can see between the fixed ends. (If you see $n$ nodes -- or fixed points -- between the endpoints, there are $n+1$ loops.) \item Plot a graph of the frequency of each harmonic versus the corresponding number of loops. Determine the fundamental frequency $f_1$ from the slope of this graph. \item Compare this fundamental frequency $f_1$ with the frequency $f_0$ of the spring-mass system with one end fixed and zero mass attached (as determined in Part B), and show that $$f_0 = \frac{f_1}{2}$$ \end{enumerate} \begin{question} \paragraph{Question:} Can you think of why the two frequencies should be related by a factor of two? (You may use the analogy between the spring and an air column.) \end{question} \section*{References} % \begin{enumerate} % \itemsep0em % \item J. Christensen, \textit{Am. J. Phys}, 2004, 72(6), 818-828. % \item T. C. Heard, N. D. Newby Jr, Behavior of a Soft Spring, \textit{Am. J. Phys}, 45 (11), 1977, % pp. 1102-1106. % \item H. C. Pradhan, B. N. Meera, Oscillations of a Spring With Non-negligible Mass, \textit{Physics Education (India)}, 13, 1996, pp. 189-193. % \item B. N. Meera, H. C. Pradhan, Experimental Study of Oscillations of a Spring with Mass Correction, \textit{Physics Education (India)}, 13, 1996, pp. 248-255. % \item Rajesh B. Khaparde, B. N. Meera, H. C. Pradhan, Study of Stationary Longitudinal Oscillations on a Soft Spring, \textit{Physics Education (India)}, 14, 1997, pp. 130-19. % \item H. J. Pain, \textit{The Physics of Vibrations and Waves}, 2nd Ed, John Wiley \& Sons, Ltd., 1981. % \item D. Halliday, R. Resnick, J. Walker, \textit{Fundamentals of Physics}, 5th Ed, John Wiley \& Sons, Inc., 1997. % \item K. Rama Reddy, S. B. Badami, V. Balasubramanian, \textit{Oscillations and Waves}, University % Press, Hyderabad, 1994. % \end{enumerate} \nocite{khaparde_training_2008} \nocite{heard_behavior_1977} \nocite{christensen_improved_2004} \printbibliography[heading=none] \newpage \end{refsection}
%!TEX root = ./ERL Industrial Robots-rulebook.tex %-------------------------------------------------------------------- %-------------------------------------------------------------------- %-------------------------------------------------------------------- \section{Introduction to \erlir} \label{sec:Intro} The objective of the \erl is to organize several indoor robot competition events per year, ensuring a scientific competition format, around the following two challenges: \erlsr and \erlir. Those indoor robot competitions will be focused on two major challenges addressed by H2020: societal challenges (service robots helping and interacting with humans at home, especially the elderly and those with motor disabilities) and industrial leadership (industrial robots addressing the flexible factories of the future and modern automation issues). These challenges were addressed by \ro and \rockeutwo and will be extended in the \erl by building on the current version of the rule books and testbeds designed and used during RockEU2’s project lifetime. Greater automation in broader application domains than today is essential for ensuring European industry remains competitive, production processes are flexible to custom demands and factories can operate safely in harsh or dangerous environments. In the \erlir competition, robots will assist with the assembly of a drive axle - a key component of the robot itself and therefore a step towards self-replicating robots. Tasks include locating, transporting and assembling necessary parts, checking their quality and preparing them for other machines and workers. By combining the versatility of human workers and the accuracy, reliability and robustness of mobile robot assistants, the entire production process is able to be optimized. The \erlir competition is looking to make these innovative and flexible manufacturing systems, such as that required by the \rollin factory, a reality. This is the inspiration behind the challenge and the following scenario description. A more detailed account of the \erlir competition, but still targeted towards a general audience, is given in the \erlir in a Nutshell document, which gives a brief introduction to the very idea of the \erl and the \erlir competition, the underlying user story, and surveys the scenario, including the environment for user story, the tasks to be performed, and the robots targeted. Furthermore, this document gives general descriptions of the task benchmarks and the functional benchmarks that make up \erlir. The document on hand is the rule book for \erlir, and it is assumed that the reader has already read the nutshell document. The audience for the current document are teams who want to participate in the competition, the organizers of events where the \erlir competition is supposed to be executed, and the developers of simulation software, who want to provide their customers and users with ready-to-use models of the environment. The remainder of this document is structured as follows: Section \ref{sec:AwardCategories}, \emph{\textbf{award categories}} surveys the number and kind of awards that will be awarded and how the ranking of the award categories is determined based on individual benchmark results. The \emph{\textbf{testbed}} for \erlir competitions is described in some detail in the next section (Section \ref{sec:TestBed}). 
Subsections are devoted to the specification of the structure of the environment and its properties (Section \ref{ssec:StructureProperties}), to the mechanical parts and objects in the environment which can be manipulated (Section \ref {sssec:PartstoManipulate}), to objects in the environment that need to be recognized for completing the task (Section \ref{sssec:EnvironmentObjectstoRecognize}), to the networked devices embedded in the environment and accessible to the robot (Section \ref{ssec:NetworkedDevices}), and to the benchmarking equipment which we plan to install in the environment and which may impose additional constraints on the robot's behavior (equipment presenting obstacles to avoid) or add further perceptual noise (visible equipment, see Section \ref{ssec:BenchmarkingEquipment}). Next (Section \ref{sec:RobotsTeams}), we provide some specifications and constraints applying to the \emph{\textbf{robots and teams}} permitted to participate in \erlir. The \erl consortium is striving to minimize such constraints, but for reasons of safety and practicality some constraints are required. After that, the next two sections describe in detail the \emph{\textbf{task benchmarks}} (Section \ref{sec:TaskBenchmarks}) and the \emph{\textbf{functionality benchmarks}} (Section \ref{sec:FunctionalityBenchmarks}) comprising the \erlir competition, while information on scoring and ranking the performance of participating teams on each benchmark is already provided in the benchmark descriptions.%, Section \ref{sec:AwardCategories}, \emph{\textbf{award categories}} surveys the number and kind of awards that will be awarded and how the ranking of the award categories is determined based on individual benchmark results. %Last but not least, Section \ref{sec:RoawOrganization} provides details on \emph{\textbf{organizational issues}}, like the committees involved, the media to communicate with teams, qualification and setup procedures, competition schedules, and post-competition activities. %-------------------------------------------------------------------- %-------------------------------------------------------------------- %-------------------------------------------------------------------- \input{secErlirRulebookAwards} %-------------------------------------------------------------------- %-------------------------------------------------------------------- %-------------------------------------------------------------------- \clearpage\phantomsection \section{The \erlir Testbed} \label{sec:TestBed} The testbed for the \erlir competition consists of the arena (e.g. walls, workstation), networked devices and task-related objects. The robot can communicate and interact with the networked devices, which allow the robot to exert control over the testbed to a certain extent. Figure \ref{fig:rockin-n-rollin-production-area} shows the evolution of the \erlir environment from its early concept in \roaw to its implementation in the \roaw event in Lisbon in 2015 and the last \roaw event in Polimi in 2017. Participating teams should assume the competition environment to be similar to those shown in Figure \ref{fig:rockin-n-rollin-production-area}; deviations should only occur if on-site constraints (space available, safety regulations) enforce them.
% \begin{figure}[htb] \begin{center} \hfill \subfigure[Early concept]{ \scalebox{1.0}[1.0]{ \includegraphics[height=40mm,angle=0,trim=0px 0px 0px 0px,clip] {fig/AS_RoaW_Arena_v4} } \label{fig:RulebookArenaConcept} }% \hfill \subfigure[Laboratory installation]{ \scalebox{1.0}[1.0]{ \includegraphics[height=40mm,angle=0,trim=0px 100px 0px 200px,clip]% {pics/atwork/test_beds/WorkArenaBRSU.jpg}% } \label{fig:RulebookArenaLab} }% \hfill \subfigure[RoCKIn@Work 2015]{ \scalebox{.5}[.5]{% \includegraphics[height=80mm,angle=0,trim=-200px 0px -250px 0px,clip]% {fig/testbed/roaw_arena_lisbon.JPG}% }% \label{fig:RulebookArena2014} }% \hfill \subfigure[RoCKIn@Work 2017]{ \scalebox{.5}[.5]{% \includegraphics[height=80mm,angle=0]% {fig/testbed/roaw_arena_polimi.jpg}% }% \label{fig:RulebookArenaLisbon2015} }% \hfill\mbox{} \caption{The evolution of the \erlir environment} \label{fig:rockin-n-rollin-production-area} \end{center} \end{figure} \input{ssecErlirRulebookTestbedEnvironment} \input{ssecErlirRulebookTestbedObjects} %\input{ssecErlirRulebookIdentifier} \input{ssecErlirRulebookTestbedNetDevices} \input{ssecErlirRulebookTestbedCFH} \input{ssecErlirRulebookTestbedBMequipment} \clearpage\phantomsection \input{secErlirRulebookRobots} %-------------------------------------------------------------------- %-------------------------------------------------------------------- %-------------------------------------------------------------------- \newpage \section{Task Benchmarks} \label{sec:TaskBenchmarks} The following details concerning rules and procedures, as well as scoring and benchmarking methods, are common to all task benchmarks. \begin{description} \item[Rules and Procedures] Every run of each task benchmark will be preceded by a safety check, outlined as follows: % \begin{enumerate} \item The team members must ensure, and inform at least one organizing committee (OC) member present during the execution of the task, that they have a fully functional emergency stop button on the robot. Any member of the OC can ask the team to stop their robot at any time, which must be done immediately. \item A member of the OC present during the execution of the task will make sure that the robot complies with the other safety-related rules and robot specifications presented in Section~\ref{sec:RobotsTeams}. \end{enumerate} % All teams are required to perform each task according to the steps mentioned in the rules and procedures sub-subsections for the tasks. During the competition, all teams are required to repeat the task benchmarks several times. On the last day, only a selected number of top teams will be allowed to perform the task benchmarks again. The maximum time allowed for one task benchmark is 10 minutes. %-------------------------------------------------------------------- \item[Acquisition of Benchmarking Data] \label{sec:TbmAcquisitionOfData} In the following, some general notes on the acquisition of benchmarking data are given. They are valid for all task benchmarks, as well as for the functionality benchmarks. \begin{itemize} \item{\textbf{Calibration parameters}} Important! Calibration parameters for cameras must be saved. This must be done for other sensors (e.g., Kinect) that require calibration as well, if a calibration procedure has been applied instead of using the default values (e.g., those provided by OpenNI). \item{\textbf{Notes on data saving}} The specific data that the robot must save is described in the benchmark section.
In general some data streams (those with the highest bitrate) must be logged only in the time intervals when they are actually used by the robot to perform the activities required by the benchmark. In this way, system load and data bulk are minimized. For instance, whenever a benchmark includes object recognition activities, video and point cloud data must be logged by the robot only in the time intervals when it is actually performing object recognition. \item{\textbf{Use of data}} The logged data is not used during the competition. In particular, it is not used for scoring. The data is processed by \erl consortium members after the end of the competition. It is used for in-depth analysis and/or to produce datasets to be published for the benefit of the robotics community. \item{\textbf{Where and when to save data}} Robots must save the data as specified in the section ``Acquisition of Benchmarking Data'' of their respective TBM/FBM on a USB stick provided by \erlir staff. The USB stick is given to the team immediately before the start of the benchmark, and must be returned (with the required data on it) at the end of the benchmark. Each time a team's robot executes a benchmark, the team must: \begin{enumerate} \item Create, in the root directory of the USB stick, a new directory named \begin{itemize} \item NameOfTheTeam\_FBMx\_DD\_HH-MM (for FBM) or \item NameOfTheTeam\_TBMx\_DD\_HH-MM (for TBM) \end{itemize} \item Configure the robot to save the data files in these directories. \end{enumerate} \end{itemize} In the directory names above, $x$ denotes the number of the benchmark, $DD$ is the day of the month and $HH$, $MM$ represent the time of the day (hours and minutes). All files produced by the robot that are associated with the execution of the benchmark must be written in this directory. Please note that a new directory must be created for each benchmark executed by the robot. This holds true even when the benchmark is a new run of one that the robot already executed. During the execution of the benchmark, the following data will be collected\footnote{In the following, `offline' identifies data produced by the robot, and stored locally on the robot, that will be collected by the referees when the execution of the benchmark ends (e.g., as files on a USB stick), while `online' identifies data that the robot has to transmit to the CFH during the execution of the benchmark. Data marked neither with `offline' nor `online' is generated outside the robot.}. The expected ROS topics are listed in the table below. Corresponding data types can be stored in a YAML file (see Section \ref{sssec:YamlDataFileSpec}) or rosbag. The following is the list of \textbf{offline data} to be logged: \begin{table}[h] \centering \begin{footnotesize} \begin{tabular}{|l|l|l|l|} \hline Topic & Type & Frame Id & Notes \\ \hline\hline /rockin/robot\_pose\tablefootnote{The 2D robot pose at the floor level, i.e., $z=0$ and only yaw rotation.} & geometry\_msgs/PoseStamped & /map & 10 Hz \\ \hline /rockin/marker\_pose\tablefootnote{The 3D pose of the marker in 6 degrees of freedom.
} & geometry\_msgs\/PoseStamped & /map & 10 Hz \\ \hline /rockin/trajectory\tablefootnote{Trajectories planned by the robot including when replanning.} & nav\_msgs/Path & /map & Each (re)plan \\ \hline /rockin/<device>/image\tablefootnote{Image processed for object perception; <device> must be any of stereo\_left, stereo\_right, rgb; if multiple devices of type <device> are available on your robot, you can append "\_0", "\_1", and so on to the device name: e.g., "rgb\_0", "stereo\_left\_2", and so on.} & sensor\_msgs/Image & /<device>\_frame & -- \\ \hline /rockin/<device>/camera\_info\tablefootnote{Calibration info for /erlir/<device>/image.} & sensor\_msgs/CameraInfo & -- & --\\ \hline /rockin/depth\_<id>/pointcloud\tablefootnote{Point cloud processed for object perception; <id> is a counter starting from 0 to take into account the fact that multiple depth camera could be present on the robot: e.g., "depth\_0", "depth\_1", and so on.} & sensor\_msgs/PointCloud2 & /depth\_<id>\_frame & -- \\ \hline /rockin/scan\_<id>\tablefootnote{Laser scans, <id> is a counter starting from 0 to take into account the fact that multiple laser range finders could be present on the robot: e.g., "scan\_0", "scan\_1", and so on.} & sensor\_msgs/LaserScan & /laser\_<id>\_frame & 10-40Hz \\ \hline tf\tablefootnote{The tf topic on the robot; the tf tree needs to contain the frames described in this table properly connected through the /base\_frame which is the odometric center of the robot.} & tf & -- & -- \\ \hline \end{tabular} \end{footnotesize} \end{table} Some robots might not have some of the sensors or they might have multiple instances of the previous data (e.g., multiple rgb cameras or multiple laser scanner), in this case you append the number of the device to the topic and the frame (e.g., /erlir/scan\_0 in /laser\_frame\_0). It is possible not to log some of the data, if the task does not require it. The \textbf{online} data part can be found in the description of the respective benchmark. %-------------------------------------------------------------------- \item[Communication with CFH] \label{sec:CommCFH} The following steps describe the part of the CFH communication that is applicable for all TBMs. \begin{enumerate} \item The robot sends a \textbf{BeaconSignal} message at least every second. \item The robot waits for \textbf{BenchmarkState} messages. It starts the benchmark execution when the \emph{phase} field is equal to EXECUTION and the \emph{state} field is equal to RUNNING. \item The robot waits for an \textbf{Inventory} message from the CFH (which is continuously sent out by the CFH) in order to receive the initial distribution of objects and their locations in the environment. \item The robot waits for an \textbf{Order} message from the CFH (which is sent out continuously by the CFH) in order to receive the actual task, i.e., where the objects should be at the end. \item The task benchmark ends when all objects are at their final location as specified in the \textbf{Order} message. After that the robot sends a message of type \textbf{BenchmarkFeedback} to the CFH with the \emph{phase\_to\_terminate} field set to EXECUTION. The robot should do this until the \textbf{BenchmarkState}'s \emph{state} field has changed. \end{enumerate} The messages to be sent and to be received can be seen on the Github repository located at \cite{rockin:CFHMessages}. 
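As a purely illustrative summary of the steps above, the following Python-style sketch shows the client-side control flow during a task benchmark. It is not the reference implementation: the \texttt{cfh} connection object and its \texttt{send}/\texttt{receive} helpers are hypothetical placeholders standing in for a team's own communication layer, and only the message and field names (\texttt{BeaconSignal}, \texttt{BenchmarkState}, \texttt{Inventory}, \texttt{Order}, \texttt{BenchmarkFeedback}, \texttt{phase\_to\_terminate}) are taken from this rulebook; the authoritative message definitions are those in the repository cited above.
\begin{verbatim}
# Illustrative sketch only: "cfh" and "robot" are hypothetical placeholder
# objects; the real message definitions live in the repository cited above.
def run_task_benchmark(cfh, robot):
    cfh.send("BeaconSignal")                   # step 1: repeat at least every second
    while True:                                # step 2: wait for the execution phase
        state = cfh.receive("BenchmarkState")
        if state.phase == "EXECUTION" and state.state == "RUNNING":
            break
    inventory = cfh.receive("Inventory")       # step 3: initial objects and locations
    order = cfh.receive("Order")               # step 4: where the objects should end up
    robot.execute_task(inventory, order)       # move the objects to their final locations
    while cfh.receive("BenchmarkState").state == "RUNNING":
        cfh.send("BenchmarkFeedback", phase_to_terminate="EXECUTION")   # step 5
\end{verbatim}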
\item[Scoring and Ranking] Evaluation of the performance of a robot in a task benchmark is based on performance equivalence classes, which reflect whether or not the robot has achieved the required task. The criterion defining the performance equivalence class of a robot is based on the concept of the task's \emph{required achievements}, while the ranking of the robot within each equivalence class is obtained by looking at the performance criteria. In particular: % \begin{itemize} \item The performance of any robot belonging to performance class $N$ is considered as better than the performance of any robot belonging to performance class $M$ whenever $M<N$ \item Considering two robots belonging to the same class, a penalization criterion (penalties are defined according to task performance criteria) is used, and the performance of the one which received fewer penalizations is considered as better \item If the two robots received the same number of penalizations, the performance of the one which finished the task more quickly is considered as better (unless not being able to reach a given achievement within a given time is explicitly considered as a penalty). \end{itemize} % Performance equivalence classes and in-class ranking of the robots are determined according to three sets: % \begin{itemize} \item A set $A$ of \textbf{achievements}, i.e.,~things that should happen (what the robot is expected to do). \item A set $PB$ of \textbf{penalized behaviors}, i.e.,~robot behaviors that are penalized if they happen (e.g.,~hitting furniture). \item A set $DB$ of \textbf{disqualifying behaviors}, i.e.,~robot behaviors that absolutely must not happen (e.g.,~hitting people). \end{itemize} % Scoring is implemented with the following 3-step sorting algorithm: % \begin{enumerate} \item If one or more of the elements of set $DB$ occur during task execution, the robot gets disqualified (i.e.~assigned to the lowest possible performance class, called class $0$), and no further scoring procedures are performed. \item Performance equivalence class $X$ is assigned to the robot, where $X$ corresponds to the number of achievements in set $A$ that have been accomplished. \item Whenever an element of set $PB$ occurs, a penalization is assigned to the robot (without changing its performance class). \end{enumerate} % One key property of this scoring system is that a robot that executes the required task completely will always be placed into a higher performance class than a robot that executes the task partially. Moreover, the penalties do not make a robot change class (even in the case of an incomplete task). % \item[Penalized Behaviors and Disqualifying Behaviors] The penalized behaviors for all task benchmarks are: \begin{itemize} \item The robot collides with obstacles in the testbed. \item The robot drops an object. \item The robot stops working. \item The robot accidentally places an object on top of another object. \end{itemize} The disqualifying behaviors for all task benchmarks are: \begin{itemize} \item The robot damages or destroys the objects it is requested to manipulate. \item The robot damages the testbed. \end{itemize} The achievements for each task are unique and are described in the respective task benchmark section.
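To make the 3-step sorting algorithm above concrete, the following is a minimal illustrative sketch, in Python, of how the resulting ordering could be computed. It is not part of any official \erl scoring software, and the record fields (\texttt{disqualified}, \texttt{achievements}, \texttt{penalties}, \texttt{time\_taken}) are hypothetical names chosen purely for the example.
\begin{verbatim}
# Illustrative sketch only: rank benchmark results according to the 3-step
# scheme described above (equivalence class, then penalties, then time).
def rank_key(result):
    # Step 1: any disqualifying behavior places the robot in class 0.
    performance_class = 0 if result["disqualified"] else len(result["achievements"])
    # Step 2: within a class, fewer penalized behaviors is better.
    # Step 3: remaining ties are broken by the shorter execution time.
    return (-performance_class, result["penalties"], result["time_taken"])

results = [
    {"team": "A", "disqualified": False, "achievements": {"a1", "a2"},
     "penalties": 1, "time_taken": 420.0},
    {"team": "B", "disqualified": False, "achievements": {"a1", "a2"},
     "penalties": 0, "time_taken": 480.0},
]
ranking = sorted(results, key=rank_key)   # best-ranked result first
\end{verbatim}
Note that, exactly as required by the scheme above, a robot that completes the task only partially can never outrank one that completes it fully, since the class comparison takes precedence over penalties and time.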
\end{description} %\input{ssecErlirRulebookTBMAssemblyAidTray} %\input{ssecErlirRulebookTBMPlateDrilling} \input{ssecErlirRulebookTBMFillaBox} %\input{BTT} %-------------------------------------------------------------------- %-------------------------------------------------------------------- %-------------------------------------------------------------------- \clearpage \section{Functionality Benchmarks} \label{sec:FunctionalityBenchmarks} \paragraph{Communication with CFH} \label{ssec:CommCFH} Every functionality benchmark will be preceded by a safety check similar to that described for the task benchmark procedures. All teams are required to perform each functionality benchmark according to the steps mentioned in its respective section. During the competition, all teams are required to repeat the functionality benchmarks several times. On the last day, only a selected number of top teams will be allowed to perform them again. \input{ssecErlirRulebookFBMObjectPerception} \input{ssecErlirRulebookFBMManipulation} \input{ssecErlirRulebookFBMManipulationPlace} %\input{ssecErlirRulebookFBMControl} \input{ssecErlirRuleBookFBMExploration} %\input{secErlirRulebookOrganization} %-------------------------------------------------------------------- % EOF %--------------------------------------------------------------------
\documentclass[letter-paper]{tufte-book} %% % Book metadata \title{Geometry 1H} \author[]{Inusuke Shibemoto} %\publisher{Research Institute of Valinor} %% % If they're installed, use Bergamo and Chantilly from www.fontsite.com. % They're clones of Bembo and Gill Sans, respectively. \IfFileExists{bergamo.sty}{\usepackage[osf]{bergamo}}{}% Bembo \IfFileExists{chantill.sty}{\usepackage{chantill}}{}% Gill Sans %\usepackage{microtype} \usepackage{amssymb} \usepackage{amsmath} %% % For nicely typeset tabular material \usepackage{booktabs} %% overunder braces \usepackage{oubraces} %% \usepackage{xcolor} \usepackage{tcolorbox} \newtcolorbox[auto counter,number within=section]{derivbox}[2][]{colback=TealBlue!5!white,colframe=TealBlue,title=Box \thetcbcounter:\ #2,#1} \makeatletter \@openrightfalse \makeatother %% % For graphics / images \usepackage{graphicx} \setkeys{Gin}{width=\linewidth,totalheight=\textheight,keepaspectratio} \graphicspath{{figs/}} % The fancyvrb package lets us customize the formatting of verbatim % environments. We use a slightly smaller font. \usepackage{fancyvrb} \fvset{fontsize=\normalsize} \usepackage[plain]{fancyref} \newcommand*{\fancyrefboxlabelprefix}{box} \fancyrefaddcaptions{english}{% \providecommand*{\frefboxname}{Box}% \providecommand*{\Frefboxname}{Box}% } \frefformat{plain}{\fancyrefboxlabelprefix}{\frefboxname\fancyrefdefaultspacing#1} \Frefformat{plain}{\fancyrefboxlabelprefix}{\Frefboxname\fancyrefdefaultspacing#1} %% % Prints argument within hanging parentheses (i.e., parentheses that take % up no horizontal space). Useful in tabular environments. \newcommand{\hangp}[1]{\makebox[0pt][r]{(}#1\makebox[0pt][l]{)}} %% % Prints an asterisk that takes up no horizontal space. % Useful in tabular environments. \newcommand{\hangstar}{\makebox[0pt][l]{*}} %% % Prints a trailing space in a smart way. \usepackage{xspace} \usepackage{xstring} %% % Some shortcuts for Tufte's book titles. The lowercase commands will % produce the initials of the book title in italics. The all-caps commands % will print out the full title of the book in italics. \newcommand{\vdqi}{\textit{VDQI}\xspace} \newcommand{\ei}{\textit{EI}\xspace} \newcommand{\ve}{\textit{VE}\xspace} \newcommand{\be}{\textit{BE}\xspace} \newcommand{\VDQI}{\textit{The Visual Display of Quantitative Information}\xspace} \newcommand{\EI}{\textit{Envisioning Information}\xspace} \newcommand{\VE}{\textit{Visual Explanations}\xspace} \newcommand{\BE}{\textit{Beautiful Evidence}\xspace} \newcommand{\TL}{Tufte-\LaTeX\xspace} % Prints the month name (e.g., January) and the year (e.g., 2008) \newcommand{\monthyear}{% \ifcase\month\or January\or February\or March\or April\or May\or June\or July\or August\or September\or October\or November\or December\fi\space\number\year } \newcommand{\urlwhitespacereplace}[1]{\StrSubstitute{#1}{ }{_}[\wpLink]} \newcommand{\wikipedialink}[1]{http://en.wikipedia.org/wiki/#1}% needs \wpLink now \newcommand{\anonymouswikipedialink}[1]{\urlwhitespacereplace{#1}\href{\wikipedialink{\wpLink}}{Wikipedia}} \newcommand{\Wikiref}[1]{\urlwhitespacereplace{#1}\href{\wikipedialink{\wpLink}}{#1}} % Prints an epigraph and speaker in sans serif, all-caps type. 
\newcommand{\openepigraph}[2]{% %\sffamily\fontsize{14}{16}\selectfont \begin{fullwidth} \sffamily\large \begin{doublespace} \noindent\allcaps{#1}\\% epigraph \noindent\allcaps{#2}% author \end{doublespace} \end{fullwidth} } % Inserts a blank page \newcommand{\blankpage}{\newpage\hbox{}\thispagestyle{empty}\newpage} \usepackage{units} % Typesets the font size, leading, and measure in the form of 10/12x26 pc. \newcommand{\measure}[3]{#1/#2$\times$\unit[#3]{pc}} % Macros for typesetting the documentation \newcommand{\hlred}[1]{\textcolor{Maroon}{#1}}% prints in red \newcommand{\hangleft}[1]{\makebox[0pt][r]{#1}} \newcommand{\hairsp}{\hspace{1pt}}% hair space \newcommand{\hquad}{\hskip0.5em\relax}% half quad space \newcommand{\TODO}{\textcolor{red}{\bf TODO!}\xspace} \newcommand{\na}{\quad--}% used in tables for N/A cells \providecommand{\XeLaTeX}{X\lower.5ex\hbox{\kern-0.15em\reflectbox{E}}\kern-0.1em\LaTeX} \newcommand{\tXeLaTeX}{\XeLaTeX\index{XeLaTeX@\protect\XeLaTeX}} % \index{\texttt{\textbackslash xyz}@\hangleft{\texttt{\textbackslash}}\texttt{xyz}} \newcommand{\tuftebs}{\symbol{'134}}% a backslash in tt type in OT1/T1 \newcommand{\doccmdnoindex}[2][]{\texttt{\tuftebs#2}}% command name -- adds backslash automatically (and doesn't add cmd to the index) \newcommand{\doccmddef}[2][]{% \hlred{\texttt{\tuftebs#2}}\label{cmd:#2}% \ifthenelse{\isempty{#1}}% {% add the command to the index \index{#2 command@\protect\hangleft{\texttt{\tuftebs}}\texttt{#2}}% command name }% {% add the command and package to the index \index{#2 command@\protect\hangleft{\texttt{\tuftebs}}\texttt{#2} (\texttt{#1} package)}% command name \index{#1 package@\texttt{#1} package}\index{packages!#1@\texttt{#1}}% package name }% }% command name -- adds backslash automatically \newcommand{\doccmd}[2][]{% \texttt{\tuftebs#2}% \ifthenelse{\isempty{#1}}% {% add the command to the index \index{#2 command@\protect\hangleft{\texttt{\tuftebs}}\texttt{#2}}% command name }% {% add the command and package to the index \index{#2 command@\protect\hangleft{\texttt{\tuftebs}}\texttt{#2} (\texttt{#1} package)}% command name \index{#1 package@\texttt{#1} package}\index{packages!#1@\texttt{#1}}% package name }% }% command name -- adds backslash automatically \newcommand{\docopt}[1]{\ensuremath{\langle}\textrm{\textit{#1}}\ensuremath{\rangle}}% optional command argument \newcommand{\docarg}[1]{\textrm{\textit{#1}}}% (required) command argument \newenvironment{docspec}{\begin{quotation}\ttfamily\parskip0pt\parindent0pt\ignorespaces}{\end{quotation}}% command specification environment \newcommand{\docenv}[1]{\texttt{#1}\index{#1 environment@\texttt{#1} environment}\index{environments!#1@\texttt{#1}}}% environment name \newcommand{\docenvdef}[1]{\hlred{\texttt{#1}}\label{env:#1}\index{#1 environment@\texttt{#1} environment}\index{environments!#1@\texttt{#1}}}% environment name \newcommand{\docpkg}[1]{\texttt{#1}\index{#1 package@\texttt{#1} package}\index{packages!#1@\texttt{#1}}}% package name \newcommand{\doccls}[1]{\texttt{#1}}% document class name \newcommand{\docclsopt}[1]{\texttt{#1}\index{#1 class option@\texttt{#1} class option}\index{class options!#1@\texttt{#1}}}% document class option name \newcommand{\docclsoptdef}[1]{\hlred{\texttt{#1}}\label{clsopt:#1}\index{#1 class option@\texttt{#1} class option}\index{class options!#1@\texttt{#1}}}% document class option name defined \newcommand{\docmsg}[2]{\bigskip\begin{fullwidth}\noindent\ttfamily#1\end{fullwidth}\medskip\par\noindent#2} \newcommand{\docfilehook}[2]{\texttt{#1}\index{file 
hooks!#2}\index{#1@\texttt{#1}}} \newcommand{\doccounter}[1]{\texttt{#1}\index{#1 counter@\texttt{#1} counter}} \newcommand{\studyq}[1]{\marginnote{Q: #1}} \hypersetup{colorlinks}% uncomment this line if you prefer colored hyperlinks (e.g., for onscreen viewing) % Generates the index \usepackage{makeidx} \makeindex \setcounter{tocdepth}{3} \setcounter{secnumdepth}{3} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % custom commands \newtheorem{theorem}{\color{pastel-blue}Theorem}[section] \newtheorem{lemma}[theorem]{\color{pastel-blue}Lemma} \newtheorem{proposition}[theorem]{\color{pastel-blue}Proposition} \newtheorem{corollary}[theorem]{\color{pastel-blue}Corollary} \newenvironment{proof}[1][Proof]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \newenvironment{definition}[1][Definition]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \newenvironment{example}[1][Example]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \newenvironment{remark}[1][Remark]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}} \hyphenpenalty=5000 % more pastel ones \xdefinecolor{pastel-red}{rgb}{0.77,0.31,0.32} \xdefinecolor{pastel-green}{rgb}{0.33,0.66,0.41} \definecolor{pastel-blue}{rgb}{0.30,0.45,0.69} % crayola blue \definecolor{gray}{rgb}{0.2,0.2,0.2} % dark gray \xdefinecolor{orange}{rgb}{1,0.45,0} \xdefinecolor{green}{rgb}{0,0.35,0} \definecolor{blue}{rgb}{0.12,0.46,0.99} % crayola blue \definecolor{gray}{rgb}{0.2,0.2,0.2} % dark gray \xdefinecolor{cerulean}{rgb}{0.01,0.48,0.65} \xdefinecolor{ust-blue}{rgb}{0,0.20,0.47} \xdefinecolor{ust-mustard}{rgb}{0.67,0.52,0.13} %\newcommand\comment[1]{{\color{red}#1}} \newcommand{\dy}{\partial} \newcommand{\ddy}[2]{\frac{\dy#1}{\dy#2}} \newcommand{\ab}{\boldsymbol{a}} \newcommand{\bb}{\boldsymbol{b}} \newcommand{\cb}{\boldsymbol{c}} \newcommand{\db}{\boldsymbol{d}} \newcommand{\eb}{\boldsymbol{e}} \newcommand{\lb}{\boldsymbol{l}} \newcommand{\nb}{\boldsymbol{n}} \newcommand{\tb}{\boldsymbol{t}} \newcommand{\ub}{\boldsymbol{u}} \newcommand{\vb}{\boldsymbol{v}} \newcommand{\xb}{\boldsymbol{x}} \newcommand{\wb}{\boldsymbol{w}} \newcommand{\yb}{\boldsymbol{y}} \newcommand{\Xb}{\boldsymbol{X}} \newcommand{\ex}{\mathrm{e}} \newcommand{\zi}{{\rm i}} \newcommand\Real{\mbox{Re}} % cf plain TeX's \Re and Reynolds number \newcommand\Imag{\mbox{Im}} % cf plain TeX's \Im \newcommand{\zbar}{{\overline{z}}} \newcommand\Def[1]{\textbf{#1}} \newcommand{\qed}{\hfill$\blacksquare$} \newcommand{\qedwhite}{\hfill \ensuremath{\Box}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % some extra formatting (hacked from Patrick Farrell's notes) % https://courses.maths.ox.ac.uk/node/view_material/4915 % % chapter format \titleformat{\chapter}% {\huge\rmfamily\itshape\color{pastel-red}}% format applied to label+text {\llap{\colorbox{pastel-red}{\parbox{1.5cm}{\hfill\itshape\huge\color{white}\thechapter}}}}% label {1em}% horizontal separation between label and title body {}% before the title body []% after the title body % section format \titleformat{\section}% {\normalfont\Large\itshape\color{pastel-green}}% format applied to label+text {\llap{\colorbox{pastel-green}{\parbox{1.5cm}{\hfill\color{white}\thesection}}}}% label {1em}% horizontal separation between label and title body {}% before the title body []% after the title body % subsection format \titleformat{\subsection}% {\normalfont\large\itshape\color{pastel-blue}}% format applied to label+text 
{\llap{\colorbox{pastel-blue}{\parbox{1.5cm}{\hfill\color{white}\thesubsection}}}}% label {1em}% horizontal separation between label and title body {}% before the title body []% after the title body %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} % Front matter %\frontmatter % r.3 full title page %\maketitle % v.4 copyright page \chapter*{} \begin{fullwidth} \par \begin{center}{\Huge Geometry 1H}\end{center} \vspace*{5mm} \par \begin{center}{\Large typed up by B. S. H. Mithrandir}\end{center} \vspace*{5mm} \begin{itemize} \item \textit{Last compiled: \monthyear} \item Adapted from notes of R. Gregory, Durham \item This was part of the Durham Core A module given in the first year. This is standard Euclidean geometry involving some manipulations with matrices, as a precursor of linear algebra. \item The original course has geometry of complex numbers here, but for consistency reasons this has been moved to Complex Analysis 2H in this organisation of notes. \item[] \item \TODO I seem to have managed to do the notes with absolutely no diagrams, probably should be fixed \end{itemize} \par \par Licensed under the Apache License, Version 2.0 (the ``License''); you may not use this file except in compliance with the License. You may obtain a copy of the License at \url{http://www.apache.org/licenses/LICENSE-2.0}. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \smallcaps{``AS IS'' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND}, either express or implied. See the License for the specific language governing permissions and limitations under the License. \end{fullwidth} %=============================================================================== \chapter{Geometry in $\mathbb{R}^2$} Vectors in the 2-dimensional space $\mathbb{R}^{2}$ may be written as $\vb=(x,y)$, with $x$ and $y$ the co-ordinates of the vector, encoding how far along each axis it extends. Elementary vector operations are addition and multiplication by a scalar, given respectively by \begin{equation} \ub+\vb=(u_1 + v_1, u_2 + v_2),\qquad \lambda\vb=(\lambda v_1, \lambda v_2), \end{equation} with $\lambda\in\mathbb{R}$. These operations satisfy the axioms of a vector space. For $\ub,\vb,\wb\in\mathbb{R}^2$ and $\lambda,\mu\in\mathbb{R}$ \begin{itemize} \item \Def{Closure} \begin{enumerate} \item $(\ub+\vb)\in\mathbb{R}^2$. \item $\lambda\vb\in\mathbb{R}^2$. \end{enumerate} \item Addition \begin{enumerate} \item \Def{Associativity}: $(\ub+\vb)+\wb=\ub+(\vb+\wb)$. \item \Def{Identity}: There exists a $\mathbf{0}\in\mathbb{R}^2$ where $\vb+\mathbf{0}=\mathbf{0}+\vb=\vb$. \item \Def{Inverse}: There exists a $-\vb\in\mathbb{R}^2$ where $\vb+(-\vb)=\mathbf{0}$. \item \Def{Commutativity}: $\vb+\wb=\wb+\vb$. \end{enumerate} \item Multiplication \begin{enumerate} \item \Def{Associativity}: $(\lambda\mu)\vb=\lambda(\mu\vb)$. \item \Def{Zero}: $0\vb=\mathbf{0}$. \item \Def{One}: $1\vb=\vb$. \item \Def{Distributive}: $(\lambda+\mu)\vb=\lambda\vb+\mu\vb$. \end{enumerate} \end{itemize} It is convenient to write $(x,y)=x(1,0)+y(0,1)=x\eb_1 + y\eb_2$. Then the set $\{\eb_1,\eb_2\}$ forms a \Def{basis} of $\mathbb{R}^2$, known as the \Def{Cartesian basis}. %------------------------------------------------------------------------------- \section{Scalar product, lengths and angles} Given $\ub,\vb\in\mathbb{R}^2$, the \Def{scalar product} is defined as \begin{equation} \ub \cdot \vb=(u_1 v_1 + u_2 v_2)\in\mathbb{R}.
\end{equation} We note that $\vb\cdot\vb=v_1^2 + v_2^2\geq0$. The \Def{modulus} of the vector $\vb$ is defined as \begin{equation} |\vb|=\sqrt{v_1^2 + v_2^2}. \end{equation} The modulus coincides with the \Def{length} of the vector in this case, under the Euclidean norm (defined by the dot product here). \subsection{Polar co-ordinates} If we let $r=|\vb|$, then by considering the angle $\vb$ makes with respect to the base line (e.g., $x$-axis), we can write a vector $\vb$ in polar co-ordinates $(r,\theta)$, where $r\in\mathbb{R}^+$ and $\theta\in[0,2\pi)$, so that \begin{equation} v_1 = r\cos\theta,\qquad v_2 = r\sin\theta. \end{equation} (The polar representation of the zero vector is not well defined, since any choice of $\theta$ will do when $r=0$.) \begin{lemma} Suppose $\ub,\vb\in\mathbb{R}^2$ and $\theta\in[0,\pi]$ is the angle between the two vectors. Then $\ub\cdot\vb=|\ub||\vb|\cos\theta$. \end{lemma} \begin{proof} For $\ub=(|\ub|,\phi)$ and $\vb=(|\vb|,\psi)$ in polar co-ordinates, $\theta=|\phi-\psi|$. So \begin{align*} \ub \cdot \vb &=|\ub||\vb|(\cos\phi\cos\psi+\sin\phi\sin\psi)\\ &=|\ub||\vb|\cos(\phi-\psi)\\ &=|\ub||\vb|\cos\theta. \end{align*} \qed \end{proof} $\ub$ and $\vb$ are \Def{orthogonal} iff $\ub\cdot\vb=0$. (This is a more general way of saying two vectors are perpendicular, as we do not necessarily have to restrict ourselves to the Euclidean inner product). %------------------------------------------------------------------------------- \section{Simultaneous equations} Suppose we have the system of equations \begin{equation*} ax+by=e,\qquad cx+dy=f,\qquad a,\dots,f\in\mathbb{R}. \end{equation*} Then multiplying and eliminating accordingly gives \begin{equation*} x=\frac{de-bf}{ad-bc},\qquad y=\frac{af-ce}{ad-bc}, \end{equation*} so a unique solution exists if $(ad-bc)\neq0$. If $(ad-bc)=0$, then there can be two scenarios: \begin{enumerate} \item The solution is under-determined, e.g., $2x+6y=4$ and $3x+9y=6$: the two equations differ by a constant factor, so all solutions lie on the same line. \item There is no solution (the lines are parallel and do not intersect), e.g., $2x+6y=6$ and $3x+9y=4$. \end{enumerate} If we instead write the system of equations in terms of a matrix, for example, \begin{equation*} \begin{cases}2x+6y=6\\ 3x+9y=4\end{cases}\qquad\equiv\qquad \begin{pmatrix}2 & 6\\ 3 & 9\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} =\begin{pmatrix}6\\ 4\end{pmatrix}, \end{equation*} by using matrix multiplication rules and inverses of matrices, an alternative method can be used to solve simultaneous equations, not necessarily of two variables. In the general case in $\mathbb{R}^2$, \begin{equation*} \mathsf{A}\xb=\bb\rightarrow \xb=\mathsf{A}^{-1}\bb, \end{equation*} where \begin{equation} \mathsf{A}^{-1}=\frac{1}{ad-bc}\begin{pmatrix}d & -b\\ -c & a\end{pmatrix}, \qquad \mathsf{A}=\begin{pmatrix}a & b\\ c & d\end{pmatrix}. \end{equation} $\mathsf{A}^{-1}$ is the \Def{inverse} of $\mathsf{A}$, and $|\mathsf{A}|=(ad-bc)$ is known as the \Def{determinant} of a $2\times2$ matrix. The \Def{adjoint} $\mbox{adj}\mathsf{A}$ is the matrix appearing in the inverse before division by the determinant. The geometric interpretation is then that, if $|\mathsf{A}|\neq0$, the two lines intersect at a unique point; otherwise there is either no intersection, or the lines lie on top of each other. %------------------------------------------------------------------------------- \section{Lines on a plane} We know that $ax+by=c$ is the equation of a straight line. There are several ways to describe lines: \begin{itemize} \item \Def{Parametric form}.
If the line passes through the origin, we can write it as the collection of all scalar multiples of a direction vector $\vb$ along the line, i.e., $\xb=\lambda\vb$. More generally, if $\ab$ is any point on the line, then $\xb=\ab+\lambda\vb$. \item \Def{Normal vector}. Let $\nb$ be a vector orthogonal to $\vb$, so that $\nb\cdot\vb=0$. Thus \begin{equation} \nb\cdot\xb=\nb\cdot\ab+\lambda\nb\cdot\vb = \nb\cdot\ab. \end{equation} A line may therefore be written in terms of $\ab$ and $\nb$ as $\nb\cdot\xb=n_1 x + n_2 y = \nb\cdot\ab$. It is often convenient to take $|\nb|=1$, and to denote such a unit normal by $\hat{\nb}$. \end{itemize} %------------------------------------------------------------------------------- \section{Determinants and area} \begin{lemma} Let $\ub=(a,c)^{T}$ and $\vb=(b,d)^{T}$. Then $\ub$ and $\vb$ are parallel iff $|\mathsf{A}|=0$. \end{lemma} \begin{proof} If $\ub$ is parallel to $\vb$ then $\ub=\lambda\vb$. Then $a=\lambda b$ and $c=\lambda d$. Then $ad-bc=\lambda(bd-bd)=0$. If $|\mathsf{A}|=0$, then $ad=bc$, and so (for $b,d\neq0$) $a/b=c/d=\lambda$, and so we arrive at $\ub=\lambda\vb$. \qed \end{proof} Now consider the parallelogram formed from $\ub$ and $\vb$, with one vertex on the origin wlog. The claim is that the area of this parallelogram is equal to $|\mathsf{A}|$. The argument goes that the area is base multiplied by the height. Taking $|\ub|$ to be the base, the height is given by $|\vb|\sin\theta$, and so the area is $|\ub||\vb|\sin\theta$. On the other hand, $\ub\cdot\vb=ab+cd=|\ub||\vb|\cos\theta$, so squaring both sides gives \begin{equation*} |\ub|^2 |\vb|^2 \cos^2\theta = (ab+cd)^2, \end{equation*} and \begin{equation*} \mbox{Area}^2=|\ub|^2 |\vb|^2 \sin^2\theta = |\ub|^2 |\vb|^2 -(ab+cd)^2 =\cdots=(ad-bc)^2=|\mathsf{A}|^2, \end{equation*} from which the result follows. \begin{example} Take a parallelogram with vertices \begin{equation*} \ab=(1,3),\qquad\bb=(4,4),\qquad \cb=(5,6),\qquad\db=(2,5). \end{equation*} Then, noticing that $\cb-\db=\bb-\ab$ (so that this is indeed a parallelogram), we take the spanning vectors as $\bb-\ab=(3,1)$ and $\cb-\bb=(1,2)$, so \begin{equation*} |\mathsf{A}|=\left|\begin{matrix}3 & 1\\ 1 & 2\end{matrix}\right|=6-1=5. \end{equation*} \end{example} From the above, we deduce that the area of a triangle is half of the determinant of the matrix formed from its two spanning vectors (since the triangle is half of the corresponding parallelogram). \begin{example} For a triangle with vertices \begin{equation*} \ab=(-1,2),\qquad\bb=(1,1),\qquad\cb=(3,4), \end{equation*} we have $\bb-\ab=(2,-1)$ and $\cb-\ab=(4,2)$, so \begin{equation*} \mbox{Area}=\frac{1}{2}|\mathsf{A}| =\frac{1}{2}\left|\begin{matrix}2 & 4\\ -1 & 2\end{matrix}\right| =(4+4)/2=4. \end{equation*} \end{example} %------------------------------------------------------------------------------- \section{Curves in the plane} The most familiar curves are graphs of functions. Let $f(x)$ be a real-valued function of $x\in\mathbb{R}$, defined for $x\in(a,b)$. The graph $y=f(x)$ is a curve in $\mathbb{R}^2$. The curve is differentiable if we can find a tangent to the graph of $f$ for each $x\in(a,b)$. This is done by taking the limit of chords: the chord joining $(x,f(x))$ to $(x+\delta x,f(x+\delta x))$ has \begin{equation*} \Delta y = f(x+\delta x)-f(x),\qquad \Delta x = (x+\delta x) - x = \delta x, \end{equation*} so that the chord has gradient $[f(x+\delta x)-f(x)]/\delta x$.
By re-scaling and taking the limit as $\delta x\rightarrow0$, this gives the direction vector of the tangent line, or, as a vector, \begin{equation*} \tb=\lim_{\delta x\rightarrow0}\left(\begin{matrix} 1\\ [f(x+\delta x)-f(x)]/\delta x\end{matrix}\right)=\left(\begin{matrix} 1\\ f'(x)\end{matrix}\right). \end{equation*} For fixed $x_0$, the tangent line is \begin{equation} \begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}x_0 \\ f(x_0)\end{pmatrix} +t\begin{pmatrix}1\\ f'(x_0)\end{pmatrix}, \end{equation} with $t=x-x_0$, so that $y=f(x_0)+t f'(x_0)=f(x_0)+(x-x_0)f'(x_0)$. This is of course the first-order Taylor approximation of $y=f(x)$ at $x=x_0$.
\begin{example} Let $f(x)=x^3-6x^2 + 9x$. \begin{enumerate} \item Find the tangent of $y=f(x)$ at $x=0,1,2,3,4$. \item Use this to sketch $y=f(x)$ for $x\in[-1,5]$. \item Find the area below the curve and above the $x$-axis between the points where the curve meets the axis. \end{enumerate} We have $f'(x)=3x^2-12x+9$. So
\begin{tabular}{|c|c|c|c|c|c|} \hline $x$ & $0$ & $1$ & $2$ & $3$ & $4$\\ \hline $y=f(x)$ & $0$ & $4$ & $2$ & $0$ & $4$\\ \hline $f'(x)$ & $9$ & $0$ & $-3$ & $0$ & $9$\\ \hline tangent & $y=9x$ & $y=4$ & $y=-3x+8$ & $y=0$ & $y=9x-32$\\ \hline \end{tabular}
A sketch of the graph shows a single crossing at $x=0$ and a double crossing at $x=3$ (this can be shown by factorising the cubic as $x(x-3)^2$). So the area under the curve is \begin{equation*} \int_0^3 (x^3-6x^2+9x)\ \mathrm{d}x=27/4. \end{equation*} \end{example}
%-------------------------------------------------------------------------------
\section{Parametric curve}
These can deal with places where the gradient becomes infinite, or where there is more than one value of $y$ for a particular $x$. For example, $(x,y)=(r\cos\theta,r\sin\theta)$ describes a circle centred at the origin, since $x^2 + y^2 = r^2$. A \Def{parametric curve} in $\mathbb{R}^2$ is a differentiable map \begin{equation*} \alpha:(a,b)\rightarrow\mathbb{R}^2,\qquad \alpha(t)=(x(t),y(t)),\qquad t\in(a,b). \end{equation*} The curve is differentiable if, for all $t\in(a,b)$, there exists $\alpha'(t)=(x'(t),y'(t))$. Geometrically, $\alpha'(t)$ is the vector tangent to the curve at $t$. This definition captures the essence of a vector and generalises even to curved spaces. This may be seen via the definition of chords, taking $\delta t\rightarrow0$ for each component.
\begin{example} For a circle, $\alpha(t)=(r\cos t,r\sin t)$, and $\alpha'(t)=(-r\sin t,r\cos t)$. More generally, the circle of radius $r$ centred at $\vb$ is parametrised as $\alpha(t)=(r\cos t + v_1, r\sin t + v_2)$. For an ellipse centred at the origin, the parametrisation is $\alpha(t)=(a\cos t, b\sin t)$, and $\alpha'(t)=(-a\sin t, b\cos t)$. In Cartesian co-ordinates, this yields \begin{equation*} \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,\qquad a,b>0. \end{equation*} The general parametrised form (for an ellipse centred at $\vb$) is similar to the circle case. \end{example}
A parametrised curve is \Def{regular} if $\alpha'(t)$ exists and is non-zero for all $t\in(a,b)$. This means that the point is always moving along the curve, which allows us to measure distances along the curve. The \Def{arc length} of a regular smooth parametrised curve, measured from a reference point $t_0$, is given by \begin{equation} s(t)=\int_{t_0}^t |\alpha'(t)|\ \mathrm{d}t = \int_{t_0}^{t} \sqrt{[x'(t)]^2+[y'(t)]^2}\ \mathrm{d}t.
\end{equation} This is because, for an arc length element, \begin{equation*} \delta s=\sqrt{\left(\frac{\delta x}{\delta t}\right)^2 + \left(\frac{\delta y}{\delta t}\right)^2}\ \delta t,\qquad\Rightarrow\qquad \sum\delta s=\int_a^b\sqrt{\left(\frac{\mathrm{d} x}{\mathrm{d} t}\right)^2 +\left(\frac{\mathrm{d} y}{\mathrm{d} t}\right)^2}\ \mathrm{d}t. \end{equation*} The curve is said to be \Def{parametrised by arc length} if $|\alpha'(t)|=1$. This is equivalent to saying $\delta s\approx\delta t$.
\begin{lemma} Let $\alpha(t)$ be a regular curve parametrised by arc length. Then $\alpha''(t)=(x''(t), y''(t))$ is normal to the curve, i.e., orthogonal to the tangent $\alpha'(t)$. \end{lemma}
\begin{proof} We need to show that $\alpha'(t)\cdot\alpha''(t)=0$. For $t$ the arc length, $|\alpha'(t)|^2=\alpha'(t)\cdot\alpha'(t)=1$. So $(\alpha'\cdot\alpha')'=2\alpha'\cdot\alpha''=0$, as required. \qed \end{proof}
\begin{example} For a circle with $\alpha(t)=(r\cos t, r\sin t)$, $|\alpha'(t)|=r$, so \begin{equation*} s(t)=\int_{t_0}^t r\ \mathrm{d}t = r(t-t_0), \end{equation*} and the total length is $2\pi r$ since $t\in[0,2\pi)$. To parametrise the curve by arc length, we take $t=s/r$, so $\alpha(s)=(r\cos (s/r), r\sin(s/r))$, $\alpha'(s)=(-\sin (s/r), \cos(s/r))$, and $\alpha''(s)=(-1/r)(\cos(s/r), \sin(s/r))=-(1/r^2)\alpha(s)$. It is easy to see that $\alpha'(s)\cdot\alpha''(s)=0$. Geometrically, $\alpha$ points away from the circle centre, $\alpha'(s)$ is tangent to the circle, and $\alpha''(s)$ points towards the circle centre. From circular motion, $\alpha=\xb$ is the position vector, $\alpha'=\vb$ the velocity vector, and $\alpha''=\ab$ the acceleration vector. \end{example}
\begin{example} The \Def{cardioid} is given by the parametrisation \begin{equation*} \alpha(t)=2a(1-\cos t)(\cos t, \sin t), \end{equation*} and looks like a heart shape lying on its side. At $t=0$, there is a cusp, which will need separate consideration since it is an end point for the cardioid, and the curve is not necessarily smooth there. We have \begin{equation*} \alpha'(t)=2a(-\sin t+2\cos t\sin t, \cos t-\cos^2 t + \sin^2 t), \end{equation*} and, after some algebra (do it yourself), $|\alpha'(t)|=4a\sin (t/2)$ after using some double angle formulas. The arc length is then \begin{equation*} s(t)=\int_0^t 4a\sin(t/2)\ \mathrm{d}t=8a(1-\cos(t/2)). \end{equation*} The total length of a cardioid is then $s(2\pi)=16a$. \end{example}
%-------------------------------------------------------------------------------
\section{Central conics}
Consider a curve $C\subset\mathbb{R}^2$, given by the constraint \begin{equation*} ax^2+2bxy+cy^2 = \begin{pmatrix}x & y\end{pmatrix} \begin{pmatrix}a & b\\ b & c\end{pmatrix} \begin{pmatrix}x\\ y\end{pmatrix}=1. \end{equation*} Writing this in polar co-ordinates gives \begin{equation*} r^2 f(\theta)=1,\qquad f(\theta)=a\cos^2\theta + 2b\cos\theta\sin\theta + c\sin^2\theta. \end{equation*} Since the origin does not lie on the curve, $r>0$, so $r=1/\sqrt{f(\theta)}$. The curve then only exists for $\theta$ with $f(\theta)>0$. Now, \begin{equation*} \frac{f(\theta)}{\sin^2 \theta}=a\cot^2 \theta + 2b\cot\theta + c. \end{equation*} If $f(\theta)=0$, $\cot\theta=(-b/a)\pm(\sqrt{b^2-ac}/a)$, and real solutions exist only if $b^2-ac\geq0$. Now, $b^2-ac=-|\mathsf{A}|$, so if $|\mathsf{A}|>0$, $f(\theta)$ never vanishes and hence has a fixed sign. If $ac-b^2>0$, then: \begin{itemize} \item $a,c>0\qquad\Rightarrow\qquad f(\theta)>0$. \item $a,c<0\qquad\Rightarrow\qquad f(\theta)<0$.
\end{itemize} If instead $|\mathsf{A}|=0$, then $ac=b^2$, so multiplying the equation through by $c$, it may be shown that $(bx+cy)^2=c$, thus $bx+cy=\pm\sqrt{c}$ (assuming $c>0$), i.e., two parallel straight lines with gradient $-b/c=-a/b$. In summary: \begin{enumerate} \item If $|\mathsf{A}|>0$ and $a,c>0$, we have an ellipse (a circle when $b=0$ and $a=c$), and there are solutions for all $\theta$. \item If $|\mathsf{A}|>0$ and $a,c<0$, there are no solutions since $f(\theta)<0$ and $r=1/\sqrt{f(\theta)}$. \item If $|\mathsf{A}|=0$, the curve is a pair of parallel straight lines. \item If $|\mathsf{A}|<0$, we have a hyperbola. There are solutions for $\theta$ in two open intervals, each of length less than $\pi$. \end{enumerate}
\begin{example} For $a=c=1$, $b=0$, $|\mathsf{A}|=1$, and this describes the unit circle centred at the origin. There is a solution for all $\theta$. For $a=c=1$, $b=-1/2$, $|\mathsf{A}|=3/4$, we have the ellipse $x^2 + y^2 -xy = 1$, which may be factorised into \begin{equation*} \frac{3}{4}(x-y)^2 + \frac{1}{4}(x+y)^2 = 1. \end{equation*} There is a solution for all $\theta$. For $a=b=c=1$, $|\mathsf{A}|=0$, and $x+y=\pm1$, two parallel straight lines. For $a=1$, $c=-1$, $b=0$, $|\mathsf{A}|=-1$, we have $x^2 - y^2 = 1$, which describes a hyperbola for $\theta\in[0,\pi/4)\cup(7\pi/4,2\pi)$ and $\theta\in(3\pi/4,5\pi/4)$. \end{example}
It is often useful to find the points of $C$ closest/furthest to/from the origin. These are given by the turning points of $C$, i.e., $f'(\theta)=(c-a)\sin2\theta+2b\cos2\theta=0$. An observation we first make is that, if $f'(\theta_0)=0$, then $f'(\theta_0+\pi/2)=0$ also (via straightforward substitution). Then $f(\theta_0)$ and $f(\theta_0 + \pi/2)$ are the extremum values of $f(\theta)$. Thus turning points occur along orthogonal lines. The lines $\theta=\theta_0$ and $\theta_0+\pi/2$ are the \Def{principal axes} of the conic $C$.
\begin{example} For which values of $\theta$ do there exist \begin{equation*} 1=\begin{pmatrix}x & y\end{pmatrix} \begin{pmatrix}-1 & 2 \\ 2 & -1\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix}, \end{equation*} and what is the smallest value of $r=\sqrt{x^2+y^2}(=1/\sqrt{f(\theta)})$ for which solutions exist? $|\mathsf{A}|=-3$, so the solution forms a hyperbola. $f(\theta)=-\cos^2\theta+4\cos\theta\sin\theta-\sin^2\theta =4\cos\theta\sin\theta-1=2\sin2\theta-1$, and solutions exist for $f(\theta)>0$. Thus we require $\sin2\theta>1/2$, and so $\theta\in(\pi/12,5\pi/12)$ and $\theta\in(13\pi/12,17\pi/12)$. Now, $f'(\theta)=4(\cos^2\theta-\sin^2\theta)$, so $f'=0$ if $|\cos\theta|=|\sin\theta|$. This occurs when $\cos\theta=\pm1/\sqrt{2}$, and so $\theta=\pi/4,3\pi/4,5\pi/4,7\pi/4$. Taking into account the domain of relevance, $\theta=\pi/4$ and $\theta=5\pi/4$ are the relevant turning points. It is easy to see the extremum in this case is at $\pi/4$ (and $5\pi/4$), with $r_{\min}=1$; since this is a hyperbola, there is no maximum. \end{example}
Suppose now that $\theta_0$ is an extremum point of $f$; going back to the relation for $f(\theta_0)$, we notice that \begin{align*} f(\theta_0)\cos\theta_0 &= a\cos^3\theta_0 + 2b\cos^2\theta_0 \sin\theta_0 +c\sin^2\theta_0 \cos\theta_0\\ &= a\cos^3\theta_0 + 2b\cos^2\theta_0\sin\theta_0 +\sin\theta_0[a\cos\theta_0 \sin\theta_0 +b(\sin^2\theta_0 -\cos^2\theta_0)] \\ &=a\cos\theta_0 + b\sin\theta_0 \end{align*} upon substituting the value of $c\sin\theta_0\cos\theta_0$ obtained from $f'(\theta_0)=0$.
Similarly, we have \begin{equation*} f(\theta_0)\sin\theta_0 = b\cos\theta_0 + c\sin\theta_0, \end{equation*} so \begin{equation*} f(\theta_0)\begin{pmatrix}\cos\theta_0 \\ \sin\theta_0\end{pmatrix} =\begin{pmatrix}a & b\\ b & c\end{pmatrix} \begin{pmatrix}\cos\theta_0 \\ \sin\theta_0\end{pmatrix}. \end{equation*} For $\hat{\mathbf{r}}=(\cos\theta_0, \sin\theta_0)^T$ (this is a unit vector), we have $\mathsf{A}\hat{\mathbf{r}}=f(\theta_0)\hat{\mathbf{r}}$, and so $\hat{\mathbf{r}}$ is an \Def{eigenvector} with associated \Def{eigenvalue} $f(\theta_0)$. As we will see, matrices are associated with linear mappings, and the eigen-equation says that the action of the map on such a vector is to leave its direction unchanged, scaling it by a factor $\lambda$.
\begin{example} The matrix \begin{equation*} \mathsf{A}=\begin{pmatrix}-1 & 0\\ 0 & 1\end{pmatrix} \end{equation*} represents a reflection in the $y$-axis. In this case, the eigenvalues and eigenvectors are \begin{equation*} \eb_1=\begin{pmatrix}1\\0\end{pmatrix},\qquad \lambda_1=-1,\qquad \eb_2=\begin{pmatrix}0\\1\end{pmatrix},\qquad \lambda_2=1. \end{equation*} \end{example}
Going back to conics, $\theta_0$ and $\theta_0+\pi/2$ give the extremum values of $r$, which also correspond to the directions of the eigenvectors. The eigenvalues give the distance to the origin via $r=1/\sqrt{f(\theta)}$, with \begin{equation*} f(\theta_0)\begin{pmatrix}\cos\theta_0 \\ \sin\theta_0\end{pmatrix} =\mathsf{A}\begin{pmatrix}\cos\theta_0 \\ \sin\theta_0\end{pmatrix},\quad f(\theta_0+\pi/2) \begin{pmatrix}\cos(\theta_0+\pi/2) \\ \sin(\theta_0+\pi/2)\end{pmatrix} =\mathsf{A} \begin{pmatrix}\cos(\theta_0+\pi/2) \\ \sin(\theta_0+\pi/2)\end{pmatrix}. \end{equation*} For general $\theta$, we observe that \begin{equation*} \begin{pmatrix}\cos\theta \\ \sin\theta\end{pmatrix}= \begin{pmatrix}\cos(\theta-\theta_0+\theta_0) \\ \sin(\theta-\theta_0+\theta_0)\end{pmatrix}= \begin{pmatrix}\cos(\theta-\theta_0)\cos\theta_0 -\sin\theta_0 \sin(\theta-\theta_0) \\ \sin(\theta-\theta_0)\cos\theta_0 + \sin\theta_0 \cos(\theta-\theta_0) \end{pmatrix}, \end{equation*} so then \begin{align*} \mathsf{A}\begin{pmatrix}\cos\theta \\ \sin\theta\end{pmatrix} &=\cos(\theta-\theta_0)\mathsf{A} \begin{pmatrix}\cos\theta_0 \\ \sin\theta_0\end{pmatrix} +\sin(\theta-\theta_0)\mathsf{A} \begin{pmatrix}-\sin\theta_0 \\ \cos\theta_0\end{pmatrix}\\ &=\cos(\theta-\theta_0)f(\theta_0) \begin{pmatrix}\cos\theta_0 \\ \sin\theta_0\end{pmatrix} +\sin(\theta-\theta_0)f(\theta_0+\pi/2) \begin{pmatrix}-\sin\theta_0 \\ \cos\theta_0\end{pmatrix}. \end{align*} Now, we recall that we have \begin{equation*} 1=r^2(\cos\theta,\sin\theta)\mathsf{A} \begin{pmatrix}\cos\theta \\ \sin\theta\end{pmatrix}. \end{equation*} Using the above identity, the RHS is \begin{equation*}\begin{aligned} \mbox{RHS}=&r^2 f(\theta_0)\cos(\theta-\theta_0) [\cos\theta\cos\theta_0 + \sin\theta\sin\theta_0]\\ &\qquad+ r^2 f(\theta_0+\pi/2)\sin(\theta-\theta_0) [-\cos\theta\sin\theta_0 + \sin\theta\cos\theta_0]. \end{aligned}\end{equation*} Using the compound angle formulae, we arrive at \begin{equation*} 1=r^2 f(\theta_0)\cos^2(\theta-\theta_0) +r^2 f(\theta_0+\pi/2)\sin^2(\theta-\theta_0). \end{equation*} So we see that we have (i) an ellipse if both eigenvalues are positive, (ii) a hyperbola if exactly one of the eigenvalues is positive, (iii) no solution if both eigenvalues are negative.
\begin{example} Characterise the conic $1=5x^2 + 2\sqrt{3} xy + 3y^2$, and find its largest and smallest distance from the origin.
Now, \begin{equation*} \mathsf{A}=\begin{pmatrix}5 & \sqrt{3}\\ \sqrt{3} & 3\end{pmatrix}, \end{equation*} with $|\mathsf{A}|=12$ and $a,c>0$; therefore solutions exist for all $\theta$, so we have an ellipse (alternatively, we could find the eigenvalues). We also have \begin{equation*} f(\theta)=5\cos^2\theta+2\sqrt{3}\cos\theta\sin\theta+3\sin^2\theta, \qquad f'(\theta)=2\sqrt{3}\cos2\theta-2\sin2\theta, \end{equation*} so extremum values occur where $\tan2\theta=\sqrt{3}$, and so the principal axes occur at $\theta=(\pi/6,2\pi/3,7\pi/6,5\pi/3)$. It is seen that $r_{\min}=1/\sqrt{f(\pi/6)}=1/\sqrt{6}$ and $r_{\max}=1/\sqrt{f(2\pi/3)}=1/\sqrt{2}$ (and similarly with the values another $\pi$ radians along). \end{example}
%===============================================================================
\chapter{Geometry of $\mathbb{R}^3$}
%-------------------------------------------------------------------------------
\section{Vectors and their products}
We assume vectors $\vb\in\mathbb{R}^3$ satisfy the same axioms of addition and multiplication as for $\mathbb{R}^2$. Taking the usual basis, we take the dot product to be \begin{equation} \ub\cdot\vb=u_1 v_1 + u_2 v_2 + u_3 v_3, \end{equation} so that \begin{equation} |\vb|=\sqrt{\vb\cdot\vb}=\sqrt{v_1^2 + v_2^2 + v_3^2},\qquad \ub\cdot\vb=|\ub||\vb|\cos\theta. \end{equation} Again, $\ub$ and $\vb$ are orthogonal if $\ub\cdot\vb=0$. In $\mathbb{R}^3$, we define the \Def{vector (cross) product} as \begin{equation} \ub\times\vb=\begin{pmatrix}u_2 v_3 -u_3 v_2\\ u_3 v_1 - u_1 v_3\\ u_1 v_2 - u_2 v_1\end{pmatrix}. \end{equation} It may be seen that $\eb_1\times\eb_2=\eb_3$, and the sign is kept if the indices are permuted in a cyclic fashion. We also notice that $\vb\times\ub=-\ub\times\vb$. In general, $\ub\times\vb$ generates a vector orthogonal to both of the vectors, such that $\ub\times\vb$ points according to the right-hand screw convention.
\begin{lemma} For non-trivial $\ub,\vb\in\mathbb{R}^3$, let $\theta\in[0,\pi]$ be the angle between them. Then $|\ub\times\vb|=|\ub||\vb|\sin\theta$. \end{lemma}
\begin{proof} It may be shown that $|\ub\times\vb|^2= \cdots=|\ub|^2|\vb|^2-(\ub\cdot\vb)^2$. Then, using the identity for $\ub\cdot\vb$, we obtain the result. \qed \end{proof}
%-------------------------------------------------------------------------------
\section{Simultaneous equations in $\mathbb{R}^3$}
Suppose we want to solve the simultaneous system of equations \begin{equation*} \begin{cases}ax+by+cz&=l,\\ dx+ey+fz&=m,\\ gx+hy+jz&=n.\end{cases} \end{equation*} A similar approach using matrices results in \begin{equation*} \mathsf{A}=\begin{pmatrix}a & b & c\\ d & e & f\\ g & h & j \end{pmatrix}, \qquad \mathsf{A}\xb=\bb. \end{equation*} Supposing the inverse $\mathsf{A}^{-1}$ exists, then we may solve the system uniquely; the existence of a unique solution again depends on the determinant of the matrix. By brute force or otherwise, \begin{equation} |\mathsf{A}|=\left|\begin{matrix}a & b & c\\ d & e & f \\ g & h & j \end{matrix}\right|= a\left|\begin{matrix}e & f \\ h & j\end{matrix}\right| -b\left|\begin{matrix} d & f \\ g & j \end{matrix}\right| +c\left|\begin{matrix} d & e \\ g & h \end{matrix}\right|. \end{equation} (This is the expansion of the determinant along the first row; it may be done by expanding along any column or row, although one needs to take into account extra minus signs in entries where $i+j$ is odd, with $\mathsf{A}=(A_{ij})$, $A_{ij}$ the entry at the $i$-th row and $j$-th column.)
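As a quick check of the expansion rule (the matrix below is chosen purely for illustration), we can expand along the first row and down the first column and confirm that both give the same value.
\begin{example}
\begin{equation*}
\left|\begin{matrix}1 & 2 & 0\\ 3 & 1 & 2\\ 0 & 1 & 1\end{matrix}\right|
=1\left|\begin{matrix}1 & 2\\ 1 & 1\end{matrix}\right|
-2\left|\begin{matrix}3 & 2\\ 0 & 1\end{matrix}\right|
+0\left|\begin{matrix}3 & 1\\ 0 & 1\end{matrix}\right|
=1(1-2)-2(3-0)+0=-7.
\end{equation*}
Expanding down the first column instead gives $1(1-2)-3(2-0)+0=-7$, as expected.
\end{example}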
With the determinant, $\mathsf{A}^{-1}$ may be found by the following steps: \begin{enumerate} \item \Def{Matrix of minors}. We find the determinant of each $2\times2$ matrix obtained when the row and column containing that element are covered, i.e., \begin{equation*} \begin{pmatrix}ej-fh & dj-fg & dh-eg\\ bj-ch & aj-cg & ah-bg\\ bf-ce & af-cd & ae-bd \end{pmatrix}. \end{equation*} \item \Def{Matrix of co-factors}. Change the sign of the entries where $i+j$ is odd, where $i$ and $j$ are the row and column number (starting the count from one), i.e., \begin{align*} &\begin{pmatrix}ej-fh & -(dj-fg) & dh-eg\\ -(bj-ch) & aj-cg & -(ah-bg)\\ bf-ce & -(af-cd) & ae-bd \end{pmatrix}\\ &= \begin{pmatrix}ej-fh & fg-dj & dh-eg\\ ch-bj & aj-cg & bg-ah\\ bf-ce & cd-af & ae-bd \end{pmatrix}. \end{align*} \item \Def{Adjoint}. Take the transpose of the matrix of co-factors, i.e., \begin{equation*} \mbox{adj}(\mathsf{A})=\begin{pmatrix}ej-fh & ch-bj & bf-ce \\ fg-dj & aj-cg & cd-af\\ dh-eg & bg-ah & ae-bd \end{pmatrix}. \end{equation*} (Think of the transpose as reflecting everything about the main diagonal.) \item \Def{Inverse}. Divide the adjoint by the determinant, i.e., $\mathsf{A}^{-1}=\mbox{adj}(\mathsf{A})/|\mathsf{A}|$. \end{enumerate}
%-------------------------------------------------------------------------------
\section{Planes in $\mathbb{R}^3$}
Consider the constraint $ax+by+cz=l$, which describes a plane in $\mathbb{R}^3$. This is the case because, assuming $c\neq0$ wlog, we have \begin{equation*} z=\frac{l-ax-by}{c},\qquad\Rightarrow\qquad \frac{\partial z}{\partial x}=-\frac{a}{c},\qquad\frac{\partial z}{\partial y}=-\frac{b}{c}, \end{equation*} and there exists a unique $z$ for every $x$ and $y$. The derivatives are constant, which implies that $z(x,y)$ traces out a two-dimensional plane in $\mathbb{R}^3$. An alternative description of the plane can be given in terms of vectors, again as $\nb\cdot\boldsymbol{r}=\ab\cdot\nb$, where $\nb$ is a normal vector to the plane, $\boldsymbol{r}=(x,y,z)$, and $\ab$ is a position vector in the plane. As a trivial example, the $(x,y)$ plane is described by the equation with $a,b,l=0$ and $c=1$. We have $\nb=\eb_3$, $\ab=\boldsymbol{0}$.
\begin{example} For $3x+4y-2z=12$, we have $\nb=(3,4,-2)$, and one possibility for $\ab$ is $(4,0,0)$. \end{example}
The parametric description is as in $\mathbb{R}^2$, except now we require two direction vectors. The equation of the plane is given as $\boldsymbol{r}=\ab+\lambda\db_1 + \mu\db_2$. Eliminating $\lambda$ and $\mu$, it may be shown that $\boldsymbol{r}\cdot(\db_1\times\db_2) =\ab\cdot(\db_1\times\db_2)$, i.e., $\db_1\times\db_2$ serves as a normal vector to describe the plane.
\begin{example} Find the equation of the plane that includes the points \begin{equation*} \ab=\begin{pmatrix}1\\1\\1\end{pmatrix},\qquad \bb=\begin{pmatrix}1\\-1\\2\end{pmatrix},\qquad \cb=\begin{pmatrix}3\\1\\-1\end{pmatrix}. \end{equation*} We have \begin{equation*} \db_1=\bb-\ab=\begin{pmatrix}0\\-2\\1\end{pmatrix},\qquad \db_2=\cb-\ab=\begin{pmatrix}2\\0\\-2\end{pmatrix}, \end{equation*} so that \begin{equation*} \nb=\db_1\times\db_2=\begin{pmatrix}4\\2\\4\end{pmatrix}, \end{equation*} with $\nb\cdot\ab=10$. So one possibility is $4x+2y+4z=10$, or equivalently $2x+y+2z=5$. \end{example}
%-------------------------------------------------------------------------------
\section{Lines in $\mathbb{R}^3$}
A line is given by a point and a direction vector as $\boldsymbol{r}=\ab+\lambda\db$.
It is then given by two constraints: \begin{equation*} \frac{x-a_1}{d_1}=\frac{y-a_2}{d_2}=\frac{z-a_3}{d_3}=\lambda. \end{equation*} A line in $\mathbb{R}^3$ has a two-dimensional space of normal directions, so we may choose two independent normal vectors. For example, both \begin{equation*} \nb=(d_2, -d_1, 0),\qquad\textnormal{and}\qquad \nb\times\db=(-d_1 d_3, -d_2 d_3, d_1^2 + d_2^2) \end{equation*} may serve as normal vectors. Two planes in $\mathbb{R}^3$ are either parallel or intersect in a line. Suppose the two normal vectors are $\nb_1$ and $\nb_2$; then if they are parallel, $\nb_1=\lambda \nb_2$, otherwise the planes intersect in a line $\boldsymbol{r}=\ab+\lambda\db$, with $\db=\nb_1\times\nb_2$. To find a point on the line, we may take one of the variables to be zero, and solve the remaining linear system.
\begin{example} Find the intersection of the planes $x-2y+z=3$ and $y+2z=1$. For $\nb_1=(1,-2,1)$ and $\nb_2=(0,1,2)$, $\db=(-5,-2,1)$. Taking $z=0$, $y=1$ from the second equation, so $x=5$, thus $\boldsymbol{r}=(5,1,0)+\lambda(-5,-2,1)$, or, in equation form, \begin{equation*} \frac{x-5}{-5}=\frac{y-1}{-2}=z. \end{equation*} \end{example}
\begin{example} Find the parametric form of the line $(x-2)/3=(y-4)/-1=(z+5)/-2$. $\boldsymbol{r}=(2,4,-5)+\lambda(3,-1,-2)$. \end{example}
If $\boldsymbol{r}=(a,b,c)+\lambda(d,e,0)$, then the constraints are \begin{equation*} \frac{x-a}{d}=\frac{y-b}{e}\qquad\textnormal{and}\qquad z=c. \end{equation*} Two lines in $\mathbb{R}^3$ do not necessarily intersect even if they are non-parallel. We now try to find the shortest distance between two lines that do not intersect (if they do, this is of course zero). Suppose $\boldsymbol{r}_1=\ab_1+\lambda\db_1$ and $\boldsymbol{r}_2=\ab_2+\mu\db_2$. Then we may have: \begin{itemize} \item The two lines are \Def{parallel}, with $\db_1=k\db_2$. Then the distance is $D=|\ab_1-\ab_2|\sin\theta$, where $\theta$ is the angle between $\ab_1-\ab_2$ and $\db_1$, since $\ab_1$ and $\ab_2$ are just position vectors on the lines. Using the cross product corollary, we have \begin{equation*} D=\frac{|(\ab_1-\ab_2)\times\db_1|}{|\db_1|}. \end{equation*} \item If the two lines are \Def{skew}, then $\nb=\db_1\times\db_2\neq\boldsymbol{0}$ is a normal to both lines. The minimum distance between two points on the lines corresponds to the difference vector between the pair being parallel to $\nb$, i.e., \begin{equation*} D\frac{\nb}{|\nb|}=\pm(\ab_1+\lambda\db_1-\ab_2-\mu\db_2). \end{equation*} Taking a dot product of the above with $\nb/|\nb|$ and noting that $\nb\cdot\db_1=\nb\cdot\db_2=0$, we have \begin{equation*} D=\left|(\ab_1-\ab_2)\cdot\frac{\nb}{|\nb|}\right|= \frac{|(\ab_1-\ab_2)\cdot(\db_1\times\db_2)|}{|\db_1 \times \db_2|}. \end{equation*} \end{itemize}
\begin{example} Find the distance between the lines described by \begin{equation*} \frac{x-2}{-3}=\frac{y-2}{6}=\frac{z+1}{9}\qquad\textnormal{and}\qquad \frac{x+1}{2}=\frac{y}{-4}=\frac{z-2}{-6}. \end{equation*} $\nb=(-3,6,9)\times(2,-4,-6)=\boldsymbol{0}$, so the two lines are parallel, and using the appropriate formula, we have $D=\sqrt{122/7}$. \end{example}
\begin{example} Find the distance between the two lines described by \begin{equation*} \boldsymbol{r}_1=\begin{pmatrix}2\\2\\-1\end{pmatrix} +\lambda\begin{pmatrix}-3\\6\\9\end{pmatrix}\qquad\textnormal{and}\qquad \boldsymbol{r}_2=\begin{pmatrix}-1\\0\\2\end{pmatrix} +\mu\begin{pmatrix}2\\4\\6\end{pmatrix}. \end{equation*} $\db_1\times\db_2=(0,36,-24)$, and $\ab_1-\ab_2=(3,2,-3)$, so $D=144/(12\sqrt{13})=12\sqrt{13}/13$.
\end{example}
%-------------------------------------------------------------------------------
\section{Vector products and volumes of polyhedra}
Given three vectors $\ab,\bb,\cb\in\mathbb{R}^3$, we define the \Def{scalar triple product} as $[\ab,\bb,\cb]=\ab\cdot(\bb\times\cb)$.
\begin{lemma} The scalar triple product is invariant under cyclic permutations, i.e., $[\ab,\bb,\cb]=[\cb,\ab,\bb]=[\bb,\cb,\ab]$. (May be shown via brute force calculation.) \qedwhite \end{lemma}
\begin{lemma} The scalar triple product is the determinant of the matrix with $\ab,\bb,\cb$ as its columns. \qedwhite \end{lemma}
Thus $|\mathsf{A}|$ is unchanged if we cyclically permute its columns.
\begin{example} The three vectors $(\ab,\bb,\cb)=(\eb_1,\eb_2,\eb_3)$ describe three edges of the unit cube. The scalar triple product of these three vectors is of course $1$, since $(\ab|\bb|\cb)=\mathsf{I}$, which is precisely the volume of the unit cube. For a sheared cube described by $(\ab,\bb,\cb)=(\eb_1,\eb_2,\eb_1+\eb_2+\eb_3)$, which has the same volume as the cube, the scalar triple product is also $1$. \end{example}
The latter object is a \Def{parallelepiped}, which is a three-dimensional polyhedron with six quadrilateral faces where opposite faces are parallel. Its relation to the cube is like that of a parallelogram to a square. The volume of the parallelepiped is given by the base area multiplied by the height; these are respectively given by $|\ab||\bb|\sin\theta=|\ab\times\bb|$ and $|\cb||\cos\phi|=|\cb\cdot\hat{\nb}|=|\cb\cdot(\ab\times\bb)|/|\ab\times\bb|$. Thus \begin{equation*} V=(|\ab||\bb|\sin\theta)(|\cb||\cos\phi|)=|\cb\cdot(\ab\times\bb)|, \end{equation*} and the volume of a parallelepiped spanned by three vectors is given by the absolute value of the determinant of the matrix formed by the three vectors.
\begin{example} Find the volume of the parallelepiped with vertices \begin{equation*} \begin{pmatrix}-1\\-1\\-1\end{pmatrix},\ \begin{pmatrix}0\\1\\-1\end{pmatrix},\ \begin{pmatrix}2\\1\\0\end{pmatrix},\ \begin{pmatrix}1\\-1\\0\end{pmatrix},\ \begin{pmatrix}-1\\0\\1\end{pmatrix},\ \begin{pmatrix}0\\2\\1\end{pmatrix},\ \begin{pmatrix}2\\2\\2\end{pmatrix},\ \begin{pmatrix}1\\0\\2\end{pmatrix}. \end{equation*} The first vertex has the lowest values of $x,y,z$, so we take it as the origin of the spanning vectors. The vertices adjacent to it are those differing from it in only two of the co-ordinates: the fifth, fourth and second vertex keep the same $x$, $y$ and $z$ value respectively. The spanning vectors are then $(0,1,2)$, $(2,0,1)$ and $(1,2,0)$, which gives a volume of $9$. \end{example}
For a \Def{tetrahedron} (a pyramid with a triangular base), the volume is a third of the base area multiplied by the height; for a triangular base, this is \begin{equation*} V=\frac{1}{3}\cdot\frac{1}{2}|\ab||\bb|\sin\theta\,|\cb||\cos\phi|= \frac{1}{6}|\cb\cdot(\ab\times\bb)|. \end{equation*}
\begin{example} Find the volume of a tetrahedron with vertices \begin{equation*} \ab=\begin{pmatrix}1\\0\\-1\end{pmatrix},\ \bb=\begin{pmatrix}2\\0\\1\end{pmatrix},\ \cb=\begin{pmatrix}3\\1\\2\end{pmatrix},\ \db=\begin{pmatrix}1\\-1\\1\end{pmatrix}. \end{equation*} Taking $\ab$ to be the origin, the vectors are $\bb-\ab=(1,0,2)$, $\cb-\ab=(2,1,3)$, $\db-\ab=(0,-1,2)$, and the resulting determinant of the matrix is $1$, so the volume is $1/6$. \end{example}
As a final topic, the \Def{triple vector product} is $\ab\times(\bb\times\cb) = (\ab\cdot\cb)\bb-(\ab\cdot\bb)\cb$. This may be checked by brute force, and analogous expansion results exist for repeated products of more vectors.
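As a minimal sanity check of this identity (with basis vectors chosen purely for illustration):
\begin{example}
Take $\ab=\bb=\eb_1$ and $\cb=\eb_2$. The left-hand side is $\eb_1\times(\eb_1\times\eb_2)=\eb_1\times\eb_3=-\eb_2$, whilst the right-hand side is $(\ab\cdot\cb)\bb-(\ab\cdot\bb)\cb=0\,\eb_1-1\,\eb_2=-\eb_2$, in agreement.
\end{example}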
%-------------------------------------------------------------------------------
\section{Intersection of planes and simultaneous equations}
Two non-parallel planes intersect in a line. Suppose two planes $\pi_1$ and $\pi_2$ are described by $ax+by+cz=l$ and $dx+ey+fz=m$; then the intersection is $\mathcal{C}=\ab+\lambda(\nb_1\times\nb_2)$, where $\nb_1$ and $\nb_2$ are the respective normal vectors to the planes. Suppose we now have a third plane $\pi_3$ described by $gx+hy+jz=n$; then this plane $\pi_3$ will intersect $\mathcal{C}$ at a single point provided it is not parallel to the line $\mathcal{C}$. In equation form, there is an intersection if \begin{equation*} \boldsymbol{r}\cdot\nb_3=\ab\cdot\nb_3+\lambda[\nb_1,\nb_2,\nb_3]=n, \end{equation*} so that \begin{equation*} \lambda=\frac{n-\ab\cdot\nb_3}{[\nb_1,\nb_2,\nb_3]}, \end{equation*} and a solution is well-defined if $[\nb_1,\nb_2,\nb_3]\neq0$, i.e., the determinant of the matrix formed by the normal vectors is non-zero.
\begin{lemma}[Cramer's rule] It may be shown, by algebraic manipulation, that if $\mathsf{A}\boldsymbol{r}=\bb$, then \begin{equation*} x=|\mathsf{A}_{1}|/|\mathsf{A}|,\qquad y=|\mathsf{A}_{2}|/|\mathsf{A}|,\qquad z=|\mathsf{A}_{3}|/|\mathsf{A}|, \end{equation*} where $\mathsf{A}_{j}$ has the $j^{\textnormal{th}}$ column replaced by $\bb$. \qedwhite \end{lemma}
Further rules for evaluating the determinant of matrices: \begin{itemize} \item $|\mathsf{A}|=|\mathsf{A}^{T}|$. \item If $\mathsf{B}$ is formed by multiplying a row or column of $\mathsf{A}$ by $\lambda$, then $|\mathsf{B}|=\lambda|\mathsf{A}|$. \item If $\mathsf{B}$ is formed by interchanging two rows/columns of $\mathsf{A}$, then $|\mathsf{B}|=-|\mathsf{A}|$. A cyclic permutation involves two such operations, so the determinant is unchanged in this case. \item If $\mathsf{B}$ is formed by adding an arbitrary multiple of a single row/column of $\mathsf{A}$ to another row/column, then $|\mathsf{B}|=|\mathsf{A}|$. e.g., \begin{equation*} \left|\begin{matrix}a & b & c\\ d & e & f\\ g & h & j\end{matrix}\right| =\left|\begin{matrix}a-\lambda d & b-\lambda e & c-\lambda f \\ d & e & f\\ g & h & j\end{matrix}\right|. \end{equation*} \end{itemize}
%-------------------------------------------------------------------------------
\section{Method of row reduction}
Using the last rule above, a matrix may be manipulated into an \Def{upper-triangular/echelon} form, i.e., \begin{equation*} \mathsf{A}'=\begin{pmatrix}a' & b' & c'\\ 0 & e' & f'\\ 0 & 0 & j'\end{pmatrix},\qquad |\mathsf{A}'|=a'e'j'. \end{equation*}
\begin{example} Solve the simultaneous equations \begin{equation*}\begin{aligned} 2x + 3y + z &= 23,\\ x + 7y + z &= 36,\\ 5x + 4y - 3z &= 16. \end{aligned}\end{equation*} \begin{align*} \left(\begin{matrix}2&3&1\\ 1&7&1\\ 5&4&-3\end{matrix}\right| \left.\begin{matrix}23\\ 36\\ 16\end{matrix}\right) &\rightarrow \left(\begin{matrix}2&3&1\\ 1&7&1\\ 0&-7/2&-11/2\end{matrix}\right| \left.\begin{matrix}23\\ 36\\ -83/2\end{matrix}\right)\\ &\rightarrow\left(\begin{matrix}2&3&1\\ 0&11/2&1/2\\ 0&-7/2&-11/2\end{matrix}\right| \left.\begin{matrix}23\\ 49/2\\ -83/2\end{matrix}\right)\\ &\rightarrow\left(\begin{matrix}2&3&1\\ 0&11/2&1/2\\ 0&0&-57/11\end{matrix}\right| \left.\begin{matrix}23\\ 49/2\\ -285/11\end{matrix}\right). \end{align*} From this, we can back substitute: the last row gives $z=5$, the second gives $\tfrac{11}{2}y+\tfrac{1}{2}(5)=\tfrac{49}{2}$ so $y=4$, and the first gives $2x+3(4)+5=23$ so $x=3$.
\end{example}
%===============================================================================
\chapter{Curves and surfaces in $\mathbb{R}^3$}
%-------------------------------------------------------------------------------
\section{Parametric curves and surfaces}
We are used to functions that go from $\mathbb{R}$ to $\mathbb{R}$. A parametric curve is simply the generalisation of this to the case where a function takes $\mathbb{R}$ to $\mathbb{R}^3$, \begin{equation*} \gamma:\ \mathbb{R}\rightarrow\mathbb{R}^3,\quad t\mapsto(x(t),y(t),z(t)). \end{equation*}
\begin{example} $\lambda\mapsto\ab+\lambda\db$ is the parametric mapping for a line. \end{example}
\begin{example} $t\mapsto(\cos t,\sin t, t)$ represents a helix, circling in the $xy$-plane whilst rising in $z$. \end{example}
Just as we can differentiate real functions, we can differentiate vector functions as \begin{equation*} \frac{\mathrm{d}\gamma}{\mathrm{d}t}=\left( \frac{\mathrm{d}x}{\mathrm{d}t}, \frac{\mathrm{d}y}{\mathrm{d}t}, \frac{\mathrm{d}z}{\mathrm{d}t}\right)=\boldsymbol{\tau}. \end{equation*}
\begin{example} In the above two examples, the tangent vectors are respectively $\db$ and $(-\sin t, \cos t,1)$. \end{example}
The same set of points may be parametrised in different ways. For example, the parabola on the $x=1$ plane may be parametrised by $\gamma=(1,t,t^2)$ or $\beta=(1,\ex^\lambda,\ex^{2\lambda})$. However, $\gamma$ and $\beta$ are strictly different because they are parametrised using a different variable. As in $\mathbb{R}^2$, for $\boldsymbol{r}(t)=(x(t),y(t),z(t))$, the arc length $s$ is given by \begin{equation} s(t)=\int_{t_0}^{t}\sqrt{\dot{x}^2+\dot{y}^2+\dot{z}^2}\ \mathrm{d}t. \end{equation} We have $\mathrm{d}s/\mathrm{d}t=|\mathrm{d}\boldsymbol{r}/\mathrm{d}t|$, and, at infinitesimal ranges, this is often written as \begin{equation} \mathrm{d}s^2=\left|\frac{\mathrm{d}\boldsymbol{r}}{\mathrm{d}t}\right|^2 \mathrm{d}t^2 =(\dot{x}^2+\dot{y}^2+\dot{z}^2)\mathrm{d}t^2 =(\mathrm{d}x)^2+(\mathrm{d}y)^2+(\mathrm{d}z)^2. \end{equation} (cf. the 3D version of Pythagoras' theorem.) A curve is said to be parametrised by arc length if $|\mathrm{d}\boldsymbol{r}/\mathrm{d}t|=1$, and in this case we use $s$ instead of $t$ as the parameter. Note that a curve is a collection of points, whilst the tangent is a vector, with a definite direction and length.
\begin{lemma} If $\gamma(s)$ is parametrised by arc length with tangent vector $\boldsymbol{\tau}(s)$, then $\mathrm{d}\boldsymbol{\tau}/\mathrm{d}s$ is normal to the curve $\gamma$. \end{lemma}
\begin{proof} Since $\gamma$ is parametrised by arc length, $\boldsymbol{\tau}\cdot\boldsymbol{\tau}=1$. So then \begin{equation*} \frac{\mathrm{d}}{\mathrm{d}s}(\boldsymbol{\tau}\cdot\boldsymbol{\tau}) =2\boldsymbol{\tau}\cdot\frac{\mathrm{d}\boldsymbol{\tau}}{\mathrm{d}s} =0. \end{equation*} \qed \end{proof}
We can define a surface parametrically, as \begin{equation*} S:\mathbb{R}^2\rightarrow \mathbb{R}^3,\qquad (\mu,\lambda)\mapsto(x(\mu,\lambda),y(\mu,\lambda),z(\mu,\lambda)) =\Xb(\mu,\lambda). \end{equation*} $S$ then is a function of two variables.
\begin{example} $S^2=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$ with $\theta\in[0,\pi]$ and $\phi\in[0,2\pi]$ is the unit 2-sphere. \end{example}
\begin{example} $S=(\sinh\chi\cos\phi,\sinh\chi\sin\phi,\cosh\chi)$ with $\chi\in\mathbb{R}$ and $\phi\in[0,2\pi]$ gives a hyperboloid.
\end{example}
If $\Xb(\mu,\lambda)$ traces a smooth surface $S$ in $\mathbb{R}^3$, then its tangent vectors at some $(\mu_0,\lambda_0)$ are \begin{equation*} \boldsymbol{\tau}_1 (\mu_0,\lambda_0) =\left.\ddy{\Xb}{\mu}\right|_{(\mu_0,\lambda_0)},\qquad \boldsymbol{\tau}_2 (\mu_0,\lambda_0) =\left.\ddy{\Xb}{\lambda}\right|_{(\mu_0,\lambda_0)}. \end{equation*} The \Def{tangent plane} of $S$ at $(\mu_0,\lambda_0)$ is then given by \begin{equation*} \Pi=\{\boldsymbol{r}\in\mathbb{R}^3\ |\ \boldsymbol{r}=\Xb_0 + \alpha \boldsymbol{\tau}_1 + \beta \boldsymbol{\tau}_2,\ \alpha,\beta\in\mathbb{R}\}. \end{equation*}
\begin{example} Find the tangent plane of the unit 2-sphere at $(\theta_0,\phi_0)=(\pi/2,0)$. [at the equator] We have \begin{align*} \ddy{\Xb}{\theta} &= (\cos\theta\cos\phi,\cos\theta\sin\phi,-\sin\theta),\\ \ddy{\Xb}{\phi} &= (-\sin\theta\sin\phi,\sin\theta\cos\phi,0), \end{align*} and, at the location of interest, $\Xb_0=\eb_1$, $\boldsymbol{\tau}_1=-\eb_3$, and $\boldsymbol{\tau}_2=\eb_2$, so $\Pi=\{\eb_1-\alpha\eb_3+\beta\eb_2\}$, i.e., the $yz$-plane translated to $x=1$. \end{example}
%-------------------------------------------------------------------------------
\section{Functions, surfaces and gradients}
Like planes, surfaces and tangent planes may be written in Cartesian form or using a function $f$, i.e., as \begin{equation*} ax+by+cz=d\qquad\textnormal{or}\qquad f(x,y,z)=d. \end{equation*} A \Def{scalar function} (or \Def{scalar field}) on $\mathbb{R}^3$ is a map \begin{equation*} f:\mathbb{R}^3\rightarrow\mathbb{R},\ (x,y,z)\mapsto f(x,y,z). \end{equation*} $f$ is \Def{continuous} at $\ab$ if $|f(\boldsymbol{r})-f(\ab)|\rightarrow0$ as $|\boldsymbol{r}-\ab|\rightarrow0$ along any path, and is differentiable if the partial derivatives exist.
\begin{example} For $f=x^4+x^2y^2+xyz$, \begin{equation*} \ddy{f}{x}=4x^3+2xy^2+yz,\qquad \ddy{f}{y}=2x^2y+xz,\qquad \ddy{f}{z}=xy, \end{equation*} and \begin{equation} \nabla f=\left(\ddy{f}{x},\ddy{f}{y},\ddy{f}{z}\right) \end{equation} is the gradient of the function $f$. \end{example}
$\nabla f$ gives us the direction of steepest increase of $f$, with the length of the vector representing the rate of change.
\begin{example} The graph of $f(x,y)=1-(x^2+y^2)$ is a downward-opening paraboloid of height 1 with its apex above the origin. $\nabla f=-2(x,y)$, and points inwards and uphill. \end{example}
The intuition is that $\nabla f$ points orthogonally to the contours (level sets) of $f$.
\begin{example} $f=\sqrt{x^2+y^2+z^2}=r$ is the distance from the origin. $\nabla f=(x,y,z)/r=\hat{\boldsymbol{r}}$; the level sets of $f$ are spheres, and $\nabla f$ points radially outwards. \end{example}
\begin{example} The hyperboloid $f=x^2+y^2-z^2=\textnormal{const}$ has $\nabla f=2(x,y,-z)$, and $|\nabla f|=2\sqrt{x^2+y^2+z^2}$. This points radially outwards on the $xy$-plane but down/up in the upper/lower half of $\mathbb{R}^3$. \end{example}
$\nabla f$ defines a normal $\nb$ to the surface $f=\textnormal{const}$, and hence defines the tangent plane via $\boldsymbol{r}\cdot\nb=\ab\cdot\nb$.
\begin{example} Find the tangent plane to $x^2-y^2-z^2=1$ at $(-1,0,0)$. [This is a hyperboloid of two sheets; $(-1,0,0)$ is the vertex of one sheet.] $\nabla f=2(x,-y,-z)$, so at $\ab=(-1,0,0)$ we have $\nb=(-2,0,0)$, $\nb\cdot\boldsymbol{r}=-2x$ whilst $\ab\cdot\nb=2$. Thus $x=-1$ is the tangent plane. \end{example}
\begin{example} Find the tangent plane to $x^2+yz=1$ at $(0,1,1)$. [An inclined hyperboloid.] $\nabla f=(2x,z,y)$, so at $\ab=(0,1,1)$ we have $\nb=(0,1,1)$, $\nb\cdot\boldsymbol{r}=z+y$ and $\ab\cdot\nb=2$, so the tangent plane at that location satisfies the relation $z+y=2$.
\end{example}
If $\dy f/\dy x=0$ at a location, then $f$ has a turning point in the $x$-direction. In the last example ($f=x^2+yz$), the turning point in the $x$-direction is a minimum. Looking at $f$ for fixed $y$ and $z$ shows this also. A point $P=\xb_0$ is a \Def{critical point} of $f$ if $\nabla f=\boldsymbol{0}$ there.
\begin{example} $f(x,y,z)=x^2+yz$, $\nabla f=\boldsymbol{0}$ at $\xb_0=\boldsymbol{0}$. Along directions where $y$ and $z$ have the same sign, the critical point looks like a minimum, whilst along directions where they have opposite signs, it looks like a maximum. This is an example of a \Def{saddle point}; it is a minimum when approached along the spine of the saddle, but a maximum when approached perpendicular to the spine. \end{example}
We classify extrema by using the second order derivative, which is a matrix called the \Def{Hessian}, with \begin{equation} \mathsf{H}_{ij}=\ddy{^2 f}{x^i x^j}. \end{equation}
\begin{example} Again, with $f=x^2+yz$, we have \begin{equation*} \mathsf{H}=\begin{pmatrix}2&0&0\\0&0&1\\0&1&0\end{pmatrix}. \end{equation*} \end{example}
Note that $\mathsf{H}$ is symmetric assuming $f$ is sufficiently smooth so that the order of differentiation may be interchanged. The eigenvalues of $\mathsf{H}$ then give us information about the nature of the extrema.
\begin{example} $\mathsf{H}\boldsymbol{r}=\lambda\boldsymbol{r}$ gives $\lambda=\pm1,2$. A positive eigenvalue means a local minimum along the direction of the eigenvector, and the opposite is true for negative eigenvalues. If \begin{itemize} \item all eigenvalues are positive, we have a local minimum of $f$; \item all eigenvalues are negative, we have a local maximum of $f$; \item two eigenvalues are positive and one is negative, we have a saddle point (nearby level sets are hyperboloids of one sheet); \item two eigenvalues are negative and one is positive, we again have a saddle point (nearby level sets are hyperboloids of two sheets). \end{itemize} So again we confirm that we have a saddle point at the relevant location. \end{example}
%-------------------------------------------------------------------------------
\section{Eigenvalue and eigenvectors of $3\times3$ matrix}
Recall that $\vb$ is an eigenvector of $\mathsf{A}$ if $\mathsf{A}\vb=\lambda\vb$, and $\lambda$ is the eigenvalue associated with the eigenvector.
\begin{lemma} If $\lambda$ is an eigenvalue of $\mathsf{A}$, then $|\mathsf{A}-\lambda\mathsf{I}|=0$. \end{lemma}
\begin{proof} Assuming that we have an eigenvector $\vb$ associated with $\lambda$, suppose that $|\mathsf{A}-\lambda\mathsf{I}|\neq0$. Then there exists $\mathsf{B}$ that is the inverse of $(\mathsf{A}-\lambda\mathsf{I})$. However, \begin{equation*} \vb=\mathsf{I}\vb=\mathsf{B}(\mathsf{A}-\lambda\mathsf{I})\vb= \mathsf{B}(\mathsf{A}\vb-\lambda\vb), \end{equation*} and since $\mathsf{A}\vb=\lambda\vb$, this gives $\vb=\mathsf{B}\boldsymbol{0}=\boldsymbol{0}$; for $\vb\neq\boldsymbol{0}$, we have a contradiction. \qed \end{proof}
$|\mathsf{A}-\lambda\mathsf{I}|$ gives a polynomial in $\lambda$, which is a cubic when we are concerned with vectors in $\mathbb{R}^3$. This polynomial is called the \Def{characteristic polynomial}, and $|\mathsf{A}-\lambda\mathsf{I}|=0$ is the \Def{characteristic equation} of $\mathsf{A}$.
\begin{example} For $\mathsf{H}$ above, \begin{equation*} |\mathsf{H}-\lambda\mathsf{I}|=\left|\begin{matrix} 2-\lambda & 0 & 0\\ 0 & -\lambda & 1 \\ 0 & 1 & -\lambda\end{matrix} \right|= (2-\lambda)(\lambda^2-1)=0, \end{equation*} so $\lambda=2,\pm1$. It may be shown that the associated eigenvectors are \begin{equation*} \vb_2 = \eb_1,\qquad \vb_1 = \eb_2 + \eb_3,\qquad \vb_{-1} = \eb_2-\eb_3. \end{equation*} We notice that the eigenvectors are mutually orthogonal.
\end{example}
\begin{theorem} Let $\mathsf{A}$ be a real symmetric $3\times3$ matrix with eigenvalues $\lambda_i\in\mathbb{R}$ and associated eigenvectors $\vb_i \in\mathbb{R}^3$. Then: \begin{enumerate} \item if $\lambda_i \neq \lambda_j$, then $\vb_i \cdot \vb_j = 0$; \item if $\lambda_i = \lambda_j$, we may choose $\vb_i \cdot \vb_j = 0$. \end{enumerate} \end{theorem}
\begin{proof} By writing $\vb_i \cdot \vb_j$ as $\vb_i^T \vb_j = \vb_j^T \vb_i$, we have $\lambda_i \vb_i\cdot\vb_j = \lambda_i \vb_j^T \vb_i = \vb_j^T \lambda_i \vb_i = \vb_j^T \mathsf{A}\vb_i$. Taking the transpose does nothing to this because it is a scalar, so $\lambda_i \vb_i \cdot \vb_j = (\vb_j^T \mathsf{A} \vb_i)^T = \vb_i^T \mathsf{A}^T \vb_j$. Now, $\mathsf{A}$ is symmetric, so $\lambda_i \vb_i \cdot \vb_j = \vb_i^T \mathsf{A} \vb_j = \lambda_j \vb_i^T \vb_j = \lambda_j \vb_i\cdot \vb_j$. Thus we have $(\lambda_j-\lambda_i)\vb_i \cdot\vb_j =0$, and the first result follows. Suppose now that $\lambda_i = \lambda_j$. Then $\vb_i$ and $\vb_j$ span a 2D subspace/plane in which every vector is an eigenvector of $\mathsf{A}$ with the same eigenvalue. If $\vb_i\cdot\vb_j \neq0$, then we may redefine $\vb_i$ as \begin{equation*} \vb_i' = \vb_i-\frac{(\vb_i \cdot \vb_j)\vb_j}{|\vb_j|^2}, \end{equation*} which is the projection of $\vb_i$ onto the subspace orthogonal to $\vb_j$. \qed \end{proof}
The latter process of using projections to generate an orthogonal basis is known as \Def{Gram--Schmidt orthogonalisation}.
%-------------------------------------------------------------------------------
\section{Quadric surfaces}
A quadric is the generalisation of the conic to surfaces in $\mathbb{R}^3$. This has the form $\boldsymbol{r}^T\mathsf{A}\boldsymbol{r}+\bb\cdot\boldsymbol{r} + c=0$. $\mathsf{A}$ here is symmetric and of the form \begin{equation*} \mathsf{A}=\begin{pmatrix}a&b&c\\b&d&e\\c&e&f\end{pmatrix}. \end{equation*}
\begin{example} Suppose the cross terms vanish (i.e., the off-diagonal entries of $\mathsf{A}$ are zero), and consider $ax^2+dy^2+fz^2=1$. Then we have the following cases: \begin{enumerate} \item An ellipsoid when $a,d,f>0$. This may be parametrised as \begin{equation*} x=\frac{1}{\sqrt{a}}\sin\theta\cos\phi,\qquad y=\frac{1}{\sqrt{d}}\sin\theta\sin\phi,\qquad z=\frac{1}{\sqrt{f}}\cos\theta. \end{equation*} \item A hyperboloid of one sheet when $a,d>0$, $f<0$. For fixed $z$, $x$ and $y$ are constrained to be on an ellipse (a circle if $a=d$). This may be parametrised as \begin{equation*} x=\frac{1}{\sqrt{a}}\cosh\chi\cos\phi,\qquad y=\frac{1}{\sqrt{d}}\cosh\chi\sin\phi,\qquad z=\frac{1}{\sqrt{|f|}}\sinh\chi. \end{equation*} \item A hyperboloid of two sheets when $a,d<0$, $f>0$. There is no solution for a limited range of $z$ (namely $|z|<1/\sqrt{f}$). This may be parametrised as \begin{equation*} x=\frac{1}{\sqrt{|a|}}\sinh\chi\cos\phi,\qquad y=\frac{1}{\sqrt{|d|}}\sinh\chi\sin\phi,\qquad z=\frac{1}{\sqrt{f}}\cosh\chi. \end{equation*} \item No solutions when $a,d,f<0$. \end{enumerate} \end{example}
Note that, generally, we have \begin{equation*} f(x,y,z)=ax^2+2bxy+2cxz+dy^2+2eyz+fz^2 = \textnormal{constant}. \end{equation*} As with conics, it is useful to identify the nearest/furthest points from the origin. This will occur when the normal is pointing directly from the origin, i.e., $\nb=\nabla f\propto\boldsymbol{r}$, or $\nabla f=2\lambda\boldsymbol{r}$. Since $\nabla f=2\mathsf{A}\boldsymbol{r}$, we have $\mathsf{A}\boldsymbol{r} = \lambda\boldsymbol{r}$.
Thus the eigenvectors of $\mathsf{A}$ give the directions of the extrema of the quadric surface from the origin, and these principal axes are orthogonal by the previous theorem (since $\mathsf{A}$ is symmetric). Since any vector in $\mathbb{R}^3$ may be written as a sum of three orthogonal eigenvectors $\vb_1$, $\vb_2$, $\vb_3$, we have $\boldsymbol{r}=r_1 \vb_1 + r_2 \vb_2 + r_3 \vb_3$, so \begin{align*} \boldsymbol{r}^T \mathsf{A}\boldsymbol{r} &= (r_1 \vb_1^T + r_2\vb_2^T + r_3 \vb_3^T)\mathsf{A}(r_1 \vb_1 + r_2 \vb_2 + r_3 \vb_3)\\ &= r_1^2 \lambda_1|\vb_1|^2 + r_2^2 \lambda_2|\vb_2|^2 + r_3^2 \lambda_3|\vb_3|^2. \end{align*} Choosing the $\vb_i$ to be unit vectors, we have $\boldsymbol{r}^T \mathsf{A}\boldsymbol{r}=r_1^2 \lambda_1 + r_2^2 \lambda_2 + r_3^2 \lambda_3$. Since the $\vb_i$ are orthonormal, we can think of them as the new $xyz$-axes.
\begin{example} Classify the quadric $5x^2-6xy+5y^2+9z^2=1$. \begin{equation*} \mathsf{A}=\begin{pmatrix}5&-3&0\\-3&5&0\\0&0&9\end{pmatrix},\qquad |\mathsf{A}-\lambda\mathsf{I}|=(9-\lambda)(8-\lambda)(2-\lambda)=0. \end{equation*} All eigenvalues are positive, so we have an ellipsoid. It may be shown the principal axes are \begin{equation*} \vb_9 = \eb_3,\qquad \vb_8=(\eb_1-\eb_2)/\sqrt{2},\qquad \vb_2=(\eb_1+\eb_2)/\sqrt{2}. \end{equation*} \end{example}
%===============================================================================
\chapter{Linear maps}
A set of vectors $\{\vb_1,\vb_2,\vb_3\}\subset\mathbb{R}^3$ is a \Def{basis} for $\mathbb{R}^3$ if any vector $\ub\in\mathbb{R}^3$ may be written uniquely as $\ub=u_1 \vb_1 + u_2 \vb_2 + u_3 \vb_3$. The $u_i$ here are the components of the vector $\ub$ with respect to the basis $\{\vb_i\}$. Suppose $\boldsymbol{r}$ has co-ordinates $x_1, x_2, x_3$ with respect to $\{\vb_i\}$. Let $\mathsf{P}=(\vb_1|\vb_2|\vb_3)$, the matrix with the $\vb_i$ as columns. Then, by inspection, $\vb_i=\mathsf{P}\eb_i$, where $\eb_i$ is the standard Cartesian basis; $\mathsf{P}$ is the linear map that takes us from the standard basis to the basis $\{\vb_i\}$.
\begin{theorem} Let $S=\{\vb_1,\vb_2,\vb_3\}$ be a set of three vectors in $\mathbb{R}^3$, and $\mathsf{P}=(\vb_1|\vb_2|\vb_3)$. $S$ is a basis of $\mathbb{R}^3$ if and only if $|\mathsf{P}|\neq0$. \end{theorem}
\begin{proof} Suppose that $|\mathsf{P}|\neq0$. Then there exists $\mathsf{P}^{-1}$. For an arbitrary vector $\xb\in\mathbb{R}^3$, we may write $\xb=\mathsf{P}\mathsf{P}^{-1}\xb=\mathsf{P}\yb$, where $\yb=\mathsf{P}^{-1}\xb$ is unique. Now, \begin{equation*} \xb=\mathsf{P}\yb=y_1 \mathsf{P}\eb_1 + y_2 \mathsf{P}\eb_2 + y_3 \mathsf{P}\eb_3 = y_1 \vb_1 + y_2 \vb_2 + y_3 \vb_3, \end{equation*} so $S$ is a basis. Suppose now that $S$ is a basis. Then, in particular, the standard basis has the representation \begin{equation*} \eb_i = \sum_{j=1}^3 e_i^j \vb_j. \end{equation*} The scalar triple product of the standard basis is 1, so \begin{equation*} 1=[\eb_1,\eb_2,\eb_3]=\sum_{i,j,k=1}^3 e_1^i e_2^j e_3^k [\vb_i,\vb_j,\vb_k]. \end{equation*} Now $[\vb_i,\vb_j,\vb_k]=\pm|\mathsf{P}|$ when $i,j,k$ are all distinct, and vanishes otherwise; if $|\mathsf{P}|=0$, the right-hand side above would be zero, a contradiction. Thus $|\mathsf{P}|\neq0$. \qed \end{proof}
\begin{example} Which of \begin{equation*} B_1=\{(4,1,2),(2,5,1),(0,1,0)\},\qquad B_2=\{(4,1,2),(2,5,1),(0,0,1)\} \end{equation*} forms a basis in $\mathbb{R}^3$? $|\mathsf{P}_1|=0$ whilst $|\mathsf{P}_2|=18$, so only $B_2$ forms a basis.
\end{example}
%-------------------------------------------------------------------------------
\section{Axioms}
A linear map $L:\mathbb{R}^3\rightarrow\mathbb{R}^3$ is a function such that, for all $\ub,\vb\in\mathbb{R}^3$ and $\lambda,\mu\in\mathbb{R}$, $L(\lambda\ub+\mu\vb)=\lambda L(\ub)+\mu L(\vb)$. Note then that $L(\boldsymbol{0})=\boldsymbol{0}$.
\begin{example} $L(\ub)=(3u_1 -u_2 +u_3, u_3, 0)$ is linear. $L(\xb)=(xy,x,z)$ is not linear since the first component is not a linear function. $L(\xb)=(x+1,y,z)$ is not linear since it is not a homogeneous function of degree $1$. $\mathsf{P}=(\vb_1 |\vb_2 |\vb_3)$ induces a linear map by definition. \end{example}
\begin{theorem} A map $L:\mathbb{R}^3\rightarrow\mathbb{R}^3$ is linear iff there exists a $3\times3$ matrix $\mathsf{A}$ where $L(\vb)=\mathsf{A}\vb$ for all $\vb\in\mathbb{R}^3$. Moreover, $\mathsf{A}=(L(\eb_1)|L(\eb_2)|L(\eb_3))$. \end{theorem}
\begin{proof} Assuming $L$ is a linear map, then by the linearity property, $L(\vb)=v_1 L(\eb_1)+ v_2 L(\eb_2) + v_3 L(\eb_3) = (L(\eb_1)|L(\eb_2)|L(\eb_3))\vb$ as required. Assuming we have $L(\vb)=\mathsf{A}\vb$, by the properties of matrices, $\mathsf{A}$ automatically induces a linear map. \qed \end{proof}
\begin{example} \begin{equation*} L\xb=\begin{pmatrix}3x-y+z\\ z\\ 0\end{pmatrix},\qquad \mathsf{A}=\begin{pmatrix}3&-1&1\\0&0&1\\0&0&0\end{pmatrix} \end{equation*} \begin{equation*} L\xb=\begin{pmatrix}6x-2y+5z\\2x+3y-z\\x+5y+2z\end{pmatrix},\qquad \mathsf{A}=\begin{pmatrix}6&-2&5\\2&3&-1\\1&5&2\end{pmatrix} \end{equation*} \end{example}
Suppose $L_1$ and $L_2$ are linear maps associated with $\mathsf{A}_1$ and $\mathsf{A}_2$ respectively. Then the combined linear map $L_1\circ L_2(\vb)$ may be represented by $\mathsf{B}=\mathsf{A}_1 \mathsf{A}_2$.
\begin{lemma} Suppose $L:\mathbb{R}^3\rightarrow\mathbb{R}^3$ is a linear map represented by $\mathsf{A}$. If $|\mathsf{A}|\neq0$, then there exists a map $L^{-1}$ such that $L^{-1}\circ L(\vb)=L\circ L^{-1}(\vb)=\vb$. \end{lemma}
\begin{proof} This follows immediately from the fact that $\mathsf{A}^{-1}$ exists when $|\mathsf{A}|\neq0$. \end{proof}
%-------------------------------------------------------------------------------
\section{Geometry of linear maps}
There are various linear maps with geometric interpretations. For example, a reflection in the $yz$-plane is represented by \begin{equation*} \begin{pmatrix}x\\y\\z\end{pmatrix}\rightarrow \begin{pmatrix}-x\\y\\z\end{pmatrix},\qquad \mathsf{A}=\begin{pmatrix}-1&0&0\\0&1&0\\0&0&1\end{pmatrix}. \end{equation*} A projection onto the $xy$-plane is represented by \begin{equation*} \begin{pmatrix}x\\y\\z\end{pmatrix}\rightarrow \begin{pmatrix}x\\y\\0\end{pmatrix},\qquad \mathsf{A}=\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\end{pmatrix}. \end{equation*} A rotation of $\theta$ radians in the $xy$-plane is \begin{equation*} \mathsf{A}=\begin{pmatrix}\cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\0&0&1\end{pmatrix}. \end{equation*}
\subsection{Projection}
Let $\Pi$ be a plane through $\boldsymbol{0}$ with normal vector $\nb$. For $\xb\in\mathbb{R}^3$, \begin{equation*} \xb=\frac{\xb\cdot\nb}{|\nb|^2}\nb +\left(\xb-\frac{\xb\cdot\nb}{|\nb|^2}\nb\right). \end{equation*} We note that the first portion is normal to $\Pi$, whilst the second part is orthogonal to $\nb$. Since the equation of $\Pi$ is $\boldsymbol{r}\cdot\nb=0$, the projection of $\xb$ onto $\Pi$ is given by \begin{equation*} P(\xb)=\xb-\frac{\xb\cdot\nb}{|\nb|^2}\nb=\xb-\xb\cdot\hat{\nb}\hat{\nb}.
\end{equation*} This is a linear map because \begin{equation*} P(\lambda\ub+\vb)=\lambda\ub+\vb-(\lambda\ub+\vb)\cdot\hat{\nb}\hat{\nb} =\lambda(\ub-\ub\cdot\hat{\nb}\hat{\nb})+\vb-\vb\cdot\hat{\nb}\hat{\nb} =\lambda P(\ub)+P(\vb). \end{equation*}
\subsection{Reflection}
Similarly, since $\xb$ is the sum of $P(\xb)$ and a component normal to $\Pi$, to reflect $\xb$ in $\Pi$ we subtract the normal component twice, so \begin{equation*} R(\xb)=\xb-2\xb\cdot\hat{\nb}\hat{\nb}. \end{equation*}
\subsection{Rotation}
In $\mathbb{R}^3$, a rotation always leaves a line/axis invariant. Let $\lb$ be a line through $\boldsymbol{0}$ with direction vector $\db$, then $\lb=\lambda\db$, so \begin{equation*} \xb=(\xb-\xb\cdot\hat{\db}\hat{\db})+\xb\cdot\hat{\db}\hat{\db}. \end{equation*} The first part is orthogonal to $\lb$ by construction, and a third vector orthogonal to both $\hat{\db}$ and this part is $(\xb-\xb\cdot\hat{\db}\hat{\db})\times\hat{\db}=\xb\times\hat{\db}$. By analogy with rotation around a basis vector, we have \begin{equation*} R(\xb)=\xb\cdot\hat{\db}\hat{\db}+(\xb-\xb\cdot\hat{\db}\hat{\db})\cos\theta -(\xb\times\hat{\db})\sin\theta. \end{equation*}
\begin{example} Let $\Pi$ be a plane through $\boldsymbol{0}$ with $\nb=(1,1,2)$. Find the matrix of projection onto $\Pi$ with respect to the standard basis. $\nb\cdot\xb=x+y+2z$, $|\nb|^2=6$, so \begin{equation*} P(\xb)=\begin{pmatrix}x-(x+y+2z)/6\\y-(x+y+2z)/6\\ z-(x+y+2z)/3\end{pmatrix},\qquad \mathsf{A}=\frac{1}{6}\begin{pmatrix} 5 & -1 & -2 \\ -1 & 5 & -2 \\ -2 & -2 & 2\end{pmatrix}. \end{equation*} \end{example}
\begin{example} Let $\Pi$ be a plane through $\boldsymbol{0}$ with $\nb=(1,-1,1)$. Find the matrix of reflection in $\Pi$ with respect to the standard basis. $\nb\cdot\xb=x-y+z$, $|\nb|^2=3$, so \begin{equation*} R(\xb)=\begin{pmatrix}x-(x-y+z)(2/3)\\y+(x-y+z)(2/3)\\ z-(x-y+z)(2/3)\end{pmatrix},\qquad \mathsf{A}=\frac{1}{3}\begin{pmatrix} 1 & 2 & -2 \\ 2 & 1 & 2 \\ -2 & 2 & 1\end{pmatrix}. \end{equation*} \end{example}
\begin{example} Find the matrix of rotation representing a $-\pi/3$ rotation about $\lb=\lambda(1,1,1)$. $\xb\cdot\db=x+y+z$, $|\db|^2=3$, and \begin{equation*} \xb-\xb\cdot\hat{\db}\hat{\db} =\frac{1}{3}\begin{pmatrix}2x-y-z\\-x+2y-z\\-x-y+2z\end{pmatrix},\qquad \xb\times\hat{\db} =\frac{1}{\sqrt{3}}\begin{pmatrix}y-z\\z-x\\x-y\end{pmatrix}. \end{equation*} Collecting this, \begin{equation*} R_{-\pi/3}(\xb) =\frac{1}{3}\begin{pmatrix}x+y+z\\x+y+z\\x+y+z\end{pmatrix}+ \frac{1}{2}\frac{1}{3} \begin{pmatrix}2x-y-z\\-x+2y-z\\-x-y+2z\end{pmatrix} +\frac{\sqrt{3}}{2}\frac{1}{\sqrt{3}} \begin{pmatrix}y-z\\z-x\\x-y\end{pmatrix}, \end{equation*} and \begin{equation*} \mathsf{A}=\frac{1}{3}\begin{pmatrix}2 & 2 & -1\\ -1 & 2 & 2\\2 & -1 & 2\end{pmatrix}. \end{equation*} \end{example}
\begin{example} Applying a reflection twice returns the original vector: \begin{equation*} R(R(\xb))=R(\xb-2\xb\cdot\hat{\nb}\hat{\nb}) =R(\xb)-2R(\xb)\cdot\hat{\nb}\hat{\nb} =\xb-2\xb\cdot\hat{\nb}\hat{\nb} -2(\xb-2\xb\cdot\hat{\nb}\hat{\nb})\cdot\hat{\nb}\hat{\nb} =\xb, \end{equation*} so $R\circ R$ is the identity. \end{example}
%-------------------------------------------------------------------------------
\section{Change of basis}
\begin{theorem} Let $L$ be a linear map given by $\mathsf{A}$ with respect to $\{\eb_i\}$. Let $S=\{\vb_i\}$ be another basis of $\mathbb{R}^3$, and $\mathsf{P}=(\vb_1 |\vb_2| \vb_3)$. Then, with respect to $S$, $L$ is represented by $\mathsf{P}^{-1}\mathsf{A}\mathsf{P}$. \end{theorem}
\begin{proof} From the previous theorem, a vector $\xb$ with co-ordinates $\yb$ with respect to $S$ satisfies $\xb=\mathsf{P}\yb$, with $|\mathsf{P}|\neq0$ since $S$ is a basis.
Then $L(\xb)=\mathsf{A}\xb=\mathsf{A}\mathsf{P}\yb$, and the co-ordinates of $L(\xb)$ with respect to $S$ are $\mathsf{P}^{-1}\mathsf{A}\mathsf{P}\yb$, so with respect to $S$ the map is represented by $\mathsf{P}^{-1}\mathsf{A}\mathsf{P}$, as required. \qed \end{proof}
\begin{example} Suppose $L$ is represented by \begin{equation*} \mathsf{A}=\begin{pmatrix}1&3&3\\-2&3&-2\\3&2&1\end{pmatrix} \end{equation*} with respect to the standard basis. Find $\mathsf{A}$ with respect to $S=\{(1,0,0),(-1,1,0),(0,-1,1)\}$. \begin{equation*} \mathsf{P}=\begin{pmatrix}1&-1&0\\0&1&-1\\0&0&1\end{pmatrix},\qquad \mathsf{P}^{-1}=\begin{pmatrix}1&1&1\\0&1&1\\0&0&1\end{pmatrix},\qquad \mathsf{P}^{-1}\mathsf{A}\mathsf{P}= \begin{pmatrix}2&6&-6\\1&4&-6\\3&-1&-1\end{pmatrix}. \end{equation*} \end{example}
\begin{example} Find the matrix of the linear map $L$ with respect to the basis $S$, where \begin{equation*} L\xb=\begin{pmatrix}4x-z\\2x+3y-2z\\2x+2y-z\end{pmatrix},\qquad S=\left\{\begin{pmatrix}1\\1\\1\end{pmatrix}, \begin{pmatrix}1\\2\\2\end{pmatrix}, \begin{pmatrix}1\\2\\3\end{pmatrix}\right\}. \end{equation*} \begin{equation*} \mathsf{A}=\begin{pmatrix}4&0&-1\\2&3&-2\\2&2&-1\end{pmatrix},\qquad \mathsf{P}=\begin{pmatrix}1&1&1\\1&2&2\\1&2&3\end{pmatrix},\qquad \mathsf{P}^{-1}=\begin{pmatrix}2&-1&0\\-1&2&-1\\0&-1&1\end{pmatrix}, \end{equation*} so \begin{equation*} \mathsf{P}^{-1}\mathsf{A}\mathsf{P}= \begin{pmatrix}3&0&0\\0&2&0\\0&0&1\end{pmatrix}. \end{equation*} (The basis vectors of $S$ are in fact eigenvectors of $\mathsf{A}$, with eigenvalues $3$, $2$ and $1$, which is why the result is diagonal.) \end{example}
Finally, we return to rotations. Since $(R_\theta)^{-1}=R_{-\theta}$, we have \begin{equation*} \mathsf{A}=\begin{pmatrix}\cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\0&0&1\end{pmatrix},\qquad \mathsf{A}^{-1}=\begin{pmatrix}\cos\theta&\sin\theta&0\\ -\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix}=\mathsf{A}^{T}. \end{equation*} Now, the scalar product is preserved under rotation (since rotations preserve lengths and angles). Any transformation with the property that $\mathsf{A}^{T}=\mathsf{A}^{-1}$ is called \Def{orthogonal}, and the set of orthogonal transformations forms the \Def{group} $O(3)$, which includes rotations and reflections. If we restrict $O(3)$ to linear maps where the associated matrix has $|\mathsf{A}|=+1$, then we only have rotations, and the subset is in fact a subgroup called the \Def{special orthogonal} group, denoted $SO(3)$.
%===============================================================================
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% r.5 contents
%\tableofcontents
%\listoffigures
%\listoftables
% r.7 dedication
%\cleardoublepage
%~\vfill
%\begin{doublespace}
%\noindent\fontsize{18}{22}\selectfont\itshape
%\nohyphenation
%Dedicated to those who appreciate \LaTeX{}
%and the work of \mbox{Edward R.~Tufte}
%and \mbox{Donald E.~Knuth}.
%\end{doublespace}
%\vfill
% r.9 introduction
% \cleardoublepage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% actual useful crap (normal chapters)
\mainmatter
%\part{Basics (?)}
%\backmatter
%\bibliography{refs}
\bibliographystyle{plainnat}
%\printindex
\end{document}
{ "alphanum_fraction": 0.6560165355, "avg_line_length": 39.234000977, "ext": "tex", "hexsha": "3179dc15b0286ae882558abc2aa931171e433348", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-02-04T11:18:12.000Z", "max_forks_repo_forks_event_min_datetime": "2022-02-04T11:18:12.000Z", "max_forks_repo_head_hexsha": "d90e2303f64c56e97f4124903cfcaa4aa73ca227", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Decenov/academic-notes", "max_forks_repo_path": "UG_maths/src/Geometry_1A.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "d90e2303f64c56e97f4124903cfcaa4aa73ca227", "max_issues_repo_issues_event_max_datetime": "2021-05-12T05:19:07.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-12T05:19:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Decenov/academic-notes", "max_issues_repo_path": "UG_maths/src/Geometry_1A.tex", "max_line_length": 199, "max_stars_count": null, "max_stars_repo_head_hexsha": "d90e2303f64c56e97f4124903cfcaa4aa73ca227", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Decenov/academic-notes", "max_stars_repo_path": "UG_maths/src/Geometry_1A.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 29826, "size": 80312 }
\documentclass{proposalnsf} %-------------------------------------------------------------------- PROCESS WITH XeLaTeX \usepackage{fontspec}% provides font selecting commands \usepackage{xunicode}% provides unicode character macros \usepackage{xltxtra} % provides some fixes/extras \setromanfont[Mapping=tex-text, SmallCapsFont={Palatino}, SmallCapsFeatures={Scale=0.85}]{Palatino} \setsansfont[Scale=0.85]{Trebuchet MS} \setmonofont[Scale=0.85]{Monaco} \renewcommand{\captionlabelfont}{\bf\sffamily} \usepackage[hang,flushmargin]{footmisc} % 'hang' flushes the footnote marker to the left, 'flushmargin' flushes the text as well. % Define the color to use in links: \definecolor{linkcol}{rgb}{0.459,0.071,0.294} \definecolor{sectcol}{rgb}{0.63,0.16,0.16} % {0,0,0} \definecolor{propcol}{rgb}{0.75,0.0,0.04} \definecolor{blue}{rgb}{0,0,0} \definecolor{green}{rgb}{0.5,0.5,0.5} \definecolor{gray}{rgb}{0.25,0.25,0.25} \definecolor{ngreen}{rgb}{0.7,0.7,0.7} % a darker shade of green \usepackage[ xetex, pdftitle={NSF proposal}, pdfauthor={Gregory V.\ Wilson}, pdfpagemode={UseOutlines}, pdfpagelayout={TwoColumnRight}, bookmarks, bookmarksopen,bookmarksnumbered={True}, pdfstartview={FitH}, colorlinks, linkcolor={sectcol},citecolor={sectcol},urlcolor={sectcol} ]{hyperref} %% Define a new style for the url package that will use a smaller font. \makeatletter \def\url@leostyle{% \@ifundefined{selectfont}{\def\UrlFont{\sf}}{\def\UrlFont{\small\ttfamily}}} \makeatother %% Now actually use the newly defined style. \urlstyle{leo} % this handles hanging indents for publications \def\rrr#1\\{\par \medskip\hbox{\vbox{\parindent=2em\hsize=6.12in \hangindent=4em\hangafter=1#1}}} \addto\captionsamerican{% \renewcommand{\refname}% {References Cited}% } % solution found here: http://www.tex.ac.uk/cgi-bin/texfaq2html?label=latexwords \def\baselinestretch{1} \setlength{\parindent}{0mm} \setlength{\parskip}{0.8em} \newlength{\up} \setlength{\up}{-4mm} \newlength{\hup} \setlength{\hup}{-2mm} \sectionfont{\large\bfseries\color{sectcol}\vspace{-2mm}} \subsectionfont{\normalsize\it\bfseries\vspace{-4mm}} \subsubsectionfont{\normalsize\mdseries\itshape\vspace{-4mm}} %\itshape \paragraphfont{\bfseries} % --------------------------------------------------------------------- \begin{document} % ------------------------------------------------------------------- Biosketch %\newpage \pagenumbering{arabic} \renewcommand{\thepage} {\footnotesize Bio.\,---\,\arabic{page}} \section*{Dr.\ Gregory V.\ Wilson} \small \textbf{Professional preparation:} \begin{tabular}{llcc} Institution & Major & Degree & Year \\ \hline Queen's University & Mathematics and Engineering & BSc & 1984 \\ University of Edinburgh & Artificial Intelligence & MSc & 1986 \\ University of Edinburgh & Computer Science & PhD & 1993 \\ \end{tabular} \textbf{Appointments:} \begin{tabular}{ll} 2012 -- present & Project lead, Software Carpentry, Mozilla Foundation \\ 2011 & Software engineer, Side Effects Software Inc. \\ 2010 -- 2011 & Independent consultant and trainer \\ 2006 -- 2010 & Assistant Professor, Dept.\ of Computer Science, University of Toronto \\ 2004 -- 2006 & Independent consultant and trainer \\ 2000 -- 2004 & Software engineer, Nevex/Baltimore Technologies/Hewlett-Packard \\ 1998 -- 2000 & Independent consultant and trainer \\ 1996 -- 1998 & Software engineer, Visible Decisions Inc. 
\\ 1995 -- 1996 & Scientist, Centre for Advanced Studies, IBM Toronto \\ 1992 -- 1995 & Post-doctoral researcher at various universities \\ 1986 -- 1992 & Software engineer, Edinburgh Parallel Computing Centre \\ 1985 & Software engineer, Bell-Northern Research \\ 1984 -- 1985 & Software engineer, Miller Communications Ltd. \\ \end{tabular} \textbf{Products:} \emph{Closely Related Products} \vspace{\up} \begin{list}{$\ast$}{\setlength{\leftmargin}{1em}} \item Software Carpentry: http://software-carpentry.org. Co-founded with Brent Gorda in 1998, this project's aim is to teach basic computing skills to researchers in science, engineering, medicine, and related fields. It now has over 100 volunteer instructors in a dozen countries, who delivered training to almost 4500 people in 2013 alone. All of Software Carpentry's lesson materials are freely available under the Creative Commons - Attribution license. \item Jo~Erskine Hannay, Hans~Petter Langtangen, Carolyn MacLeod, Dietmar Pfahl, Janice Singer, and Greg Wilson. How do scientists develop and use scientific software? In \emph{Proceedings of the Second International Workshop on Software Engineering for Computational Science and Engineering (SE-CSE 2009)}. IEEE, 2009. The largest study ever done of how scientists use computers in their research. \end{list} \vspace{\up} \emph{Other Significant Products} \vspace{\up} \begin{list}{$\ast$}{\setlength{\leftmargin}{1em}} \item Amy Brown and Greg Wilson (eds.). \emph{The Architecture of Open Source Applications}. Lulu, 2011. A series of books (the fourth volume is due to appear in 2014) in which the creators of major open source packages describe the architecture and performance of their software. \item Andy Oram and Greg Wilson (eds.). \emph{Making Software: What Really Works, and Why We Believe It}. O'Reilly, 2010. A collection of essays by leading researchers in empirical software engineering summarizing what we actually know about software development. \item Jennifer Campbell, Paul Gries, Jason Montojo, and Greg Wilson. \emph{Practical Programming}. Pragmatic Bookshelf, 2009. A CS-1 introduction to programming using Python. \item Andy Oram and Greg Wilson (eds.). \emph{Beautiful Code: Leading Programmers Explain How They Think}. O'Reilly, 2007. A collection of essays from leading programmers about elegant software; winner of the 2008 Jolt Award for Best General Book. \end{list} \pagebreak \textbf{Synergistic activities:} % -------------------------------------------- Dr.\ Wilson has consistently demonstrated a strong commitment to undergraduate education over more than two decades. He founded the Edinburgh Parallel Computing Centre's Summer Scholarship Programme, which recruited and trained 60 students between 1988 and 1992. While at the University of Toronto, he supervised over 150 undergraduates working alone or in small teams on almost 100 real-world projects, of which more than half were for clients outside the Computer Science department. In 2009, he created the UCOSP program, through which students from over a dozen Canadian universities work for a term in distributed teams on open source projects. For this and other work, Dr.\ Wilson was named ComputerWorld Canada's IT Educator of the Year in 2010. Dr.\ Wilson has also been very active in the open source community for more than 15 years: He is a member of the Python Software Foundation, and has been a mentor for Google's Summer of Code program since its inception in 2005. 
He also works to build bridges between empirical software engineering research and industrial practice: he was an invited keynote speaker at SPLASH 2013, and a steward of ``It Will Never Work in Theory'' (http://neverworkintheory.org), an online forum for discussion of empirical results in software engineering that are of particular interest to practitioners. As part of this work, he has initiated and edited a series of books connecting theory to practice, including \emph{Beautiful Code}, \emph{Making Software}, and a multi-volume series titled \emph{The Architecture of Open Source Applications} (available at http://aosabook.org).

\textbf{Collaborators \& Other Affiliations:}
% --------------------------------------------
Marian Petre (Open U.); Amy Brown (independent contractor); Eleni Stroulia (U.\ Alberta); Ken Bauer (Tecnologico de Monterey); Michelle Craig (U.\ Toronto); Karen Reid (U.\ Toronto); Andy Oram (O'Reilly Media); D.A.\ Aruliah (University of Ontario Institute of Technology); C.\ Titus Brown (Michigan State U.); Neil P.\ Chue Hong (Software Sustainability Institute); Matt Davis (Datapad, Inc.); Richard T.\ Guy (Microsoft); Steven H.D.\ Haddock (Monterey Bay Aquarium Research Institute); Kathryn D.\ Huff (U.\ California - Berkeley); Ian M.\ Mitchell (U.\ British Columbia); Mark D.\ Plumbley (Queen Mary University London); Ben Waugh (University College London); Ethan P.\ White (Utah State U.); Paul Wilson (U.\ Wisconsin - Madison); Allen Malony (U.\ Oregon); Jonathan Schaeffer (U.\ Alberta); Richard Brent (Australian National U.); Henri Bal (Vrije U.\ Amsterdam).

\textbf{Graduated students:} Samira Abdi Ashtiani, Aran Donohue, Jeremy Handcock, Carolyn MacLeod, Jason Montojo, Rory Tulk.

\end{document}
{ "alphanum_fraction": 0.7317434961, "avg_line_length": 38.9511111111, "ext": "tex", "hexsha": "02b9b32395537bc4255ea2fa734d7c353b0cc3dc", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2015-11-02T12:32:31.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-02T12:32:31.000Z", "max_forks_repo_head_hexsha": "b59deee565f248d07af912702d2b56faa50a3765", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "swcarpentry/iuse2014", "max_forks_repo_path": "wilson-greg-bio.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "b59deee565f248d07af912702d2b56faa50a3765", "max_issues_repo_issues_event_max_datetime": "2016-01-12T19:19:30.000Z", "max_issues_repo_issues_event_min_datetime": "2016-01-12T19:19:25.000Z", "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "swcarpentry/iuse2014", "max_issues_repo_path": "wilson-greg-bio.tex", "max_line_length": 151, "max_stars_count": 5, "max_stars_repo_head_hexsha": "b59deee565f248d07af912702d2b56faa50a3765", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "swcarpentry/iuse2014", "max_stars_repo_path": "wilson-greg-bio.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-05T15:38:42.000Z", "max_stars_repo_stars_event_min_datetime": "2015-06-18T02:35:24.000Z", "num_tokens": 2282, "size": 8764 }
\chapter{Cookbook -- Cool things to do with it}
\label{chapter:cookbook}

Biopython now has two collections of ``cookbook'' examples -- this chapter (which has been included in this tutorial for many years and has gradually grown), and \url{http://biopython.org/wiki/Category:Cookbook} which is a user contributed collection on our wiki.

We're trying to encourage Biopython users to contribute their own examples to the wiki. In addition to helping the community, one direct benefit of sharing an example like this is that you could also get some feedback on the code from other Biopython users and developers - which could help you improve all your Python code.

In the long term, we may end up moving all of the examples in this chapter to the wiki, or elsewhere within the tutorial.

\section{Working with sequence files}
\label{sec:cookbook-sequences}

This section shows some more examples of sequence input/output, using the \verb|Bio.SeqIO| module described in Chapter~\ref{chapter:seqio}.

\subsection{Filtering a sequence file}

Often you'll have a large file with many sequences in it (e.g. a FASTA file of genes, or a FASTQ or SFF file of reads), a separate shorter list of the IDs for a subset of sequences of interest, and want to make a new sequence file for this subset.

Let's say the list of IDs is in a simple text file, as the first word on each line. This could be a tabular file where the first column is the ID. Try something like this:

%not a doctest to avoid temp files being left behind, also no >>>
%makes it easier to copy and paste the example to a script file.
\begin{minted}{python}
from Bio import SeqIO

input_file = "big_file.sff"
id_file = "short_list.txt"
output_file = "short_list.sff"

with open(id_file) as id_handle:
    wanted = set(line.rstrip("\n").split(None, 1)[0] for line in id_handle)
print("Found %i unique identifiers in %s" % (len(wanted), id_file))

records = (r for r in SeqIO.parse(input_file, "sff") if r.id in wanted)
count = SeqIO.write(records, output_file, "sff")
print("Saved %i records from %s to %s" % (count, input_file, output_file))
if count < len(wanted):
    print("Warning %i IDs not found in %s" % (len(wanted) - count, input_file))
\end{minted}

Note that we use a Python \verb|set| rather than a \verb|list|, since this makes testing membership faster.

As discussed in Section~\ref{sec:low-level-fasta-fastq}, for a large FASTA or FASTQ file for speed you would be better off not using the high-level \verb|SeqIO| interface, but working directly with strings. This next example shows how to do this with FASTQ files -- it is more complicated:

%not a doctest to avoid temp files being left behind, also no >>>
%makes it easier to copy and paste the example to a script file.
\begin{minted}{python}
from Bio.SeqIO.QualityIO import FastqGeneralIterator

input_file = "big_file.fastq"
id_file = "short_list.txt"
output_file = "short_list.fastq"

with open(id_file) as id_handle:
    # Taking first word on each line as an identifier
    wanted = set(line.rstrip("\n").split(None, 1)[0] for line in id_handle)
print("Found %i unique identifiers in %s" % (len(wanted), id_file))

count = 0
with open(input_file) as in_handle:
    with open(output_file, "w") as out_handle:
        for title, seq, qual in FastqGeneralIterator(in_handle):
            # The ID is the first word in the title line (after the @ sign):
            if title.split(None, 1)[0] in wanted:
                # This produces a standard 4-line FASTQ entry:
                out_handle.write("@%s\n%s\n+\n%s\n" % (title, seq, qual))
                count += 1
print("Saved %i records from %s to %s" % (count, input_file, output_file))
if count < len(wanted):
    print("Warning %i IDs not found in %s" % (len(wanted) - count, input_file))
\end{minted}

\subsection{Producing randomised genomes}

Let's suppose you are looking at genome sequence, hunting for some sequence feature -- maybe extreme local GC\% bias, or possible restriction digest sites. Once you've got your Python code working on the real genome it may be sensible to try running the same search on randomised versions of the same genome for statistical analysis (after all, any ``features'' you've found could be there just by chance).

For this discussion, we'll use the GenBank file for the pPCP1 plasmid from \textit{Yersinia pestis biovar Microtus}. The file is included with the Biopython unit tests under the GenBank folder, or you can get it from our website, \href{https://raw.githubusercontent.com/biopython/biopython/master/Tests/GenBank/NC_005816.gb} {\texttt{NC\_005816.gb}}. This file contains one and only one record, so we can read it in as a \verb|SeqRecord| using the \verb|Bio.SeqIO.read()| function:

%doctest ../Tests/GenBank
\begin{minted}{pycon}
>>> from Bio import SeqIO
>>> original_rec = SeqIO.read("NC_005816.gb", "genbank")
\end{minted}

So, how can we generate a shuffled version of the original sequence? I would use the built in Python \verb|random| module for this, in particular the function \verb|random.shuffle| -- but this works on a Python list. Our sequence is a \verb|Seq| object, so in order to shuffle it we need to turn it into a list:

%cont-doctest
\begin{minted}{pycon}
>>> import random
>>> nuc_list = list(original_rec.seq)
>>> random.shuffle(nuc_list)  # acts in situ!
\end{minted}

Now, in order to use \verb|Bio.SeqIO| to output the shuffled sequence, we need to construct a new \verb|SeqRecord| with a new \verb|Seq| object using this shuffled list. In order to do this, we need to turn the list of nucleotides (single letter strings) into a long string -- the standard Python way to do this is with the string object's join method.

%cont-doctest
\begin{minted}{pycon}
>>> from Bio.Seq import Seq
>>> from Bio.SeqRecord import SeqRecord
>>> shuffled_rec = SeqRecord(Seq("".join(nuc_list), original_rec.seq.alphabet),
...                          id="Shuffled", description="Based on %s" % original_rec.id)
...
\end{minted}

Let's put all these pieces together to make a complete Python script which generates a single FASTA file containing 30 randomly shuffled versions of the original sequence.
This first version just uses a big for loop and writes out the records one by one (using the \verb|SeqRecord|'s format method described in Section~\ref{sec:Bio.SeqIO-and-StringIO}):

\begin{minted}{python}
import random
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio import SeqIO

original_rec = SeqIO.read("NC_005816.gb", "genbank")

with open("shuffled.fasta", "w") as output_handle:
    for i in range(30):
        nuc_list = list(original_rec.seq)
        random.shuffle(nuc_list)
        shuffled_rec = SeqRecord(Seq("".join(nuc_list), original_rec.seq.alphabet),
                                 id="Shuffled%i" % (i+1),
                                 description="Based on %s" % original_rec.id)
        output_handle.write(shuffled_rec.format("fasta"))
\end{minted}

Personally I prefer the following version using a function to shuffle the record and a generator expression instead of the for loop:

\begin{minted}{python}
import random
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio import SeqIO

def make_shuffle_record(record, new_id):
    nuc_list = list(record.seq)
    random.shuffle(nuc_list)
    return SeqRecord(Seq("".join(nuc_list), record.seq.alphabet),
                     id=new_id, description="Based on %s" % record.id)

original_rec = SeqIO.read("NC_005816.gb", "genbank")
shuffled_recs = (make_shuffle_record(original_rec, "Shuffled%i" % (i+1))
                 for i in range(30))
SeqIO.write(shuffled_recs, "shuffled.fasta", "fasta")
\end{minted}

\subsection{Translating a FASTA file of CDS entries}
\label{sec:SeqIO-translate}

Suppose you've got an input file of CDS entries for some organism, and you want to generate a new FASTA file containing their protein sequences. i.e. Take each nucleotide sequence from the original file, and translate it. Back in Section~\ref{sec:translation} we saw how to use the \verb|Seq| object's \verb|translate| method, and the optional \verb|cds| argument which enables correct translation of alternative start codons.

We can combine this with \verb|Bio.SeqIO| as shown in the reverse complement example in Section~\ref{sec:SeqIO-reverse-complement}. The key point is that for each nucleotide \verb|SeqRecord|, we need to create a protein \verb|SeqRecord| - and take care of naming it.

You can write your own function to do this, choosing suitable protein identifiers for your sequences, and the appropriate genetic code. In this example we just use the default table and add a prefix to the identifier:

\begin{minted}{python}
from Bio.SeqRecord import SeqRecord

def make_protein_record(nuc_record):
    """Returns a new SeqRecord with the translated sequence (default table)."""
    return SeqRecord(seq = nuc_record.seq.translate(cds=True), \
                     id = "trans_" + nuc_record.id, \
                     description = "translation of CDS, using default table")
\end{minted}

We can then use this function to turn the input nucleotide records into protein records ready for output. An elegant and memory efficient way to do this is with a generator expression:

\begin{minted}{python}
from Bio import SeqIO

proteins = (make_protein_record(nuc_rec) for nuc_rec in \
            SeqIO.parse("coding_sequences.fasta", "fasta"))
SeqIO.write(proteins, "translations.fasta", "fasta")
\end{minted}

This should work on any FASTA file of complete coding sequences. If you are working on partial coding sequences, you may prefer to use \verb|nuc_record.seq.translate(to_stop=True)| in the example above, as this wouldn't check for a valid start codon etc.

\subsection{Making the sequences in a FASTA file upper case}

Often you'll get data from collaborators as FASTA files, and sometimes the sequences can be in a mixture of upper and lower case.
In some cases this is deliberate (e.g. lower case for poor quality regions), but usually it is not important. You may want to edit the file to make everything consistent (e.g. all upper case), and you can do this easily using the \verb|upper()| method of the \verb|SeqRecord| object (added in Biopython 1.55): \begin{minted}{python} from Bio import SeqIO records = (rec.upper() for rec in SeqIO.parse("mixed.fas", "fasta")) count = SeqIO.write(records, "upper.fas", "fasta") print("Converted %i records to upper case" % count) \end{minted} How does this work? The first line is just importing the \verb|Bio.SeqIO| module. The second line is the interesting bit -- this is a Python generator expression which gives an upper case version of each record parsed from the input file (\texttt{mixed.fas}). In the third line we give this generator expression to the \verb|Bio.SeqIO.write()| function and it saves the new upper cases records to our output file (\texttt{upper.fas}). The reason we use a generator expression (rather than a list or list comprehension) is this means only one record is kept in memory at a time. This can be really important if you are dealing with large files with millions of entries. \subsection{Sorting a sequence file} \label{sec:SeqIO-sort} Suppose you wanted to sort a sequence file by length (e.g. a set of contigs from an assembly), and you are working with a file format like FASTA or FASTQ which \verb|Bio.SeqIO| can read, write (and index). If the file is small enough, you can load it all into memory at once as a list of \verb|SeqRecord| objects, sort the list, and save it: \begin{minted}{python} from Bio import SeqIO records = list(SeqIO.parse("ls_orchid.fasta", "fasta")) records.sort(key=lambda r: len(r)) SeqIO.write(records, "sorted_orchids.fasta", "fasta") \end{minted} The only clever bit is specifying a comparison method for how to sort the records (here we sort them by length). If you wanted the longest records first, you could flip the comparison or use the reverse argument: \begin{minted}{python} from Bio import SeqIO records = list(SeqIO.parse("ls_orchid.fasta", "fasta")) records.sort(key=lambda r: -len(r)) SeqIO.write(records, "sorted_orchids.fasta", "fasta") \end{minted} Now that's pretty straight forward - but what happens if you have a very large file and you can't load it all into memory like this? For example, you might have some next-generation sequencing reads to sort by length. This can be solved using the \verb|Bio.SeqIO.index()| function. \begin{minted}{python} from Bio import SeqIO # Get the lengths and ids, and sort on length len_and_ids = sorted((len(rec), rec.id) for rec in SeqIO.parse("ls_orchid.fasta", "fasta")) ids = reversed([id for (length, id) in len_and_ids]) del len_and_ids # free this memory record_index = SeqIO.index("ls_orchid.fasta", "fasta") records = (record_index[id] for id in ids) SeqIO.write(records, "sorted.fasta", "fasta") \end{minted} First we scan through the file once using \verb|Bio.SeqIO.parse()|, recording the record identifiers and their lengths in a list of tuples. We then sort this list to get them in length order, and discard the lengths. Using this sorted list of identifiers \verb|Bio.SeqIO.index()| allows us to retrieve the records one by one, and we pass them to \verb|Bio.SeqIO.write()| for output. These examples all use \verb|Bio.SeqIO| to parse the records into \verb|SeqRecord| objects which are output using \verb|Bio.SeqIO.write()|. 
What if you want to sort a file format which \verb|Bio.SeqIO.write()| doesn't support, like the plain text SwissProt format? Here is an alternative solution using the \verb|get_raw()| method added to \verb|Bio.SeqIO.index()| in Biopython 1.54 (see Section~\ref{sec:seqio-index-getraw}). \begin{minted}{python} from Bio import SeqIO # Get the lengths and ids, and sort on length len_and_ids = sorted((len(rec), rec.id) for rec in SeqIO.parse("ls_orchid.fasta", "fasta")) ids = reversed([id for (length, id) in len_and_ids]) del len_and_ids # free this memory record_index = SeqIO.index("ls_orchid.fasta", "fasta") with open("sorted.fasta", "wb") as out_handle: for id in ids: out_handle.write(record_index.get_raw(id)) \end{minted} Note with Python 3 onwards, we have to open the file for writing in binary mode because the \verb|get_raw()| method returns bytes strings. As a bonus, because it doesn't parse the data into \verb|SeqRecord| objects a second time it should be faster. If you only want to use this with FASTA format, we can speed this up one step further by using the low-level FASTA parser to get the record identifiers and lengths: \begin{minted}{python} from Bio.SeqIO.FastaIO import SimpleFastaParser from Bio import SeqIO # Get the lengths and ids, and sort on length with open("ls_orchid.fasta") as in_handle: len_and_ids = sorted((len(seq), title.split(None, 1)[0]) for title, seq in SimpleFastaParser(in_handle)) ids = reversed([id for (length, id) in len_and_ids]) del len_and_ids # free this memory record_index = SeqIO.index("ls_orchid.fasta", "fasta") with open("sorted.fasta", "wb") as out_handle: for id in ids: out_handle.write(record_index.get_raw(id)) \end{minted} \subsection{Simple quality filtering for FASTQ files} \label{sec:FASTQ-filtering-example} The FASTQ file format was introduced at Sanger and is now widely used for holding nucleotide sequencing reads together with their quality scores. FASTQ files (and the related QUAL files) are an excellent example of per-letter-annotation, because for each nucleotide in the sequence there is an associated quality score. Any per-letter-annotation is held in a \verb|SeqRecord| in the \verb|letter_annotations| dictionary as a list, tuple or string (with the same number of elements as the sequence length). One common task is taking a large set of sequencing reads and filtering them (or cropping them) based on their quality scores. The following example is very simplistic, but should illustrate the basics of working with quality data in a \verb|SeqRecord| object. All we are going to do here is read in a file of FASTQ data, and filter it to pick out only those records whose PHRED quality scores are all above some threshold (here 20). For this example we'll use some real data downloaded from the ENA sequence read archive, \url{ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR020/SRR020192/SRR020192.fastq.gz} (2MB) which unzips to a 19MB file \texttt{SRR020192.fastq}. This is some Roche 454 GS FLX single end data from virus infected California sea lions (see \url{https://www.ebi.ac.uk/ena/data/view/SRS004476} for details). 
First, let's count the reads:

\begin{minted}{python}
from Bio import SeqIO

count = 0
for rec in SeqIO.parse("SRR020192.fastq", "fastq"):
    count += 1
print("%i reads" % count)
\end{minted}

\noindent Now let's do a simple filtering for a minimum PHRED quality of 20:

\begin{minted}{python}
from Bio import SeqIO

good_reads = (rec for rec in \
              SeqIO.parse("SRR020192.fastq", "fastq") \
              if min(rec.letter_annotations["phred_quality"]) >= 20)
count = SeqIO.write(good_reads, "good_quality.fastq", "fastq")
print("Saved %i reads" % count)
\end{minted}

\noindent This pulled out only $14580$ reads out of the $41892$ present. A more sensible thing to do would be to quality trim the reads, but this is intended as an example only.

FASTQ files can contain millions of entries, so it is best to avoid loading them all into memory at once. This example uses a generator expression, which means only one \verb|SeqRecord| is created at a time - avoiding any memory limitations.

Note that it would be faster to use the low-level \verb|FastqGeneralIterator| parser here (see Section~\ref{sec:low-level-fasta-fastq}), but that does not turn the quality string into integer scores.

\subsection{Trimming off primer sequences}
\label{sec:FASTQ-slicing-off-primer}

For this example we're going to pretend that \texttt{GATGACGGTGT} is a 5' primer sequence we want to look for in some FASTQ formatted read data. As in the example above, we'll use the \texttt{SRR020192.fastq} file downloaded from the ENA (\url{ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR020/SRR020192/SRR020192.fastq.gz}). By using the main \verb|Bio.SeqIO| interface, the same approach would work with any other supported file format (e.g. FASTA files). However, for large FASTQ files it would be faster to use the low-level \verb|FastqGeneralIterator| parser here (see the earlier example, and Section~\ref{sec:low-level-fasta-fastq}).

This code uses \verb|Bio.SeqIO| with a generator expression (to avoid loading all the sequences into memory at once), and the \verb|Seq| object's \verb|startswith| method to see if the read starts with the primer sequence:

\begin{minted}{python}
from Bio import SeqIO

primer_reads = (rec for rec in \
                SeqIO.parse("SRR020192.fastq", "fastq") \
                if rec.seq.startswith("GATGACGGTGT"))
count = SeqIO.write(primer_reads, "with_primer.fastq", "fastq")
print("Saved %i reads" % count)
\end{minted}

\noindent That should find $13819$ reads from \texttt{SRR020192.fastq} and save them to a new FASTQ file, \texttt{with\_primer.fastq}.

Now suppose that instead you wanted to make a FASTQ file containing these reads but with the primer sequence removed? That's just a small change as we can slice the \verb|SeqRecord| (see Section~\ref{sec:SeqRecord-slicing}) to remove the first eleven letters (the length of our primer):

\begin{minted}{python}
from Bio import SeqIO

trimmed_primer_reads = (rec[11:] for rec in \
                        SeqIO.parse("SRR020192.fastq", "fastq") \
                        if rec.seq.startswith("GATGACGGTGT"))
count = SeqIO.write(trimmed_primer_reads, "with_primer_trimmed.fastq", "fastq")
print("Saved %i reads" % count)
\end{minted}

\noindent Again, that should pull out the $13819$ reads from \texttt{SRR020192.fastq}, but this time strip off the first eleven characters, and save them to another new FASTQ file, \texttt{with\_primer\_trimmed.fastq}.

Now, suppose you want to create a new FASTQ file where these reads have their primer removed, but all the other reads are kept as they were?
If we still want to use a generator expression, it is probably clearest to define our own trim function:

\begin{minted}{python}
from Bio import SeqIO

def trim_primer(record, primer):
    if record.seq.startswith(primer):
        return record[len(primer):]
    else:
        return record

trimmed_reads = (trim_primer(record, "GATGACGGTGT") for record in \
                 SeqIO.parse("SRR020192.fastq", "fastq"))
count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq")
print("Saved %i reads" % count)
\end{minted}

This takes longer, as this time the output file contains all $41892$ reads. Again, we've used a generator expression to avoid any memory problems. You could alternatively use a generator function rather than a generator expression.

\begin{minted}{python}
from Bio import SeqIO

def trim_primers(records, primer):
    """Removes perfect primer sequences at start of reads.

    This is a generator function, the records argument should
    be a list or iterator returning SeqRecord objects.
    """
    len_primer = len(primer)  #cache this for later
    for record in records:
        if record.seq.startswith(primer):
            yield record[len_primer:]
        else:
            yield record

original_reads = SeqIO.parse("SRR020192.fastq", "fastq")
trimmed_reads = trim_primers(original_reads, "GATGACGGTGT")
count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq")
print("Saved %i reads" % count)
\end{minted}

This form is more flexible if you want to do something more complicated where only some of the records are retained -- as shown in the next example.

\subsection{Trimming off adaptor sequences}
\label{sec:FASTQ-slicing-off-adaptor}

This is essentially a simple extension to the previous example. We are going to pretend \texttt{GATGACGGTGT} is an adaptor sequence in some FASTQ formatted read data, again the \texttt{SRR020192.fastq} file from the ENA (\url{ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR020/SRR020192/SRR020192.fastq.gz}).

This time however, we will look for the sequence \emph{anywhere} in the reads, not just at the very beginning:

\begin{minted}{python}
from Bio import SeqIO

def trim_adaptors(records, adaptor):
    """Trims perfect adaptor sequences.

    This is a generator function, the records argument should
    be a list or iterator returning SeqRecord objects.
    """
    len_adaptor = len(adaptor)  #cache this for later
    for record in records:
        index = record.seq.find(adaptor)
        if index == -1:
            #adaptor not found, so won't trim
            yield record
        else:
            #trim off the adaptor
            yield record[index+len_adaptor:]

original_reads = SeqIO.parse("SRR020192.fastq", "fastq")
trimmed_reads = trim_adaptors(original_reads, "GATGACGGTGT")
count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq")
print("Saved %i reads" % count)
\end{minted}

Because we are using a FASTQ input file in this example, the \verb|SeqRecord| objects have per-letter-annotation for the quality scores. By slicing the \verb|SeqRecord| object the appropriate scores are used on the trimmed records, so we can output them as a FASTQ file too.

Compared to the output of the previous example where we only looked for a primer/adaptor at the start of each read, you may find some of the trimmed reads are quite short after trimming (e.g. if the adaptor was found in the middle rather than near the start). So, let's add a minimum length requirement as well:

\begin{minted}{python}
from Bio import SeqIO

def trim_adaptors(records, adaptor, min_len):
    """Trims perfect adaptor sequences, checks read length.

    This is a generator function, the records argument should
    be a list or iterator returning SeqRecord objects.
""" len_adaptor = len(adaptor) #cache this for later for record in records: len_record = len(record) #cache this for later if len(record) < min_len: #Too short to keep continue index = record.seq.find(adaptor) if index == -1: #adaptor not found, so won't trim yield record elif len_record - index - len_adaptor >= min_len: #after trimming this will still be long enough yield record[index+len_adaptor:] original_reads = SeqIO.parse("SRR020192.fastq", "fastq") trimmed_reads = trim_adaptors(original_reads, "GATGACGGTGT", 100) count = SeqIO.write(trimmed_reads, "trimmed.fastq", "fastq") print("Saved %i reads" % count) \end{minted} By changing the format names, you could apply this to FASTA files instead. This code also could be extended to do a fuzzy match instead of an exact match (maybe using a pairwise alignment, or taking into account the read quality scores), but that will be much slower. \subsection{Converting FASTQ files} \label{sec:SeqIO-fastq-conversion} Back in Section~\ref{sec:SeqIO-conversion} we showed how to use \verb|Bio.SeqIO| to convert between two file formats. Here we'll go into a little more detail regarding FASTQ files which are used in second generation DNA sequencing. Please refer to Cock \textit{et al.} (2009) \cite{cock2010} for a longer description. FASTQ files store both the DNA sequence (as a string) and the associated read qualities. PHRED scores (used in most FASTQ files, and also in QUAL files, ACE files and SFF files) have become a \textit{de facto} standard for representing the probability of a sequencing error (here denoted by $P_e$) at a given base using a simple base ten log transformation: \begin{equation} Q_{\textrm{PHRED}} = - 10 \times \textrm{log}_{10} ( P_e ) \end{equation} This means a wrong read ($P_e = 1$) gets a PHRED quality of $0$, while a very good read like $P_e = 0.00001$ gets a PHRED quality of $50$. While for raw sequencing data qualities higher than this are rare, with post processing such as read mapping or assembly, qualities of up to about $90$ are possible (indeed, the MAQ tool allows for PHRED scores in the range 0 to 93 inclusive). The FASTQ format has the potential to become a \textit{de facto} standard for storing the letters and quality scores for a sequencing read in a single plain text file. The only fly in the ointment is that there are at least three versions of the FASTQ format which are incompatible and difficult to distinguish... \begin{enumerate} \item The original Sanger FASTQ format uses PHRED qualities encoded with an ASCII offset of 33. The NCBI are using this format in their Short Read Archive. We call this the \texttt{fastq} (or \texttt{fastq-sanger}) format in \verb|Bio.SeqIO|. \item Solexa (later bought by Illumina) introduced their own version using Solexa qualities encoded with an ASCII offset of 64. We call this the \texttt{fastq-solexa} format. \item Illumina pipeline 1.3 onwards produces FASTQ files with PHRED qualities (which is more consistent), but encoded with an ASCII offset of 64. We call this the \texttt{fastq-illumina} format. \end{enumerate} The Solexa quality scores are defined using a different log transformation: \begin{equation} Q_{\textrm{Solexa}} = - 10 \times \textrm{log}_{10} \left( \frac{P_e}{1-P_e} \right) \end{equation} Given Solexa/Illumina have now moved to using PHRED scores in version 1.3 of their pipeline, the Solexa quality scores will gradually fall out of use. 
If you equate the error estimates ($P_e$) these two equations allow conversion between the two scoring systems - and Biopython includes functions to do this in the \verb|Bio.SeqIO.QualityIO| module, which are called if you use \verb|Bio.SeqIO| to convert an old Solexa/Illumina file into a standard Sanger FASTQ file:

\begin{minted}{python}
from Bio import SeqIO

SeqIO.convert("solexa.fastq", "fastq-solexa", "standard.fastq", "fastq")
\end{minted}

If you want to convert a new Illumina 1.3+ FASTQ file, all that gets changed is the ASCII offset because although encoded differently the scores are all PHRED qualities:

\begin{minted}{python}
from Bio import SeqIO

SeqIO.convert("illumina.fastq", "fastq-illumina", "standard.fastq", "fastq")
\end{minted}

Note that using \verb|Bio.SeqIO.convert()| like this is \emph{much} faster than combining \verb|Bio.SeqIO.parse()| and \verb|Bio.SeqIO.write()| because optimised code is used for converting between FASTQ variants (and also for FASTQ to FASTA conversion).

For good quality reads, PHRED and Solexa scores are approximately equal, which means that, since both the \texttt{fastq-solexa} and \texttt{fastq-illumina} formats use an ASCII offset of 64, the files are almost the same. This was a deliberate design choice by Illumina, meaning applications expecting the old \texttt{fastq-solexa} style files will probably be OK using the newer \texttt{fastq-illumina} files (on good data). Of course, both variants are very different from the original FASTQ standard as used by Sanger, the NCBI, and elsewhere (format name \texttt{fastq} or \texttt{fastq-sanger}).

For more details, see the built in help (also \href{http://www.biopython.org/DIST/docs/api/Bio.SeqIO.QualityIO-module.html}{online}):

\begin{minted}{pycon}
>>> from Bio.SeqIO import QualityIO
>>> help(QualityIO)
...
\end{minted}

\subsection{Converting FASTA and QUAL files into FASTQ files}
\label{sec:SeqIO-fasta-qual-conversion}

FASTQ files hold \emph{both} sequences and their quality strings. FASTA files hold \emph{just} sequences, while QUAL files hold \emph{just} the qualities. Therefore a single FASTQ file can be converted to or from \emph{paired} FASTA and QUAL files.

Going from FASTQ to FASTA is easy:

\begin{minted}{python}
from Bio import SeqIO

SeqIO.convert("example.fastq", "fastq", "example.fasta", "fasta")
\end{minted}

Going from FASTQ to QUAL is also easy:

\begin{minted}{python}
from Bio import SeqIO

SeqIO.convert("example.fastq", "fastq", "example.qual", "qual")
\end{minted}

However, the reverse is a little more tricky. You can use \verb|Bio.SeqIO.parse()| to iterate over the records in a \emph{single} file, but in this case we have two input files. There are several strategies possible, but assuming that the two files are really paired the most memory efficient way is to loop over both together. The code is a little fiddly, so we provide a function called \verb|PairedFastaQualIterator| in the \verb|Bio.SeqIO.QualityIO| module to do this. This takes two handles (the FASTA file and the QUAL file) and returns a \verb|SeqRecord| iterator:

\begin{minted}{python}
from Bio.SeqIO.QualityIO import PairedFastaQualIterator

for record in PairedFastaQualIterator(open("example.fasta"), open("example.qual")):
    print(record)
\end{minted}

This function will check that the FASTA and QUAL files are consistent (e.g. the records are in the same order, and have the same sequence length).
You can combine this with the \verb|Bio.SeqIO.write()| function to convert a pair of FASTA and QUAL files into a single FASTQ file:

\begin{minted}{python}
from Bio import SeqIO
from Bio.SeqIO.QualityIO import PairedFastaQualIterator

with open("example.fasta") as f_handle, open("example.qual") as q_handle:
    records = PairedFastaQualIterator(f_handle, q_handle)
    count = SeqIO.write(records, "temp.fastq", "fastq")
print("Converted %i records" % count)
\end{minted}

\subsection{Indexing a FASTQ file}
\label{sec:fastq-indexing}

FASTQ files are usually very large, with millions of reads in them. Due to the sheer amount of data, you can't load all the records into memory at once. This is why the examples above (filtering and trimming) iterate over the file looking at just one \verb|SeqRecord| at a time.

However, sometimes you can't use a big loop or an iterator - you may need random access to the reads. Here the \verb|Bio.SeqIO.index()| function may prove very helpful, as it allows you to access any read in the FASTQ file by its name (see Section~\ref{sec:SeqIO-index}).

Again we'll use the \texttt{SRR020192.fastq} file from the ENA (\url{ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR020/SRR020192/SRR020192.fastq.gz}), although this is actually quite a small FASTQ file with less than $50,000$ reads:

\begin{minted}{pycon}
>>> from Bio import SeqIO
>>> fq_dict = SeqIO.index("SRR020192.fastq", "fastq")
>>> len(fq_dict)
41892
>>> list(fq_dict.keys())[:4]
['SRR020192.38240', 'SRR020192.23181', 'SRR020192.40568', 'SRR020192.23186']
>>> fq_dict["SRR020192.23186"].seq
Seq('GTCCCAGTATTCGGATTTGTCTGCCAAAACAATGAAATTGACACAGTTTACAAC...CCG', SingleLetterAlphabet())
\end{minted}

When testing this on a FASTQ file with seven million reads, indexing took about a minute, but record access was almost instant.

The sister function \verb|Bio.SeqIO.index_db()| lets you save the index to an SQLite3 database file for near instantaneous reuse - see Section~\ref{sec:SeqIO-index} for more details.

The example in Section~\ref{sec:SeqIO-sort} shows how you can use the \verb|Bio.SeqIO.index()| function to sort a large FASTA file -- this could also be used on FASTQ files.

\subsection{Converting SFF files}
\label{sec:SeqIO-sff-conversion}

If you work with 454 (Roche) sequence data, you will probably have access to the raw data as a Standard Flowgram Format (SFF) file. This contains the sequence reads (called bases) with quality scores and the original flow information.

A common task is to convert from SFF to a pair of FASTA and QUAL files, or to a single FASTQ file. These operations are trivial using the \verb|Bio.SeqIO.convert()| function (see Section~\ref{sec:SeqIO-conversion}):

\begin{minted}{pycon}
>>> from Bio import SeqIO
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff", "reads.fasta", "fasta")
10
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff", "reads.qual", "qual")
10
>>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff", "reads.fastq", "fastq")
10
\end{minted}

\noindent Remember the convert function returns the number of records, in this example just ten. This will give you the \emph{untrimmed} reads, where the leading and trailing poor quality sequence or adaptor will be in lower case.
If you want the \emph{trimmed} reads (using the clipping information recorded within the SFF file) use this: \begin{minted}{pycon} >>> from Bio import SeqIO >>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff-trim", "trimmed.fasta", "fasta") 10 >>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff-trim", "trimmed.qual", "qual") 10 >>> SeqIO.convert("E3MFGYR02_random_10_reads.sff", "sff-trim", "trimmed.fastq", "fastq") 10 \end{minted} If you run Linux, you could ask Roche for a copy of their ``off instrument'' tools (often referred to as the Newbler tools). This offers an alternative way to do SFF to FASTA or QUAL conversion at the command line (but currently FASTQ output is not supported), e.g. \begin{minted}{console} $ sffinfo -seq -notrim E3MFGYR02_random_10_reads.sff > reads.fasta $ sffinfo -qual -notrim E3MFGYR02_random_10_reads.sff > reads.qual $ sffinfo -seq -trim E3MFGYR02_random_10_reads.sff > trimmed.fasta $ sffinfo -qual -trim E3MFGYR02_random_10_reads.sff > trimmed.qual \end{minted} \noindent The way Biopython uses mixed case sequence strings to represent the trimming points deliberately mimics what the Roche tools do. For more information on the Biopython SFF support, consult the built in help: \begin{minted}{pycon} >>> from Bio.SeqIO import SffIO >>> help(SffIO) ... \end{minted} \subsection{Identifying open reading frames} A very simplistic first step at identifying possible genes is to look for open reading frames (ORFs). By this we mean look in all six frames for long regions without stop codons -- an ORF is just a region of nucleotides with no in frame stop codons. Of course, to find a gene you would also need to worry about locating a start codon, possible promoters -- and in Eukaryotes there are introns to worry about too. However, this approach is still useful in viruses and Prokaryotes. To show how you might approach this with Biopython, we'll need a sequence to search, and as an example we'll again use the bacterial plasmid -- although this time we'll start with a plain FASTA file with no pre-marked genes: \href{https://raw.githubusercontent.com/biopython/biopython/master/Tests/GenBank/NC_005816.fna} {\texttt{NC\_005816.fna}}. This is a bacterial sequence, so we'll want to use NCBI codon table 11 (see Section~\ref{sec:translation} about translation). %doctest ../Tests/GenBank \begin{minted}{pycon} >>> from Bio import SeqIO >>> record = SeqIO.read("NC_005816.fna", "fasta") >>> table = 11 >>> min_pro_len = 100 \end{minted} Here is a neat trick using the \verb|Seq| object's \verb|split| method to get a list of all the possible ORF translations in the six reading frames: %cont-doctest \begin{minted}{pycon} >>> for strand, nuc in [(+1, record.seq), (-1, record.seq.reverse_complement())]: ... for frame in range(3): ... length = 3 * ((len(record)-frame) // 3) #Multiple of three ... for pro in nuc[frame:frame+length].translate(table).split("*"): ... if len(pro) >= min_pro_len: ... print("%s...%s - length %i, strand %i, frame %i" \ ... 
% (pro[:30], pro[-3:], len(pro), strand, frame)) GCLMKKSSIVATIITILSGSANAASSQLIP...YRF - length 315, strand 1, frame 0 KSGELRQTPPASSTLHLRLILQRSGVMMEL...NPE - length 285, strand 1, frame 1 GLNCSFFSICNWKFIDYINRLFQIIYLCKN...YYH - length 176, strand 1, frame 1 VKKILYIKALFLCTVIKLRRFIFSVNNMKF...DLP - length 165, strand 1, frame 1 NQIQGVICSPDSGEFMVTFETVMEIKILHK...GVA - length 355, strand 1, frame 2 RRKEHVSKKRRPQKRPRRRRFFHRLRPPDE...PTR - length 128, strand 1, frame 2 TGKQNSCQMSAIWQLRQNTATKTRQNRARI...AIK - length 100, strand 1, frame 2 QGSGYAFPHASILSGIAMSHFYFLVLHAVK...CSD - length 114, strand -1, frame 0 IYSTSEHTGEQVMRTLDEVIASRSPESQTR...FHV - length 111, strand -1, frame 0 WGKLQVIGLSMWMVLFSQRFDDWLNEQEDA...ESK - length 125, strand -1, frame 1 RGIFMSDTMVVNGSGGVPAFLFSGSTLSSY...LLK - length 361, strand -1, frame 1 WDVKTVTGVLHHPFHLTFSLCPEGATQSGR...VKR - length 111, strand -1, frame 1 LSHTVTDFTDQMAQVGLCQCVNVFLDEVTG...KAA - length 107, strand -1, frame 2 RALTGLSAPGIRSQTSCDRLRELRYVPVSL...PLQ - length 119, strand -1, frame 2 \end{minted} Note that here we are counting the frames from the 5' end (start) of \emph{each} strand. It is sometimes easier to always count from the 5' end (start) of the \emph{forward} strand. You could easily edit the above loop based code to build up a list of the candidate proteins, or convert this to a list comprehension. Now, one thing this code doesn't do is keep track of where the proteins are. You could tackle this in several ways. For example, the following code tracks the locations in terms of the protein counting, and converts back to the parent sequence by multiplying by three, then adjusting for the frame and strand: \begin{minted}{python} from Bio import SeqIO record = SeqIO.read("NC_005816.gb","genbank") table = 11 min_pro_len = 100 def find_orfs_with_trans(seq, trans_table, min_protein_length): answer = [] seq_len = len(seq) for strand, nuc in [(+1, seq), (-1, seq.reverse_complement())]: for frame in range(3): trans = str(nuc[frame:].translate(trans_table)) trans_len = len(trans) aa_start = 0 aa_end = 0 while aa_start < trans_len: aa_end = trans.find("*", aa_start) if aa_end == -1: aa_end = trans_len if aa_end-aa_start >= min_protein_length: if strand == 1: start = frame+aa_start*3 end = min(seq_len,frame+aa_end*3+3) else: start = seq_len-frame-aa_end*3-3 end = seq_len-frame-aa_start*3 answer.append((start, end, strand, trans[aa_start:aa_end])) aa_start = aa_end+1 answer.sort() return answer orf_list = find_orfs_with_trans(record.seq, table, min_pro_len) for start, end, strand, pro in orf_list: print("%s...%s - length %i, strand %i, %i:%i" \ % (pro[:30], pro[-3:], len(pro), strand, start, end)) \end{minted} \noindent And the output: \begin{minted}{text} NQIQGVICSPDSGEFMVTFETVMEIKILHK...GVA - length 355, strand 1, 41:1109 WDVKTVTGVLHHPFHLTFSLCPEGATQSGR...VKR - length 111, strand -1, 491:827 KSGELRQTPPASSTLHLRLILQRSGVMMEL...NPE - length 285, strand 1, 1030:1888 RALTGLSAPGIRSQTSCDRLRELRYVPVSL...PLQ - length 119, strand -1, 2830:3190 RRKEHVSKKRRPQKRPRRRRFFHRLRPPDE...PTR - length 128, strand 1, 3470:3857 GLNCSFFSICNWKFIDYINRLFQIIYLCKN...YYH - length 176, strand 1, 4249:4780 RGIFMSDTMVVNGSGGVPAFLFSGSTLSSY...LLK - length 361, strand -1, 4814:5900 VKKILYIKALFLCTVIKLRRFIFSVNNMKF...DLP - length 165, strand 1, 5923:6421 LSHTVTDFTDQMAQVGLCQCVNVFLDEVTG...KAA - length 107, strand -1, 5974:6298 GCLMKKSSIVATIITILSGSANAASSQLIP...YRF - length 315, strand 1, 6654:7602 IYSTSEHTGEQVMRTLDEVIASRSPESQTR...FHV - length 111, strand -1, 7788:8124 WGKLQVIGLSMWMVLFSQRFDDWLNEQEDA...ESK - length 125, 
strand -1, 8087:8465
TGKQNSCQMSAIWQLRQNTATKTRQNRARI...AIK - length 100, strand 1, 8741:9044
QGSGYAFPHASILSGIAMSHFYFLVLHAVK...CSD - length 114, strand -1, 9264:9609
\end{minted}

If you comment out the sort statement, then the protein sequences will be shown in the same order as before, so you can check this is doing the same thing. Here we have sorted them by location to make it easier to compare to the actual annotation in the GenBank file (as visualised in Section~\ref{sec:gd_nice_example}).

If however all you want to find are the locations of the open reading frames, then it is a waste of time to translate every possible codon, including doing the reverse complement to search the reverse strand too. All you need to do is search for the possible stop codons (and their reverse complements). Using regular expressions is an obvious approach here (see the Python module \verb|re|). These are an extremely powerful (but rather complex) way of describing search strings, which are supported in lots of programming languages, and also in command line tools like \texttt{grep}. You can find whole books about this topic!

\section{Sequence parsing plus simple plots}
\label{sec:sequence-parsing-plus-pylab}

This section shows some more examples of sequence parsing, using the \verb|Bio.SeqIO| module described in Chapter~\ref{chapter:seqio}, plus the Python library matplotlib's \verb|pylab| plotting interface (see \href{https://matplotlib.org}{the matplotlib website for a tutorial}). Note that to follow these examples you will need matplotlib installed - but without it you can still try the data parsing bits.

\subsection{Histogram of sequence lengths}

There are lots of times when you might want to visualise the distribution of sequence lengths in a dataset -- for example the range of contig sizes in a genome assembly project. In this example we'll reuse our orchid FASTA file \href{https://raw.githubusercontent.com/biopython/biopython/master/Doc/examples/ls_orchid.fasta}{\texttt{ls\_orchid.fasta}} which has only 94 sequences.

First of all, we will use \verb|Bio.SeqIO| to parse the FASTA file and compile a list of all the sequence lengths. You could do this with a for loop, but I find a list comprehension more pleasing:

\begin{minted}{pycon}
>>> from Bio import SeqIO
>>> sizes = [len(rec) for rec in SeqIO.parse("ls_orchid.fasta", "fasta")]
>>> len(sizes), min(sizes), max(sizes)
(94, 572, 789)
>>> sizes
[740, 753, 748, 744, 733, 718, 730, 704, 740, 709, 700, 726, ..., 592]
\end{minted}

Now that we have the lengths of all the genes (as a list of integers), we can use the matplotlib histogram function to display it.

\begin{minted}{python}
from Bio import SeqIO
sizes = [len(rec) for rec in SeqIO.parse("ls_orchid.fasta", "fasta")]

import pylab
pylab.hist(sizes, bins=20)
pylab.title("%i orchid sequences\nLengths %i to %i" \
            % (len(sizes),min(sizes),max(sizes)))
pylab.xlabel("Sequence length (bp)")
pylab.ylabel("Count")
pylab.show()
\end{minted}

%
% Have an HTML version and a PDF version to display nicely...
%
\begin{htmlonly}
\noindent That should pop up a new window containing the following graph:

\imgsrc[width=600, height=450]{images/hist_plot.png}
\end{htmlonly}
%
% Now the PDF equivalent where we cannot always expect the figure
% to be positioned right next to the text, so we'll use a reference.
%
\begin{latexonly}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{images/hist_plot.png}
\caption{Histogram of orchid sequence lengths.}
\label{fig:seq-len-hist}
\end{figure}

\noindent That should pop up a new window containing the graph shown in Figure~\ref{fig:seq-len-hist}.
\end{latexonly}
%
% The text now continues...
%
Notice that most of these orchid sequences are about $740$ bp long, and there could be two distinct classes of sequence here with a subset of shorter sequences.

\emph{Tip:} Rather than using \verb|pylab.show()| to show the plot in a window, you can also use \verb|pylab.savefig(...)| to save the figure to a file (e.g. as a PNG or PDF).

\subsection{Plot of sequence GC\%}

Another easily calculated quantity of a nucleotide sequence is the GC\%. You might want to look at the GC\% of all the genes in a bacterial genome for example, and investigate any outliers which could have been recently acquired by horizontal gene transfer. Again, for this example we'll reuse our orchid FASTA file \href{https://raw.githubusercontent.com/biopython/biopython/master/Doc/examples/ls_orchid.fasta}{\texttt{ls\_orchid.fasta}}.

First of all, we will use \verb|Bio.SeqIO| to parse the FASTA file and compile a list of all the GC percentages. Again, you could do this with a for loop, but I prefer this:

\begin{minted}{python}
from Bio import SeqIO
from Bio.SeqUtils import GC

gc_values = sorted(GC(rec.seq) for rec in SeqIO.parse("ls_orchid.fasta", "fasta"))
\end{minted}

Having read in each sequence and calculated the GC\%, we then sorted them into ascending order. Now we'll take this list of floating point values and plot them with matplotlib:

\begin{minted}{python}
import pylab
pylab.plot(gc_values)
pylab.title("%i orchid sequences\nGC%% %0.1f to %0.1f" \
            % (len(gc_values),min(gc_values),max(gc_values)))
pylab.xlabel("Genes")
pylab.ylabel("GC%")
pylab.show()
\end{minted}

%
% Have an HTML version and a PDF version to display nicely...
%
\begin{htmlonly}
\noindent As in the previous example, that should pop up a new window containing a graph:

\imgsrc[width=600, height=450]{images/gc_plot.png}
\end{htmlonly}
%
% Now the PDF equivalent where we cannot always expect the figure
% to be positioned right next to the text, so we'll use a reference.
%
\begin{latexonly}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{images/gc_plot.png}
\caption{Plot of GC\% for the orchid sequences.}
\label{fig:seq-gc-plot}
\end{figure}

\noindent As in the previous example, that should pop up a new window with the graph shown in Figure~\ref{fig:seq-gc-plot}.
\end{latexonly}
%
% The text now continues...
%
If you tried this on the full set of genes from one organism, you'd probably get a much smoother plot than this.

\subsection{Nucleotide dot plots}

A dot plot is a way of visually comparing two nucleotide sequences for similarity to each other. A sliding window is used to compare short sub-sequences to each other, often with a mis-match threshold. Here for simplicity we'll only look for perfect matches (shown in black
\begin{latexonly}
in Figure~\ref{fig:nuc-dot-plot}).
\end{latexonly}
\begin{htmlonly}
in the plot below).
\end{htmlonly}
%
% Now the PDF equivalent where we cannot always expect the figure
% to be positioned right next to the text, so we'll use a reference.
% \begin{latexonly} \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{images/dot_plot.png} \caption{Nucleotide dot plot of two orchid sequence lengths (using pylab's imshow function).} \label{fig:nuc-dot-plot} \end{figure} \end{latexonly} To start off, we'll need two sequences. For the sake of argument, we'll just take the first two from our orchid FASTA file \href{https://raw.githubusercontent.com/biopython/biopython/master/Doc/examples/ls_orchid.fasta}\texttt{ls\_orchid.fasta}: \begin{minted}{python} from Bio import SeqIO with open("ls_orchid.fasta") as in_handle: record_iterator = SeqIO.parse(in_handle, "fasta") rec_one = next(record_iterator) rec_two = next(record_iterator) \end{minted} We're going to show two approaches. Firstly, a simple naive implementation which compares all the window sized sub-sequences to each other to compiles a similarity matrix. You could construct a matrix or array object, but here we just use a list of lists of booleans created with a nested list comprehension: \begin{minted}{python} window = 7 seq_one = str(rec_one.seq).upper() seq_two = str(rec_two.seq).upper() data = [[(seq_one[i:i + window] != seq_two[j:j + window]) for j in range(len(seq_one) - window)] for i in range(len(seq_two) - window)] \end{minted} Note that we have \emph{not} checked for reverse complement matches here. Now we'll use the matplotlib's \verb|pylab.imshow()| function to display this data, first requesting the gray color scheme so this is done in black and white: \begin{minted}{python} import pylab pylab.gray() pylab.imshow(data) pylab.xlabel("%s (length %i bp)" % (rec_one.id, len(rec_one))) pylab.ylabel("%s (length %i bp)" % (rec_two.id, len(rec_two))) pylab.title("Dot plot using window size %i\n(allowing no mis-matches)" % window) pylab.show() \end{minted} %pylab.savefig("dot_plot.png", dpi=75) %pylab.savefig("dot_plot.pdf") % % Have a HTML version and a PDF version to display nicely... % \begin{htmlonly} \noindent That should pop up a new window containing a graph like this: \imgsrc[width=600, height=450]{images/dot_plot.png} \end{htmlonly} \begin{latexonly} \noindent That should pop up a new window showing the graph in Figure~\ref{fig:nuc-dot-plot}. \end{latexonly} % % The text now continues... % As you might have expected, these two sequences are very similar with a partial line of window sized matches along the diagonal. There are no off diagonal matches which would be indicative of inversions or other interesting events. The above code works fine on small examples, but there are two problems applying this to larger sequences, which we will address below. First off all, this brute force approach to the all against all comparisons is very slow. Instead, we'll compile dictionaries mapping the window sized sub-sequences to their locations, and then take the set intersection to find those sub-sequences found in both sequences. This uses more memory, but is \emph{much} faster. Secondly, the \verb|pylab.imshow()| function is limited in the size of matrix it can display. As an alternative, we'll use the \verb|pylab.scatter()| function. 
We start by creating dictionaries mapping the window-sized sub-sequences to locations: \begin{minted}{python} window = 7 dict_one = {} dict_two = {} for (seq, section_dict) in [(str(rec_one.seq).upper(), dict_one), (str(rec_two.seq).upper(), dict_two)]: for i in range(len(seq)-window): section = seq[i:i+window] try: section_dict[section].append(i) except KeyError: section_dict[section] = [i] #Now find any sub-sequences found in both sequences #(Python 2.3 would require slightly different code here) matches = set(dict_one).intersection(dict_two) print("%i unique matches" % len(matches)) \end{minted} \noindent In order to use the \verb|pylab.scatter()| we need separate lists for the $x$ and $y$ co-ordinates: \begin{minted}{python} # Create lists of x and y co-ordinates for scatter plot x = [] y = [] for section in matches: for i in dict_one[section]: for j in dict_two[section]: x.append(i) y.append(j) \end{minted} \noindent We are now ready to draw the revised dot plot as a scatter plot: \begin{minted}{python} import pylab pylab.cla() #clear any prior graph pylab.gray() pylab.scatter(x,y) pylab.xlim(0, len(rec_one)-window) pylab.ylim(0, len(rec_two)-window) pylab.xlabel("%s (length %i bp)" % (rec_one.id, len(rec_one))) pylab.ylabel("%s (length %i bp)" % (rec_two.id, len(rec_two))) pylab.title("Dot plot using window size %i\n(allowing no mis-matches)" % window) pylab.show() \end{minted} %pylab.savefig("dot_plot.png", dpi=75) %pylab.savefig("dot_plot.pdf") % % Have a HTML version and a PDF version to display nicely... % \begin{htmlonly} \noindent That should pop up a new window containing a graph like this: \imgsrc[width=600, height=450]{images/dot_plot_scatter.png} \end{htmlonly} \begin{latexonly} \noindent That should pop up a new window showing the graph in Figure~\ref{fig:nuc-dot-plot-scatter}. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{images/dot_plot_scatter.png} \caption{Nucleotide dot plot of two orchid sequence lengths (using pylab's scatter function).} \label{fig:nuc-dot-plot-scatter} \end{figure}\end{latexonly} Personally I find this second plot much easier to read! Again note that we have \emph{not} checked for reverse complement matches here -- you could extend this example to do this, and perhaps plot the forward matches in one color and the reverse matches in another. \subsection{Plotting the quality scores of sequencing read data} If you are working with second generation sequencing data, you may want to try plotting the quality data. Here is an example using two FASTQ files containing paired end reads, \texttt{SRR001666\_1.fastq} for the forward reads, and \texttt{SRR001666\_2.fastq} for the reverse reads. These were downloaded from the ENA sequence read archive FTP site (\url{ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR001/SRR001666/SRR001666_1.fastq.gz} and \url{ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR001/SRR001666/SRR001666_2.fastq.gz}), and are from \textit{E. coli} -- see \url{https://www.ebi.ac.uk/ena/data/view/SRR001666} for details. %Originally from ftp://ftp.ncbi.nlm.nih.gov/sra/static/SRX000/SRX000430/ In the following code the \verb|pylab.subplot(...)| function is used in order to show the forward and reverse qualities on two subplots, side by side. There is also a little bit of code to only plot the first fifty reads. 
\begin{minted}{python} import pylab from Bio import SeqIO for subfigure in [1,2]: filename = "SRR001666_%i.fastq" % subfigure pylab.subplot(1, 2, subfigure) for i,record in enumerate(SeqIO.parse(filename, "fastq")): if i >= 50 : break #trick! pylab.plot(record.letter_annotations["phred_quality"]) pylab.ylim(0,45) pylab.ylabel("PHRED quality score") pylab.xlabel("Position") pylab.savefig("SRR001666.png") print("Done") \end{minted} You should note that we are using the \verb|Bio.SeqIO| format name \texttt{fastq} here because the NCBI has saved these reads using the standard Sanger FASTQ format with PHRED scores. However, as you might guess from the read lengths, this data was from an Illumina Genome Analyzer and was probably originally in one of the two Solexa/Illumina FASTQ variant file formats instead. This example uses the \verb|pylab.savefig(...)| function instead of \verb|pylab.show(...)|, but as mentioned before both are useful. \begin{latexonly} \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{images/SRR001666.png} \caption{Quality plot for some paired end reads.} \label{fig:paired-end-qual-plot} \end{figure} The result is shown in Figure~\ref{fig:paired-end-qual-plot}. \end{latexonly} \begin{htmlonly} Here is the result: %Blank lines here are important! \imgsrc[width=600, height=600]{images/SRR001666.png} \end{htmlonly} \section{Dealing with alignments} This section can been seen as a follow on to Chapter~\ref{chapter:align}. \subsection{Calculating summary information} \label{sec:summary_info} Once you have an alignment, you are very likely going to want to find out information about it. Instead of trying to have all of the functions that can generate information about an alignment in the alignment object itself, we've tried to separate out the functionality into separate classes, which act on the alignment. Getting ready to calculate summary information about an object is quick to do. Let's say we've got an alignment object called \verb|alignment|, for example read in using \verb|Bio.AlignIO.read(...)| as described in Chapter~\ref{chapter:align}. All we need to do to get an object that will calculate summary information is: \begin{minted}{python} from Bio.Align import AlignInfo summary_align = AlignInfo.SummaryInfo(alignment) \end{minted} The \verb|summary_align| object is very useful, and will do the following neat things for you: \begin{enumerate} \item Calculate a quick consensus sequence -- see section~\ref{sec:consensus} \item Get a position specific score matrix for the alignment -- see section~\ref{sec:pssm} \item Calculate the information content for the alignment -- see section~\ref{sec:getting_info_content} \item Generate information on substitutions in the alignment -- section~\ref{sec:sub_matrix} details using this to generate a substitution matrix. \end{enumerate} \subsection{Calculating a quick consensus sequence} \label{sec:consensus} The \verb|SummaryInfo| object, described in section~\ref{sec:summary_info}, provides functionality to calculate a quick consensus of an alignment. Assuming we've got a \verb|SummaryInfo| object called \verb|summary_align| we can calculate a consensus by doing: \begin{minted}{python} consensus = summary_align.dumb_consensus() \end{minted} As the name suggests, this is a really simple consensus calculator, and will just add up all of the residues at each point in the consensus, and if the most common value is higher than some threshold value will add the common residue to the consensus. 
If it doesn't reach the threshold, it adds an ambiguity character to the consensus. The returned consensus object is a Seq object whose alphabet is inferred from the alphabets of the sequences making up the consensus. So doing a \verb|print(consensus)| would give:

\begin{minted}{text}
consensus Seq('TATACATNAAAGNAGGGGGATGCGGATAAATGGAAAGGCGAAAGAAAGAAAAAAATGAAT ...', IUPACAmbiguousDNA())
\end{minted}

You can adjust how \verb|dumb_consensus| works by passing optional parameters:
\begin{description}
\item[the threshold] This is the threshold specifying how common a particular residue has to be at a position before it is added. The default is $0.7$ (meaning $70\%$).
\item[the ambiguous character] This is the ambiguity character to use. The default is 'N'.
\item[the consensus alphabet] This is the alphabet to use for the consensus sequence. If an alphabet is not specified, then we will try to guess the alphabet based on the alphabets of the sequences in the alignment.
\end{description}

\subsection{Position Specific Score Matrices}
\label{sec:pssm}

Position specific score matrices (PSSMs) summarize the alignment information in a different way than a consensus, and may be useful for different tasks. Basically, a PSSM is a count matrix. For each column in the alignment, the number of occurrences of each alphabet letter is counted and totaled. The totals are displayed relative to some representative sequence along the left axis. This sequence may be the consensus sequence, but can also be any sequence in the alignment. For instance for the alignment,

\begin{minted}{text}
GTATC
AT--C
CTGTC
\end{minted}

\noindent the PSSM is:

\begin{minted}{text}
      G A T C
    G 1 1 0 1
    T 0 0 3 0
    A 1 1 0 0
    T 0 0 2 0
    C 0 0 0 3
\end{minted}

Let's assume we've got an alignment object called \verb|c_align|. To get a PSSM with the consensus sequence along the side we first get a summary object and calculate the consensus sequence:

\begin{minted}{python}
summary_align = AlignInfo.SummaryInfo(c_align)
consensus = summary_align.dumb_consensus()
\end{minted}

Now, we want to make the PSSM, but ignore any \verb|N| ambiguity residues when calculating this:

\begin{minted}{python}
my_pssm = summary_align.pos_specific_score_matrix(consensus, chars_to_ignore = ['N'])
\end{minted}

Two notes should be made about this:
\begin{enumerate}
\item To maintain strictness with the alphabets, you can only include characters along the top of the PSSM that are in the alphabet of the alignment object. Gaps are not included along the top axis of the PSSM.
\item The sequence passed to be displayed along the left side of the axis does not need to be the consensus. For instance, if you wanted to display the second sequence in the alignment along this axis, you would need to do:

\begin{minted}{python}
second_seq = alignment.get_seq_by_num(1)
my_pssm = summary_align.pos_specific_score_matrix(second_seq, chars_to_ignore = ['N'])
\end{minted}
\end{enumerate}

The command above returns a \verb|PSSM| object. To print out the PSSM as shown above, we simply need to do a \verb|print(my_pssm)|, which gives:

\begin{minted}{text}
    A   C   G   T
T  0.0 0.0 0.0 7.0
A  7.0 0.0 0.0 0.0
T  0.0 0.0 0.0 7.0
A  7.0 0.0 0.0 0.0
C  0.0 7.0 0.0 0.0
A  7.0 0.0 0.0 0.0
T  0.0 0.0 0.0 7.0
T  1.0 0.0 0.0 6.0
...
\end{minted}

You can access any element of the PSSM by subscripting like \verb|your_pssm[sequence_number][residue_count_name]|.
For instance, to get the counts for the 'A' residue in the second element of the above PSSM you would do:

\begin{minted}{pycon}
>>> print(my_pssm[1]["A"])
7.0
\end{minted}

The structure of the PSSM class hopefully makes it easy both to access elements and to pretty print the matrix.

\subsection{Information Content}
\label{sec:getting_info_content}

A potentially useful measure of evolutionary conservation is the information content of a sequence. A useful introduction to information theory targeted towards molecular biologists can be found at \url{http://www.lecb.ncifcrf.gov/~toms/paper/primer/}. For our purposes, we will be looking at the information content of a consensus sequence, or a portion of a consensus sequence. We calculate information content at a particular column in a multiple sequence alignment using the following formula:

\begin{displaymath}
IC_{j} = \sum_{i=1}^{N_{a}} P_{ij} \mathrm{log}\left(\frac{P_{ij}}{Q_{i}}\right)
\end{displaymath}

\noindent where:
\begin{itemize}
\item $IC_{j}$ -- The information content for the $j$-th column in an alignment.
\item $N_{a}$ -- The number of letters in the alphabet.
\item $P_{ij}$ -- The frequency of a particular letter $i$ in the $j$-th column (i.~e.~if G occurred 3 out of 6 times in an alignment column, this would be 0.5).
\item $Q_{i}$ -- The expected frequency of a letter $i$. This is an optional argument, usage of which is left at the user's discretion. By default, it is automatically assigned to $0.05 = 1/20$ for a protein alphabet, and $0.25 = 1/4$ for a nucleic acid alphabet. This is for getting the information content without any assumption of prior distributions. When assuming priors, or when using a non-standard alphabet, you should supply the values for $Q_{i}$.
\end{itemize}

Well, now that we have an idea what information content is being calculated in Biopython, let's look at how to get it for a particular region of the alignment. First, we need to use our alignment to get an alignment summary object, which we'll assume is called \verb|summary_align| (see section~\ref{sec:summary_info} for instructions on how to get this). Once we've got this object, calculating the information content for a region is as easy as:

\begin{minted}{python}
info_content = summary_align.information_content(5, 30, chars_to_ignore = ['N'])
\end{minted}

Wow, that was much easier than the formula above made it look! The variable \verb|info_content| now contains a float value specifying the information content over the specified region (from 5 to 30 of the alignment). We specifically ignore the ambiguity residue 'N' when calculating the information content, since this value is not included in our alphabet (so we shouldn't be interested in looking at it!).

As mentioned above, we can also calculate relative information content by supplying the expected frequencies:

\begin{minted}{python}
expect_freq = {'A': .3, 'G': .2, 'T': .3, 'C': .2}
\end{minted}

The expected frequencies should not be passed as a raw dictionary, but instead should be passed as a \verb|SubsMat.FreqTable| object (see section~\ref{sec:freq_table} for more information about FreqTables). The FreqTable object provides a standard for associating the dictionary with an Alphabet, similar to how the Biopython Seq class works.
To create a FreqTable object from the frequency dictionary, you just need to do:

\begin{minted}{python}
from Bio.Alphabet import IUPAC
from Bio.SubsMat import FreqTable

e_freq_table = FreqTable.FreqTable(expect_freq, FreqTable.FREQ, IUPAC.unambiguous_dna)
\end{minted}

Now that we've got that, calculating the relative information content for our region of the alignment is as simple as:

\begin{minted}{python}
info_content = summary_align.information_content(5, 30, e_freq_table = e_freq_table, chars_to_ignore = ['N'])
\end{minted}

Now, \verb|info_content| will contain the relative information content over the region in relation to the expected frequencies. The value returned is calculated using base 2 as the logarithm base in the formula above. You can modify this by passing the parameter \verb|log_base| as the base you want:

\begin{minted}{python}
info_content = summary_align.information_content(5, 30, log_base = 10, chars_to_ignore = ['N'])
\end{minted}

By default, nucleotide or amino acid residues with a frequency of 0 in a column are not taken into account when the relative information content for that column is computed. If this is not the desired result, you can use \verb|pseudo_count| instead.

\begin{minted}{python}
info_content = summary_align.information_content(5, 30, chars_to_ignore = ['N'], pseudo_count = 1)
\end{minted}

In this case, the observed frequency $P_{ij}$ of a particular letter $i$ in the $j$-th column is computed as follows:

\begin{displaymath}
P_{ij} = \frac{n_{ij} + k\times Q_{i}}{N_{j} + k}
\end{displaymath}

\noindent where:
\begin{itemize}
\item $n_{ij}$ -- the number of times the letter $i$ was observed in the $j$-th column.
\item $N_{j}$ -- the total number of letters observed in the $j$-th column (ignored characters excluded).
\item $k$ -- the pseudo count you pass as argument.
\item $Q_{i}$ -- The expected frequency of the letter $i$ as described above.
\end{itemize}

Well, now you are ready to calculate information content. If you want to try applying this to some real life problems, it would probably be best to dig into the literature on information content to get an idea of how it is used. Hopefully your digging won't reveal any mistakes made in coding this function!

\section{Substitution Matrices}
\label{sec:sub_matrix}

Substitution matrices are an extremely important part of everyday bioinformatics work. They provide the scoring terms for classifying how likely two different residues are to substitute for each other. This is essential in doing sequence comparisons. The book ``Biological Sequence Analysis'' by Durbin et al. provides a really nice introduction to Substitution Matrices and their uses. Some famous substitution matrices are the PAM and BLOSUM series of matrices.

Biopython provides a ton of common substitution matrices, and also provides functionality for creating your own substitution matrices.

\subsection{Using common substitution matrices}

\subsection{Creating your own substitution matrix from an alignment}
\label{sec:subs_mat_ex}

A very cool thing that you can do easily with the substitution matrix classes is to create your own substitution matrix from an alignment. In practice, this is normally done with protein alignments. In this example, we'll first get a Biopython alignment object and then get a summary object to calculate info about the alignment. The file \href{examples/protein.aln}{protein.aln} (also available online \href{https://raw.githubusercontent.com/biopython/biopython/master/Doc/examples/protein.aln}{here}) contains the Clustalw alignment output.
%doctest examples \begin{minted}{pycon} >>> from Bio import AlignIO >>> from Bio import Alphabet >>> from Bio.Alphabet import IUPAC >>> from Bio.Align import AlignInfo >>> filename = "protein.aln" >>> alpha = Alphabet.Gapped(IUPAC.protein) >>> c_align = AlignIO.read(filename, "clustal", alphabet=alpha) >>> summary_align = AlignInfo.SummaryInfo(c_align) \end{minted} Sections~\ref{sec:align_clustal} and~\ref{sec:summary_info} contain more information on doing this. Now that we've got our \verb|summary_align| object, we want to use it to find out the number of times different residues substitute for each other. To make the example more readable, we'll focus on only amino acids with polar charged side chains. Luckily, this can be done easily when generating a replacement dictionary, by passing in all of the characters that should be ignored. Thus we'll create a dictionary of replacements for only charged polar amino acids using: %cont-doctest \begin{minted}{pycon} >>> replace_info = summary_align.replacement_dictionary(["G", "A", "V", "L", "I", ... "M", "P", "F", "W", "S", ... "T", "N", "Q", "Y", "C"]) \end{minted} This information about amino acid replacements is represented as a python dictionary which will look something like (the order can vary): \begin{minted}{python} {('R', 'R'): 2079.0, ('R', 'H'): 17.0, ('R', 'K'): 103.0, ('R', 'E'): 2.0, ('R', 'D'): 2.0, ('H', 'R'): 0, ('D', 'H'): 15.0, ('K', 'K'): 3218.0, ('K', 'H'): 24.0, ('H', 'K'): 8.0, ('E', 'H'): 15.0, ('H', 'H'): 1235.0, ('H', 'E'): 18.0, ('H', 'D'): 0, ('K', 'D'): 0, ('K', 'E'): 9.0, ('D', 'R'): 48.0, ('E', 'R'): 2.0, ('D', 'K'): 1.0, ('E', 'K'): 45.0, ('K', 'R'): 130.0, ('E', 'D'): 241.0, ('E', 'E'): 3305.0, ('D', 'E'): 270.0, ('D', 'D'): 2360.0} \end{minted} This information gives us our accepted number of replacements, or how often we expect different things to substitute for each other. It turns out, amazingly enough, that this is all of the information we need to go ahead and create a substitution matrix. First, we use the replacement dictionary information to create an Accepted Replacement Matrix (ARM): %cont-doctest \begin{minted}{pycon} >>> from Bio import SubsMat >>> my_arm = SubsMat.SeqMat(replace_info) \end{minted} With this accepted replacement matrix, we can go right ahead and create our log odds matrix (i.~e.~a standard type Substitution Matrix): %cont-doctest \begin{minted}{pycon} >>> my_lom = SubsMat.make_log_odds_matrix(my_arm) \end{minted} The log odds matrix you create is customizable with the following optional arguments: \begin{itemize} \item \verb|exp_freq_table| -- You can pass a table of expected frequencies for each alphabet. If supplied, this will be used instead of the passed accepted replacement matrix when calculate expected replacments. \item \verb|logbase| - The base of the logarithm taken to create the log odd matrix. Defaults to base 10. \item \verb|factor| - The factor to multiply each matrix entry by. This defaults to 10, which normally makes the matrix numbers easy to work with. \item \verb|round_digit| - The digit to round to in the matrix. This defaults to 0 (i.~e.~no digits). \end{itemize} Once you've got your log odds matrix, you can display it prettily using the function \verb|print_mat|. Doing this on our created matrix gives: %cont-doctest \begin{minted}{pycon} >>> my_lom.print_mat() D 2 E -1 1 H -5 -4 3 K -10 -5 -4 1 R -4 -8 -4 -2 2 D E H K R \end{minted} Very nice. Now we've got our very own substitution matrix to play with! 
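To make the \verb|logbase|, \verb|factor| and \verb|round_digit| options above a little more concrete, here is a small self-contained sketch of the log odds arithmetic. The helper function and the example frequencies are made up purely for illustration; this is not the \verb|SubsMat| implementation, which derives the observed and expected frequencies from the accepted replacement matrix itself. It does show why a replacement seen more often than expected by chance gets a positive score, and one seen less often gets a negative score:

\begin{minted}{python}
import math

def log_odds_score(observed_freq, expected_freq, logbase=10, factor=10, round_digit=0):
    # Log odds: how much more (or less) often a replacement is observed
    # than expected by chance, scaled by 'factor' and rounded for readability.
    return round(factor * math.log(observed_freq / expected_freq, logbase), round_digit)

# A replacement pair observed at 2% of aligned positions but expected at 1%
# by chance scores +3 with the default scaling; one observed at half the
# expected rate scores -3.
print(log_odds_score(0.02, 0.01))   # 3.0
print(log_odds_score(0.005, 0.01))  # -3.0
\end{minted}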
\section{BioSQL -- storing sequences in a relational database} \label{sec:BioSQL} \href{https://www.biosql.org/}{BioSQL} is a joint effort between the \href{https://www.open-bio.org/wiki/Main_Page}{OBF} projects (BioPerl, BioJava etc) to support a shared database schema for storing sequence data. In theory, you could load a GenBank file into the database with BioPerl, then using Biopython extract this from the database as a record object with features - and get more or less the same thing as if you had loaded the GenBank file directly as a SeqRecord using \verb|Bio.SeqIO| (Chapter~\ref{chapter:seqio}). Biopython's BioSQL module is currently documented at \url{http://biopython.org/wiki/BioSQL} which is part of our wiki pages.
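As a rough sketch of what this looks like from the Biopython side, the code below loads a GenBank file into a BioSQL database and pulls one record back out as a \verb|SeqRecord|. Treat it as an outline only: the driver, connection settings, sub-database name and accession are placeholders for this example, and you will need a BioSQL schema set up first (see the wiki page above).

\begin{minted}{python}
from Bio import SeqIO
from BioSQL import BioSeqDatabase

# Placeholder connection details -- adjust the driver, user, password and
# host to match your own BioSQL installation.
server = BioSeqDatabase.open_database(
    driver="MySQLdb", user="biosql", passwd="secret", host="localhost", db="bioseqdb"
)
db = server.new_database("orchid", description="Just for testing")
count = db.load(SeqIO.parse("ls_orchid.gbk", "genbank"))
print("Loaded %i records" % count)
server.commit()

# Later (or from another OBF project sharing the same database), fetch a
# record back out as a SeqRecord, complete with its features:
seq_record = db.lookup(accession="Z78533")
print(seq_record.id, len(seq_record))
\end{minted}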
{ "alphanum_fraction": 0.7400767896, "avg_line_length": 42.8997655334, "ext": "tex", "hexsha": "5bc400eb12c9430d49cb74f4a3cbc1a78cd66402", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "120616cf0d28cb8e581898afd6604e5a2065a137", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "EsamTolba/biopython", "max_forks_repo_path": "Doc/Tutorial/chapter_cookbook.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "120616cf0d28cb8e581898afd6604e5a2065a137", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "EsamTolba/biopython", "max_issues_repo_path": "Doc/Tutorial/chapter_cookbook.tex", "max_line_length": 513, "max_stars_count": 1, "max_stars_repo_head_hexsha": "120616cf0d28cb8e581898afd6604e5a2065a137", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "EsamTolba/biopython", "max_stars_repo_path": "Doc/Tutorial/chapter_cookbook.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-13T14:32:44.000Z", "max_stars_repo_stars_event_min_datetime": "2020-02-13T14:32:44.000Z", "num_tokens": 19860, "size": 73187 }
\documentclass[pdftex,12pt,a4paper,oneside]{book} \usepackage[australian]{babel} \usepackage{graphicx,color} \usepackage{epsf,verbatim} \textheight = 257 true mm \textwidth = 160 true mm \hoffset = -12 true mm \voffset = -30 true mm \usepackage[pdftex,colorlinks,plainpages=false]{hyperref} %\usepackage[style=list,number=page]{glossary} %\makeglossary % \+ permits optional breaking of a word across lines without the % hyphen. \newcommand{\+}{\discretionary{}{}{}} \newcommand{\link}{\htmladdnormallink} \begin{document} % The beginning of the document. %\bibliographystyle{amsplain} \title{ESyS-Particle Tutorial and User's Guide \\ Version 2.3.1} \author{D. Weatherley, W. Hancock \& V. Boros\\\small{The University of Queensland} \\ S. Abe \\\small{Institute for Geothermal Resource Management} } \date{\today} \maketitle \section*{Preface} This document provides an introduction to Discrete Element Method (DEM) modelling using the ESyS-Particle Simulation Software developed by the \link{Centre for Geoscience Computing}{http://earth.uq.edu.au/centre-geoscience-computing} at \link{The University of Queensland}{http://www.uq.edu.au}. The guide is intended for new users and is written as a step-by-step tutorial on the basic principles and usage of the ESyS-Particle software. Readers are encouraged to obtain \link{a copy of the software}{https://launchpad.net/esys-particle/} and try the examples presented here. Readers are assumed to have had some experience using \link{Python}{http://www.python.org} and to be familiar with the fundamental principles of the DEM. If you have never used Python before, the \link{Python Language Tutorial}{http://docs.python.org/tut/tut.html} is an excellent starting point. \tableofcontents \listoffigures %EXAMPLE FOR MAKING GLOSSARY ENTRIES %NOAA\glossary{name=NOAA, %description=National Oceanographic and Atmospheric Administration,format=textbf} %EXAMPLE FOR CITING PAPERS %Impact of NTHMP on PTWC~\cite{McCreery2001} \include{bodytext} \include{gengeo} \newpage \section{Additional ESyS-Particle resources and documentation} \appendix \chapter{Code-listings for tutorial examples}\label{code} \section{ESyS-Particle scripts} \include{examples/bingle.py} \include{examples/bingle_output.py} \include{examples/bingle_chk.py} \include{examples/bingle_vis.py} \include{examples/POVsnaps.py} \include{examples/bingle_Runnable.py} \include{examples/gravity.py} \include{examples/gravity_cube.py} \include{examples/slope_fail.py} \include{examples/slope_friction.py} \include{examples/slope_friction_floor.py} \include{examples/slope_friction_walls.py} \include{examples/floorMesh.msh} \include{examples/hopper_flow.py} \include{examples/rot_compress.py} \include{examples/WallLoader.py} \include{examples/shearcell.py} \include{examples/ServoWallLoader.py} \section{\texttt{GenGeo} examples}\label{sec:geocode} \include{examples/simple_box_compact.py} \include{examples/smooth_box_compact.py} \include{examples/cluster_box.py} \chapter[Interaction Groups \& Fields]{Tables of ESyS-Particle Interaction Groups and Field Savers} \label{tables} \include{tables/interaction_groups} \include{tables/field_names} \chapter{File Formats} \include{tables/file_formats} \include{tables/file_format_description} %\bibliography{paper} %\printglossary \end{document} % The end of the document.
{ "alphanum_fraction": 0.7698320787, "avg_line_length": 37.5434782609, "ext": "tex", "hexsha": "c50e34bc046526005f879e500bb9868b183aaea3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "danielfrascarelli/esys-particle", "max_forks_repo_path": "Doc/Tutorial/paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "danielfrascarelli/esys-particle", "max_issues_repo_path": "Doc/Tutorial/paper.tex", "max_line_length": 873, "max_stars_count": null, "max_stars_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "danielfrascarelli/esys-particle", "max_stars_repo_path": "Doc/Tutorial/paper.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 916, "size": 3454 }
\par
\chapter{{\tt SubMtxList}: {\tt SubMtx} list object }
\par
This object was created to handle a list of lists of {\tt SubMtx} objects during a matrix solve. Its form and function are very close to the {\tt ChvList} object that handles lists of lists of {\tt Chv} objects during the factorization.
\par
Here are the main properties.
\begin{enumerate}
\item
There are a fixed number of lists, set when the {\tt SubMtxList} object is initialized.
\item
For each list there is an expected count, the number of times an object will be added to the list. (Note, a {\tt NULL} object can be added to the list. In this case, nothing is added to the list, but its count is decremented.)
\item
There is one lock for all the lists, but each list can be flagged as necessary to lock or not necessary to lock before an insertion into, count decrement for, or extraction from the list is made.
\end{enumerate}
\par
The {\tt SubMtxList} object manages a number of lists that may require handling critical sections of code. For example, one thread may want to add an object to a particular list while another thread is removing objects. The critical sections are hidden inside the {\tt SubMtxList} object. Our solve code does not know about any mutual exclusion locks that govern access to the lists.
\par
There are four functions of the {\tt SubMtxList} object.
\begin{itemize}
\item
Is the incoming count for a list nonzero?
\item
Is a list nonempty?
\item
Add an object to a list (possibly a {\tt NULL} object) and decrement the incoming count.
\item
Remove a subset of objects from a list.
\end{itemize}
The first two operations are queries, and can be done without locking the list. The third operation needs a lock only when two or more threads will be inserting objects into the list. The fourth operation requires a lock only when one thread will add an object while another thread removes the object and the incoming count is not yet zero.
\par
Having a lock associated with a {\tt SubMtxList} object is optional; for example, it is not needed during a serial factorization nor an MPI solve. In the latter case there is one {\tt SubMtxList} per process. For a multithreaded solve there is one {\tt SubMtxList} object that is shared by all threads. The mutual exclusion lock that is (optionally) embedded in the {\tt SubMtxList} object is a {\tt Lock} object from this library. It is inside the {\tt Lock} object that we have a mutual exclusion lock. Presently we support the Solaris and POSIX thread packages. Porting the multithreaded codes to another platform should be simple if the POSIX thread package is present. Another type of thread package will require some modifications to the {\tt Lock} object, but none to the {\tt SubMtxList} objects.
\par
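\par
The following small sketch is intended only to illustrate the bookkeeping described above: per-list incoming counts, lock-free queries, and a single optional lock guarding insertions and removals. It is written in Python for brevity and uses invented names; it is not the SPOOLES C interface.
\begin{verbatim}
import threading
from contextlib import nullcontext

class ObjectListManager:
    """Illustration only: a fixed number of lists, each with an expected
    incoming count; queries take no lock, updates take the shared lock."""

    def __init__(self, incoming_counts, use_lock=True):
        self.lists = [[] for _ in incoming_counts]   # one list of objects per slot
        self.counts = list(incoming_counts)          # expected insertions per slot
        self.lock = threading.Lock() if use_lock else nullcontext()

    def count_nonzero(self, ilist):   # query: more objects still to arrive?
        return self.counts[ilist] > 0

    def is_nonempty(self, ilist):     # query: any objects waiting?
        return len(self.lists[ilist]) > 0

    def add(self, ilist, obj):        # insert (obj may be None), decrement count
        with self.lock:
            if obj is not None:
                self.lists[ilist].append(obj)
            self.counts[ilist] -= 1

    def remove_all(self, ilist):      # extract every object currently waiting
        with self.lock:
            objs, self.lists[ilist] = self.lists[ilist], []
            return objs
\end{verbatim}
\par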
{ "alphanum_fraction": 0.7778993435, "avg_line_length": 39.1714285714, "ext": "tex", "hexsha": "ec7ab715d1f4e584a2e279a30f34e16d7a7ea9d8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/SubMtxList/doc/intro.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/SubMtxList/doc/intro.tex", "max_line_length": 68, "max_stars_count": null, "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/SubMtxList/doc/intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 679, "size": 2742 }
\section{Acknowledgments} This material is based upon work supported under an Integrated University Program Graduate Fellowship. The authors would like to thank Nathan Ryan of the University of Illinois Urbana-Champaign for his assistance in technical editing.
{ "alphanum_fraction": 0.8396946565, "avg_line_length": 43.6666666667, "ext": "tex", "hexsha": "3cec5b148e76277790c007c505569d2db802eddd", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-09-16T13:12:19.000Z", "max_forks_repo_forks_event_min_datetime": "2021-09-16T13:12:19.000Z", "max_forks_repo_head_hexsha": "a64fa461f9507ee27a6e8a2449a08fb3bdbad485", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "abachma2/2021-bachmann-epjn", "max_forks_repo_path": "acks.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "a64fa461f9507ee27a6e8a2449a08fb3bdbad485", "max_issues_repo_issues_event_max_datetime": "2021-10-05T13:24:12.000Z", "max_issues_repo_issues_event_min_datetime": "2021-07-08T17:29:46.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "abachma2/2021-bachmann-epjn", "max_issues_repo_path": "acks.tex", "max_line_length": 90, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a64fa461f9507ee27a6e8a2449a08fb3bdbad485", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "abachma2/2021-bachmann-epjn", "max_stars_repo_path": "acks.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-30T12:56:38.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-30T12:56:38.000Z", "num_tokens": 50, "size": 262 }
\documentclass{memoir} \usepackage{notestemplate} %\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png} %\institute{Rice University} %\faculty{Faculty of Whatever Sciences} %\department{Department of Mathematics} %\title{Class Notes} %\subtitle{Based on MATH xxx} %\author{\textit{Author}\\Gabriel \textsc{Gress}} %\supervisor{Linus \textsc{Torvalds}} %\context{Well, I was bored...} %\date{\today} %\makeindex \begin{document} % \maketitle % Notes taken on 06/03/21 \section{Group Representations and Free Groups} \label{sec:group_representations_and_free_groups} We revisit group representations by introducing some new concepts to incorporate, and see how that allows us to expand our theory. \begin{defn}[Commutator] Let \(x,y \in G\) be elements of a group, and let \(A,B \subset G\) be nonempty subsetf of \(G\). The \textbf{commutator of \(x\) and \(y\)} is denoted by \begin{align*} [x,y] = x ^{-1}y^{-1}xy \end{align*} and the group generated by commutators of elements from \(A,B\) is denoted by \begin{align*} [A,B] = \langle [a,b] \mid a \in A, b \in B \rangle . \end{align*} We can also define a subgroup of \(G\) by the group generated by commutators of elements of \(G\): \begin{align*} G' = \langle [x,y] \mid x,y \in G \rangle \end{align*} We call this the \textbf{commutator subgroup} of \(G\). \end{defn} This terminology arises because the commutator of \(x,y\) is 1 if and only if \(x\) and \(y \) commute. \begin{prop}[Properties of commutators] Let \(x,y \in G\) be elements of a group and let \(H\leq G\). Then \begin{itemize} \item \(xy = yx[x,y]\) \item \(H \triangleleft G\) if and only if \([H,G] \leq H\) \item \(\sigma [x,y] = [\sigma (x),\sigma (y)]\) for any automorphism \(\sigma \) of \(G\). Hence, \(G' \textrm{char}G\), and \(G / G'\) is abelian. \item If \(H \triangleleft G\) and \(G/H\) is abelian, then \(G' \leq H\). Conversely, if \(G' \leq H\), then \(H \triangleleft G\) and \(G / H\) is abelian. \item If \(\varphi :G\to H\) is a homomorphism of \(G\) into \(H\) and \(H\) is abelian, then \(G' \leq \textrm{Ker}\varphi \) and the following diagram commutes: \begin{center} \begin{tikzpicture} \matrix (m) [ matrix of math nodes, row sep = 3em, column sep = 4em ] { G & G / G' \\ & H \\ }; \path (m-1-2) edge [->] node {} (m-2-2) (m-1-1.east |- m-1-2) edge [->] node {} (m-1-2) (m-1-1) edge [->] node [below] {$\varphi$} (m-2-2); \end{tikzpicture} \end{center} \end{itemize} \end{prop} The way to think about this is that by passing to the quotient by the commutator subgroup of \(G\), we collapse all commutators to identity. Hence, all elements in the quotient group commute. This is why we have such a strong property in that, if \(G' \leq H\), then \(G / H\) must be abelian.\\ One word of caution-- there can be elements of the commutator subgroup that \textit{cannot} be written as a single commutator \([x,y]\) for any \(x,y\). In other words, \(G'\) is not just the set of single commutators, but is the group generated by elements of that form. \begin{prop} Let \(H,K \leq G\) be subgroups. The number of distinct ways of writing each element of the set \(HK\) in the form \(hk\), for some \(h \in H\), \(k \in K\), is \(\left| H \cap K \right| \).\\ If \(H\cap K = 1\), then each element of \(HK\) can be written uniquely as a product \(hk\) for some \(h \in H\), \(k \in K\). \end{prop} \begin{thm} Let \(H,K\leq G\) be subgroups of \(G\) such that \(H,K \triangleleft G\) and \(H\cap K = 1\). 
Then \begin{align*} HK \cong H\times K \end{align*} \end{thm} \subsection{Free Groups} \label{sub:free_groups} The idea of the free group is to define a group \(F(S)\) to be generated by some set \(S\) with no relations on any of the elements of \(S\). For example, if \(S = \left\{ a,b \right\} \), then some elements of \(F(S)\) would be of the form \(a,aa,ab,abab,bab\), as well as the inverses of these elements. We call elements of a free group \textbf{words}. Then we can multiply elements in the free group simply by concatenation. Our goal will be to define this formally and show it indeed satisfies the necessary properties. \begin{general}[Construction of Free Groups] Let \(S\) be a set, and let \(S^{-1}\) be a set disjoint from \(S\) such that there is a bijection from \(S\) to \(S^{-1}\). We denote the corresponding element for \(s \in S\) to be \(s\mapsto s^{-1}\in S^{-1}\), and furthermore we denote \((s^{-1})^{-1} = s\). Finally, we add a third singleton set disjoint from \(S,S^{-1}\) and call it \(\left\{ 1 \right\} \), and define it so \(1^{-1} = 1\). We also define that for any \(x \in S \cup S^{-1}\cup \left\{ 1 \right\} \), \(x^{1} = x\).\\ A \textbf{word} on \(S\) is a sequence \((s_1,s_2,s_3,\ldots)\) where \(s_i \in S\cup S^{-1} \cup \left\{ 1 \right\} \), and \(s_i = 1\) for all \(i\geq N\) for some arbitrarily large \(N\) (so that words are "infinite", but not in practice). In order to get uniqueness of words, we say a word is \textbf{reduced} if \begin{align*} s_{i+1} \neq s_{i}^{-1} \quad \forall i, s_i \neq 1\\ s_k = 1 \implies s_i = 1 \; \forall i \geq k \end{align*} We refer to the special word given by \begin{align*} (1,1,1,\ldots) \end{align*} to be the \textbf{empty word} and denote it by \(1\). Let \(F(S)\) be the set of reduced words on \(S\), and embed mKS into \(F(S)\) by \begin{align*} s \mapsto (s,1,1,1,\ldots) \end{align*} Hence we identify \(S\) with its image and consider \(S\subset F(S)\). Notie that if \(S = \emptyset\), \(F(S) = \left\{ 1 \right\} \).\\ Now we simply introduce a binary operation on \(F(S)\), so that two words in \(F(S)\) are concatenated, then reduced to their reduced word form. We leave the details of defining this binary operation to the reader, but one can check that this operation is well-defined and satisfies all the properties of a group operation. \end{general} \begin{thm} \(F(S)\) is a group by the binary operation of word concatenation with reduction. \end{thm} Furthermore, free groups satisfy a special kind of universal property. \begin{thm} Let \(G\) be a group, \(S\) a set, and \(\varphi :S\to G\) a set map. There is a unique group homomorphism \(\Phi :F(S) \to G\) such that the following diagram commutes: \begin{center} \begin{tikzpicture} \matrix (m) [ matrix of math nodes, row sep = 3em, column sep = 4em ] { S & F(S) \\ & G \\ }; \path (m-1-2) edge [->] node [right] {\(\Phi \)} (m-2-2) (m-1-1.east |- m-1-2) edge [->] node [above] {inclusion} (m-1-2) (m-1-1) edge [->] node [below] {$\varphi$} (m-2-2); \end{tikzpicture} \end{center} \end{thm} This further shows that \(F(S)\) is unique up to a unique isomorphism, which is the identity map on the set \(S\). \begin{defn}[Free Group] The group \(F(S)\) is called the \textbf{free group} on the set \(S\). A group \(F\) is a \textbf{free group} if there is some set \(S\) such that \(F = F(S)\), in which case we call \(S\) a set of \textbf{free generators} of \(F\). The cardinality of \(S\) is called the \textbf{rank} of the free group. 
\end{defn} \begin{thm} Subgroups of a free group are free. \end{thm} Furthermore, if \(G\leq F\) are free and \([F:G] = m\), then \begin{align*} \textrm{rank}(G) = 1 + m(\textrm{rank}(F)-1) \end{align*} Proving this requires a lot of other tools, such as covering spaces. \subsection{Presentations} \label{sub:presentations} Notice that if we take \(S = G\), then we can view \(G\) as a homomorphic image of the free group \(F(G)\) onto \(G\). Moreover, if \(G= \langle S \rangle \), there is a unique surjective homomorphism from \(F(S)\) onto \(G\) which is the identity on \(S\). This allows us to construct a more powerful construction of presentations, generators, and relations. \begin{defn} A subset \(S\subset G\) \textbf{generates \(G\)} by \(G = \langle S \rangle \) if and only if the map \(\pi :F(S) \to G\) which extends the identity map of \(S\) to \(G\) is surjective. \end{defn} This is distinct but equivalent to our earlier notion for subsets generating a group. However, it is more flexible, so we will use this from here on out. \begin{defn}[Presentations, Generators, and Relations] Let \(S\subset G\) be a subset of \(G\) such that \(G = \langle S \rangle \). A \textbf{presentation} for \(G\) is a pair \((S,R)\), where \(R\) is a set of words in \(F(S)\) such that \begin{align*} \textrm{ncl}_{F(S)}(\langle R \rangle ) = \textrm{Ker}(\pi ) \end{align*} where \(\textrm{ncl}\) denotes the normal closure (the smallest normal subgroup containing \(\langle R \rangle \)). The elements of \(S\) are called \textbf{generators}, and the elements of \(R\) are called \textbf{relations} of \(G\).\\ We say \(G\) is \textbf{finitely generated} if there is a presentation \((S,R)\) such that \(S\) is finite. Furthermore, \(G\) is \textbf{finitely presented} if \(R\) is also finite. \end{defn} A word of caution-- the kernel of the map \(F(S) \to G\) is \textit{not} \(\langle R \rangle \), but instead the union of all subsets conjugate to \(\langle R \rangle \) (including \(\langle R \rangle \) itself). Furthermore, even if \(S\) is fixed, a group will have many different presentations.\\ Finally, often when writing relations, if we have \(w_1w_2^{-1} = 1\), we might instead write \(w_1=w_2\), or vice versa. \subsection{Applying presentations to find homomorphisms and automorphisms} \label{sub:applying_presentations_to_find_homomorphisms_and_automorphisms} Suppose \(G\) is presented by \((\langle a,b \rangle , \langle r_1,\ldots,r_k \rangle )\). Then if \(a',b' \in H\) are elements that satisfy \(r_1,\ldots,r_k\), then there is a homomorphism from \(G\) into \(H\). If \(\pi :F(\left\{ a,b \right\} )\to G\) is the presentation homomorphism, we can define \begin{align*} \pi ':F(\left\{ a,b \right\} ) \to H\\ \pi'(a) = a', \; \pi'(b) = b'. \end{align*} This works because \(\textrm{Ker}\pi \leq \textrm{Ker}\pi'\), and so \(\pi '\) factors through \(\textrm{Ker}\pi \) and we get \begin{align*} G \cong F(\left\{ a,b \right\} ) / \textrm{Ker}\pi \to H \end{align*} Moreover, if \(\langle a',b' \rangle = H = G\), then this homomorphism is an automorphism of \(G\)(!!). In the other direction, any automorphism on a presentation must send a set of generators to another set of generators satisfying the same relations. \begin{exmp}[Dihedral presentation] Consider \(D_8 = \langle a,b \mid a^2 = b^{4} =1, aba = b^{-1} \rangle \). Any pair of elements \(a',b'\) that are of order 2 and 4 (and \(a'\) is noncentral) must satisfy the same relations. 
There are four noncentral elements of order 2, and two elements of order 4, so \(D_8\) has 8 automorphisms. \end{exmp} Similarly, any distinct pair of elements of order \(4\) in \(Q_8\) that are not inverses of each other necessarily generate \(Q_8\) and satisfy its relations. There are \(24\) such pairs, so \(\left| \textrm{Aut}(Q_8) \right| =24\). As one can see, free groups are an incredibly useful tool to classify these maps. % \printindex \end{document}
{ "alphanum_fraction": 0.6543552137, "avg_line_length": 55.631840796, "ext": "tex", "hexsha": "95e561264ea247be8b09dc609689838da0624c17", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_path": "Group Theory/Notes/source/GroupRepresentations2.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_path": "Group Theory/Notes/source/GroupRepresentations2.tex", "max_line_length": 523, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_path": "Group Theory/Notes/source/GroupRepresentations2.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "num_tokens": 3705, "size": 11182 }
\section{Exercise 3. An equicontinuous sequence of measures}
\input{\ROOT/chapter_2/2_03/2_03.tex}
\section{Exercise 6. Fourier series may diverge at $0$}
\input{\ROOT/chapter_2/2_06.tex}
\section{Exercise 9. Boundedness without closedness}
\input{\ROOT/chapter_2/2_09.tex}
\newpage
\section{Exercise 10. Continuity of bilinear mappings}
\input{\ROOT/chapter_2/2_10.tex}
\newpage
\section{Exercise 12. A bilinear mapping that is not continuous}
\input{\ROOT/chapter_2/2_12.tex}
\newpage
\section{Exercise 15. Baire cut}
\input{\ROOT/chapter_2/2_15.tex}
\newpage
\section{Exercise 16. An elementary closed graph theorem}
\input{\ROOT/chapter_2/2_16.tex}
{ "alphanum_fraction": 0.786259542, "avg_line_length": 36.3888888889, "ext": "tex", "hexsha": "7d0dce220db50f9f82cf619d02184eb8015a42ce", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gitcordier/FunctionalAnalysis", "max_forks_repo_path": "chapter_2/FA_chapter_2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gitcordier/FunctionalAnalysis", "max_issues_repo_path": "chapter_2/FA_chapter_2.tex", "max_line_length": 64, "max_stars_count": null, "max_stars_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gitcordier/FunctionalAnalysis", "max_stars_repo_path": "chapter_2/FA_chapter_2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 218, "size": 655 }
\newcommand{\createRUpd}[4]{\fun{createRUpd}~\var{#1}~\var{#2}~\var{#3}~\var{#4}} \newcommand{\mkApparentPerformance}[3]{\fun{mkApparentPerformance}~{#1}~\var{#2}~\var{#3}} \newcommand{\Q}{\ensuremath{\mathbb{Q}}} \newcommand{\ActiveSlotCoeff}{\mathsf{ActiveSlotCoeff}} \newcommand{\EpochState}{\type{EpochState}} \newcommand{\BlocksMade}{\type{BlocksMade}} \newcommand{\RewardUpdate}{\type{RewardUpdate}} \newcommand{\PrtclState}{\type{PrtclState}} \newcommand{\PrtclEnv}{\type{PrtclEnv}} \newcommand{\PoolDistr}{\type{PoolDistr}} \newcommand{\BHBody}{\type{BHBody}} \newcommand{\HashHeader}{\type{HashHeader}} \newcommand{\HashBBody}{\type{HashBBody}} \newcommand{\BlockNo}{\type{BlockNo}} \newcommand{\Proof}{\type{Proof}} \newcommand{\OCert}{\type{OCert}} \newcommand{\bheader}[1]{\fun{bheader}~\var{#1}} \newcommand{\verifyVrf}[4]{\fun{verifyVrf}_{#1} ~ #2 ~ #3 ~#4} \newcommand{\slotToSeed}[1]{\fun{slotToSeed}~ \var{#1}} \newcommand{\XOR}{\mathsf{XOR}} \section{Removal of the Overlay Schedule} The overlay schedule was only used during the early days of the Shelley ledger, and can be safely removed. First, the protocol parameter $\var{d}$ is removed, and any functions that use it are reduced to the case $\var{d} = 0$. The function $\fun{mkApparentPerformance}$ is reduced to one of its branches, and its first argument is dropped. It is only used in the definition of $\fun{rewardOnePool}$, which needs to be adjusted accordingly. Additionally, the block header body now contains a single VRF value to be used for both the leader check and the block nonce. \begin{figure}[htb] \begin{align*} & \fun{mkApparentPerformance} \in \unitInterval \to \N \to \N \to \Q \\ & \mkApparentPerformance{\sigma}{n}{\overline{N}} = \frac{\beta}{\sigma} \\ & ~~~\where \\ & ~~~~~~~\beta = \frac{n}{\max(1, \overline{N})} \\ \end{align*} \caption{Function used in the Reward Calculation} \label{fig:functions:rewards} \end{figure} The function $\fun{createRUpd}$ is adjusted by simplifying $\eta$. \begin{figure}[htb] \emph{Calculation to create a reward update} % \begin{align*} & \fun{createRUpd} \in \N \to \BlocksMade \to \EpochState \to \Coin \to \RewardUpdate \\ & \createRUpd{slotsPerEpoch}{b}{es}{total} = \left( \Delta t_1,-~\Delta r_1+\Delta r_2,~\var{rs},~-\var{feeSS}\right) \\ & ~~~\where \\ & ~~~~~~~\dotsb \\ & ~~~~~~~\eta = \frac{blocksMade}{\floor{{slotsPerEpoch} \cdot \ActiveSlotCoeff}} \\ & ~~~~~~~\dotsb \end{align*} \caption{Reward Update Creation} \label{fig:functions:reward-update-creation} \end{figure} $\fun{incrBlocks}$ gets the same treatment as $\fun{mkApparentPerformance}$. Its invocation in $\mathsf{BBODY}$ needs to be adjusted as well. \begin{figure} \begin{align*} & \fun{incrBlocks} \in \KeyHash_{pool} \to \BlocksMade \to \BlocksMade \\ & \fun{incrBlocks}~\var{hk}~\var{b} = \begin{cases} b\cup\{\var{hk}\mapsto 1\} & \text{if }\var{hk}\notin\dom{b} \\ b\unionoverrideRight\{\var{hk}\mapsto n+1\} & \text{if }\var{hk}\mapsto n\in b \\ \end{cases} \end{align*} \end{figure} \newpage Finally, the $\mathsf{PRTCL}$ STS needs to be adjusted. To retire the $\mathsf{OVERLAY}$ STS, we inline the definition of its 'decentralized' case and drop all the unnecessary variables from its environment. It is invoked in $\mathsf{CHAIN}$, which needs to be adjusted accordingly. As there is now only a singe VRF check, slight modifications are needed for the definition of the block header body \text{BHBody} type and the function \text{vrfChecks}. 
The Shelley era accessor functions $\fun{bleader}$ and $\fun{bnonce}$ are replaced with new functions. \begin{figure*}[htb] % \emph{Block Header Body} % \begin{equation*} \BHBody = \left( \begin{array}{r@{~\in~}lr} \var{prev} & \HashHeader^? & \text{hash of previous block header}\\ \var{vk} & \VKey & \text{block issuer}\\ \var{vrfVk} & \VKey & \text{VRF verification key}\\ \var{blockno} & \BlockNo & \text{block number}\\ \var{slot} & \Slot & \text{block slot}\\ \hldiff{\var{vrfRes}} & \hldiff{\Seed} & \hldiff{\text{VRF result value}}\\ \var{prf} & \Proof & \text{vrf proof}\\ \var{bsize} & \N & \text{size of the block body}\\ \var{bhash} & \HashBBody & \text{block body hash}\\ \var{oc} & \OCert & \text{operational certificate}\\ \var{pv} & \ProtVer & \text{protocol version}\\ \end{array} \right) \end{equation*} % \emph{New Accessor Function} \begin{equation*} \begin{array}{r@{~\in~}l} \fun{bVrfRes} & \BHBody \to \Seed \\ \fun{bVrfProof} & \BHBody \to \Proof \\ \end{array} \end{equation*} % \emph{New Helper Functions} \begin{align*} & \fun{bleader} \in \BHBody \to \Seed \\ & \fun{bleader}~(\var{bhb}) = (\fun{hash}~``TEST") ~\XOR~ (\fun{bVrfRes}~\var{bhb})\\ \\ & \fun{bnonce} \in \BHBody \to \Seed \\ & \fun{bnonce}~(\var{bhb}) = (\fun{hash}~``NONCE") ~\XOR~ (\fun{bVrfRes}~\var{bhb})\\ \\ & \fun{vrfChecks} \in \Seed \to \BHBody \to \Bool \\ & \fun{vrfChecks}~\eta_0~\var{bhb} = \verifyVrf{\Seed}{\var{vrfK}}{(\slotToSeed{slot}~\XOR~\eta_0)}{(\var{value},~\var{proof}}) \\ & ~~~~\where \\ & ~~~~~~~~~~\var{slot} \leteq \bslot{bhb} \\ & ~~~~~~~~~~\var{vrfK} \leteq \fun{bvkvrf}~\var{bhb} \\ & ~~~~~~~~~~\var{value} \leteq \fun{bVrfRes}~\var{bhb} \\ & ~~~~~~~~~~\var{proof} \leteq \fun{bVrfProof}~\var{bhb} \\ \end{align*} % \caption{Block Definitions} \label{fig:defs:blocks} \end{figure*} \begin{figure} \emph{Protocol environments} \begin{equation*} \PrtclEnv = \left( \begin{array}{r@{~\in~}lr} \var{pd} & \PoolDistr & \text{pool stake distribution} \\ \eta_0 & \Seed & \text{epoch nonce} \\ \end{array} \right) \end{equation*} \caption{Protocol transition-system types} \label{fig:ts-types:prtcl} \end{figure} \begin{figure}[ht] \begin{equation}\label{eq:prtcl} \inference[PRTCL] { \var{bhb}\leteq\bheader{bh} & \eta\leteq\fun{bnonce}~(\bhbody{bhb}) \\~\\ { \eta \vdash {\left(\begin{array}{c} \eta_v \\ \eta_c \\ \end{array}\right)} \trans{\hyperref[fig:rules:update-nonce]{updn}}{\var{slot}} {\left(\begin{array}{c} \eta_v' \\ \eta_c' \\ \end{array}\right)} }\\~\\ { \vdash\var{cs}\trans{\hyperref[fig:rules:ocert]{ocert}}{\var{bh}}\var{cs'} } \\~\\ \fun{praosVrfChecks}~\eta_0~\var{pd}~\ActiveSlotCoeff~\var{bhb} } { {\begin{array}{c} \var{pd} \\ \eta_0 \\ \end{array}} \vdash {\left(\begin{array}{c} \var{cs} \\ \eta_v \\ \eta_c \\ \end{array}\right)} \trans{prtcl}{\var{bh}} {\left(\begin{array}{c} \varUpdate{cs'} \\ \varUpdate{\eta_v'} \\ \varUpdate{\eta_c'} \\ \end{array}\right)} } \end{equation} \caption{Protocol rules} \label{fig:rules:prtcl} \end{figure}
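The definitions of $\fun{bleader}$ and $\fun{bnonce}$ above derive two values from the single VRF result by hashing a fixed tag and combining it with the VRF output via $\XOR$. As an informal illustration only (outside the formal specification, and with an arbitrary choice of hash function and output size), the idea can be sketched as follows:

\begin{verbatim}
import hashlib

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def derive(tag, vrf_result):
    # Domain separation: one VRF output yields two derived values,
    # distinguished by the hashed tag that is XOR-ed into it.
    return xor_bytes(hashlib.blake2b(tag, digest_size=32).digest(), vrf_result)

vrf_result = hashlib.blake2b(b"example VRF output", digest_size=32).digest()
leader_value = derive(b"TEST", vrf_result)   # input to the leader check
block_nonce  = derive(b"NONCE", vrf_result)  # contribution to the evolving nonce
\end{verbatim}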
{ "alphanum_fraction": 0.6012990603, "avg_line_length": 37.2989690722, "ext": "tex", "hexsha": "447557f33bc5b9188dd714f819b5b766d0223e0a", "lang": "TeX", "max_forks_count": 29, "max_forks_repo_forks_event_max_datetime": "2022-03-29T12:10:55.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-25T11:13:24.000Z", "max_forks_repo_head_hexsha": "c5f3e9db1c22af5d284885ddb1785f1bd7755c67", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "RoyLL/cardano-ledger", "max_forks_repo_path": "eras/babbage/formal-spec/remove-overlay.tex", "max_issues_count": 545, "max_issues_repo_head_hexsha": "c5f3e9db1c22af5d284885ddb1785f1bd7755c67", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:41:28.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-19T17:23:38.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "RoyLL/cardano-ledger", "max_issues_repo_path": "eras/babbage/formal-spec/remove-overlay.tex", "max_line_length": 440, "max_stars_count": 67, "max_stars_repo_head_hexsha": "c5f3e9db1c22af5d284885ddb1785f1bd7755c67", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "RoyLL/cardano-ledger", "max_stars_repo_path": "eras/babbage/formal-spec/remove-overlay.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T01:57:29.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-20T21:30:17.000Z", "num_tokens": 2524, "size": 7236 }
\section{Extracurricular Activities} \denseouterlist{ \entrymid[\textbullet] {Rock climbing, playing music}{} {} }
{ "alphanum_fraction": 0.7394957983, "avg_line_length": 10.8181818182, "ext": "tex", "hexsha": "c22706df2dfd44c98dd7e897c1a65cbba1a3d185", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-11-24T16:09:30.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-24T16:09:30.000Z", "max_forks_repo_head_hexsha": "ee6d4a53c3d2f7a20dcc6162c1baa35c83434a71", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sdaveas/Simple-CV", "max_forks_repo_path": "sections/extracurricular.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ee6d4a53c3d2f7a20dcc6162c1baa35c83434a71", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sdaveas/Simple-CV", "max_issues_repo_path": "sections/extracurricular.tex", "max_line_length": 36, "max_stars_count": null, "max_stars_repo_head_hexsha": "ee6d4a53c3d2f7a20dcc6162c1baa35c83434a71", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sdaveas/Simple-CV", "max_stars_repo_path": "sections/extracurricular.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 33, "size": 119 }
% $Header: /cvsroot/latex-beamer/latex-beamer/solutions/conference-talks/conference-ornate-20min.en.tex,v 1.6 2004/10/07 20:53:08 tantau Exp $
\documentclass{beamer}

\mode<presentation>
{
  \usetheme{Hawke}
  % or ...

  \setbeamercovered{transparent}
  % or whatever (possibly just delete it)
}

\usepackage[english]{babel}
% or whatever
\usepackage[latin1]{inputenc}
% or whatever
\usepackage{times}
\usepackage[T1]{fontenc}
\usepackage{multimedia}

%%%%%%
% My Commands
%%%%%%
\newcommand{\bb}{{\boldsymbol{b}}}
\newcommand{\bx}{{\boldsymbol{x}}}
\newcommand{\by}{{\boldsymbol{y}}}
\newcommand{\bfm}[1]{{\boldsymbol{#1}}}
%%%%

\title[Lecture 17] % (optional, use only with long paper titles)
{Lecture 17 - Predictor-Corrector Methods}

\author[I. Hawke] % (optional, use only with lots of authors)
{I.~Hawke}

\institute[University of Southampton] % (optional, but mostly needed)
{
  % \inst{1}%
  School of Mathematics, \\ University of Southampton, UK
}

\date[Semester 1] % (optional, should be abbreviation of conference name)
{MATH3018/6141, Semester 1}

\subject{Numerical methods}
% This is only inserted into the PDF information catalog. Can be left
% out.

\pgfdeclareimage[height=0.5cm]{university-logo}{mathematics_7469}
\logo{\pgfuseimage{university-logo}}

\AtBeginSection[]
{
  \begin{frame}<beamer>
    \frametitle{Outline}
    \tableofcontents[currentsection]
  \end{frame}
}

\begin{document}

\begin{frame}
  \titlepage
\end{frame}

\section{Predictor-Corrector methods}
\subsection{Predictor-Corrector methods}

\begin{frame}
  \frametitle{Geometrical interpretation of Euler's method}

  Consider IVPs of the form
  \begin{equation*}
    \by'(x) = \bfm{f}(x, \by(x)).
  \end{equation*}
  The simple (first order accurate, explicit) Euler method is
  \begin{equation*}
    \by_{n+1} = \by_n + h \bfm{f}(x_n, \by_n).
  \end{equation*}
  \pause
  Euler's method is not sufficiently accurate for practical use. The Euler predictor-corrector method is second order.
  \pause
  Use \emph{multiple} approximations to the slope for a more accurate result.
\end{frame}

\section{Runge-Kutta methods}
\subsection{Runge-Kutta methods}

\begin{frame}
  \frametitle{Runge-Kutta methods}

  In a Runge-Kutta method, Taylor's theorem is used from the start to ensure the desired accuracy.
  \pause
  \vspace{1ex}

  Consider a single step from known data $y_n$ at $x_n$. Compute one estimate ($k_1$) for $f(x_n, y_n)$ using the known data.
  \pause
  Then compute $y^{(1)}$ at $x_n + \alpha h$ using $y_n + \beta k_1$.
  \pause
  From this compute another estimate ($k_2$) for $f(x, y)$ at $x_n + \alpha h$. Compute $y^{(2)}$ etc; combine as $y_{n+1} = y_n + a k_1 + b k_2 + \dots$.
  \pause
  \vspace{1ex}

  Such methods are called \emph{multistage}:
  \begin{itemize}
  \item a number of estimates of $f$ are combined to improve accuracy;
  \item only the previous value $y_n$ is required to start the algorithm.
  \end{itemize}
  \vspace{1ex}

  To derive $a, b, \dots, \alpha, \beta, \dots$ expand the algorithm and match to the exact solution using Taylor's theorem, the chain rule and the IVP.
\end{frame}

\begin{frame}
  \frametitle{Example: RK2}

  \begin{overlayarea}{\textwidth}{0.8\textheight}
    \only<1|handout:1>
    {
      For the second order method we have
      \begin{align*}
        y_{n+1} & = y_n + a k_1 + b k_2, \\
        k_1 & = h f(x_n, y_n), \\
        k_2 & = h f(x_n + \alpha h, y_n + \beta k_1).
      \end{align*}
      We have four free parameters $a, b, \alpha, \beta$ to fix.
} \only<2|handout:1> { Taylor expand the definition of $y_{n+1} = y(x_n + h)$: \begin{align*} y_{n+1} & = y_n + h y'_n + \tfrac{h^2}{2} y''_n + \dots \\ & = y_n + h f_n + \tfrac{h^2}{2} \left( f_n \right)' + \dots \\ \intertext{using the original IVP, then use the chain rule:} & = y_n + h f_n + \tfrac{h^2}{2} \left( \partial_x f_n + (\partial_y f)_n f_n \right) + \dots . \end{align*} } \only<3-|handout:2> { Algorithm: \begin{align*} y_{n+1} & = y_n + a k_1 + b k_2 \\ & = y_n + h f_n + \tfrac{h^2}{2} \left( \partial_x f_n + (\partial_y f)_n f_n \right) + \dots \end{align*} Compare against the Taylor expansion of the second order method \begin{align*} y_{n+1} & = y_n + a h f_n + b h f(x_n + \alpha h, y_n + \beta h f_n) \\ & = y_n + h (a + b) f_n + h^2 \left[ (\partial_x f)_n \alpha b + (\partial_y f)_n f_n \beta b \right]. \end{align*} } \only<4|handout:2> { Matching coefficients \begin{equation*} \left\{ \begin{aligned} a + b & = 1 \\ \alpha b & = 1 / 2 \\ \beta b & = 1 / 2 \end{aligned} \right. . \end{equation*} } \end{overlayarea} \end{frame} \begin{frame} \frametitle{Example: RK2 (II)} The RK2 method \begin{align*} y_{n+1} & = y_n + a k_1 + b k_2, \\ k_1 & = h f(x_n, y_n), \\ k_2 & = h f(x_n + \alpha h, y_n + \beta k_1) \end{align*} with coefficients \begin{equation*} \left\{ \begin{aligned} a + b & = 1 \\ \alpha b & = 1 / 2 \\ \beta b & = 1 / 2 \end{aligned} \right. \end{equation*} is not completely specified; there is essentially one free parameter. \pause \vspace{1ex} Not all choices are stable. The classic choice is $a = 1/2 = b$, $\alpha = 1 = \beta$: this is Euler predictor-corrector. \end{frame} \begin{frame} \frametitle{Runge-Kutta 4} The most used is the classic fourth order Runge-Kutta method. This requires fixing eight free parameters by matching to order $h^4$. This gives a family of methods again. \pause \vspace{1ex} Standard choice is \begin{align*} y_{n+1} & = y_n + \tfrac{1}{6} \left( k_1 + 2 (k_2 + k_3) + k_4 \right), \\ k_1 & = h f(x_n, y_n), \\ k_2 & = h f(x_n + h / 2, y_n + k_1 / 2), \\ k_3 & = h f(x_n + h / 2, y_n + k_2 / 2), \\ k_4 & = h f(x_n + h , y_n + k_3 ). \end{align*} \pause \vspace{1ex} The local error term is ${\cal O}(h^5)$ leading to a global error ${\cal O}(h^4)$. \end{frame} \begin{frame} \frametitle{Example} Apply the RK4 method to \begin{equation*} y'(x) = - \sin(x), \quad y(0) = 1. \end{equation*} Integrate to $x = 0.5$. Using $h = 0.1$ gives an error $4.8 \times 10^{-7}\%$; using $h = 0.01$ gives an error of $4.8 \times 10^{-11}\%$, showing fourth order convergence. \pause \vspace{1ex} Compare with an error, for $h=0.01$, of $10^{-3}\%$ for the Euler predictor-corrector method, and $0.24\%$ for the simple Euler method. \vspace{1ex} RK4 more efficient \emph{despite} needing four times the function evaluations of Euler's method. \end{frame} \begin{frame} \frametitle{Example: 2} Consider the system \begin{equation*} \left\{ \begin{aligned} \dot{x} & = -y \\ \dot{y} & = x \end{aligned} \right., \quad x(0) = 1, \, \, y(0) = 0. \end{equation*} In polar coordinates this is $\dot{r} = 0$, $\dot{\phi} = 1$. \begin{columns} \begin{column}{0.5\textwidth} \begin{overlayarea}{\textwidth}{0.4\textheight} \only<2-3|handout:1> { Use the RK4 method with $h=0.1$. At $t=500$ the result matches the correct answer to the eye. } \only<3|handout:1> { \vspace{1ex} The growth of the radius makes the errors visible, but they are still tiny. } \only<4-5|handout:2> { Use the RK4 method with $h=0.01$. At $t=500$ the result matches the correct answer to the eye. 
} \only<5|handout:2> { \vspace{1ex} The growth of the radius remains, but is minute. } \end{overlayarea} \end{column} \begin{column}{0.5\textwidth} \begin{overlayarea}{\textwidth}{0.6\textheight} \only<2|handout:0> { \begin{center} \includegraphics[height=0.5\textheight]{figures/RK4_1} \end{center} } \only<3|handout:1> { \begin{center} \includegraphics[height=0.5\textheight]{figures/RK4_rad1} \end{center} } \only<4|handout:0> { \begin{center} \includegraphics[height=0.5\textheight]{figures/RK4_2} \end{center} } \only<5|handout:2> { \begin{center} \includegraphics[height=0.5\textheight]{figures/RK4_rad2} \end{center} } \end{overlayarea} \end{column} \end{columns} \end{frame} \section{Summary} \subsection{Summary} \begin{frame} \frametitle{Summary} \begin{itemize} \item Euler's method has local error $\propto h^2$, hence global error $\propto h$. \item The Euler predictor-corrector method has local error $\propto h^3$, hence global error $\propto h^2$. \item \emph{Multistage} methods such as Runge-Kutta methods require only one known value $\by_{n}$ to start, and compute (many) estimates of the function $\bfm{f}$ for the algorithm to update $\by_{n+1}$. \item Runge-Kutta methods are the classic multistage methods; the predictor-corrector method is a second order RK method. \item RK4 is useful in practice. \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.5994077834, "avg_line_length": 24.9498680739, "ext": "tex", "hexsha": "45d952e52903ba6cceb0543f084f0e756a17fe82", "lang": "TeX", "max_forks_count": 41, "max_forks_repo_forks_event_max_datetime": "2022-02-15T09:59:39.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-05T13:30:47.000Z", "max_forks_repo_head_hexsha": "03cb91114b3f5eb1b56916920ad180d371fe5283", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "josh-gree/NumericalMethods", "max_forks_repo_path": "Lectures/tex/Lecture17_EPC_RK.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "03cb91114b3f5eb1b56916920ad180d371fe5283", "max_issues_repo_issues_event_max_datetime": "2018-01-23T21:40:42.000Z", "max_issues_repo_issues_event_min_datetime": "2017-05-24T19:49:52.000Z", "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "josh-gree/NumericalMethods", "max_issues_repo_path": "Lectures/tex/Lecture17_EPC_RK.tex", "max_line_length": 142, "max_stars_count": 76, "max_stars_repo_head_hexsha": "03cb91114b3f5eb1b56916920ad180d371fe5283", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "josh-gree/NumericalMethods", "max_stars_repo_path": "Lectures/tex/Lecture17_EPC_RK.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-26T15:34:11.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-12T19:51:52.000Z", "num_tokens": 3213, "size": 9456 }
%%% AAAI \documentclass[letterpaper]{article} \usepackage{aaai} \usepackage{times} \usepackage{helvet} \usepackage{courier} \usepackage{url} \usepackage{booktabs} \usepackage{graphics} \begin{document} \title{Width and Inference Based Planners: $\SR$, $\textit{BFS(f)}$, and $\textit{PROBE}$} % \author{Name1 Surname1 \and Name2 Surname2 \and Name3 Surname3\institute{University of Leipzig, % Germany, email: [email protected]} } %%% AAAI \author{Nir Lipovetzky \\ University of Melbourne \\ Melbourne, Australia\\ {\normalsize\url{@unimelb.edu.au}} \And Miquel Ramirez \\ RMIT University \\ Melbourne, Australia\\ {\normalsize\url{@rmit.edu.au}} \And Christian Muise \\ University of Melbourne \\ Melbourne, Australia\\ {\normalsize\url{@unimelb.edu.au}} \And Hector Geffner \\ ICREA \& Universitat Pompeu Fabra \\ Barcelona, SPAIN \\ {\normalsize\url{@upf.edu}}\thanks{firstname.lastname} } \newcommand{\tuple}[1]{{\langle #1\rangle}} \newcommand{\triple}[1]{{\langle #1\rangle}} \newcommand{\pair}[1]{{\langle #1\rangle}} \newcommand{\Omit}[1]{} \newcommand{\OmitEcai}[1]{} \newcommand{\eqdef}{\stackrel{\hbox{\tiny{def}}}{=}} % \newcommand{\IR}{{\textit{IR}}} % \newcommand{\SR}{{\textit{SR}}} \newcommand{\IR}{{\textit{IW}}} \newcommand{\SR}{{\textit{SIW}}} \newcommand{\ID}{{\textit{ID}}} \newcommand{\BRFS}{{\textit{BrFS}}} \newtheorem{theorem}{Theorem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \maketitle % AAAI \section{Introduction} \input{introduction.tex} \section{$\SR$: Iterated Width Search} \input{siw.tex} \section{$\textit{BFS(f)}$: Novelty Best-First Search} \input{novelty.tex} \section{$\textit{PROBE}$} \input{probe.tex} \section{Implementation Notes} \input{aptk.tex} %% \section{Experiments} %% \input{experiments.tex} %% \section{Discussion} %% \input{discussion.tex} \subsubsection{Acknowledgments} This work was partly supported by Australian Research Council Linkage grant LP11010015, and Discovery Projects DP120100332 and DP130102825. \bibliographystyle{aaai} \bibliography{thesisbib,crossref} \end{document}
{ "alphanum_fraction": 0.7024793388, "avg_line_length": 21.8952380952, "ext": "tex", "hexsha": "f8522e7181385aec5467610474d5fcaa8225a0e3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "dff3e635102bf351906807c5181113fbf4b67083", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "miquelramirez/aamas18-planning-for-transparency", "max_forks_repo_path": "planners/lapkt-public/documentation/ipc-2014-paper/paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dff3e635102bf351906807c5181113fbf4b67083", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "miquelramirez/aamas18-planning-for-transparency", "max_issues_repo_path": "planners/lapkt-public/documentation/ipc-2014-paper/paper.tex", "max_line_length": 139, "max_stars_count": null, "max_stars_repo_head_hexsha": "dff3e635102bf351906807c5181113fbf4b67083", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "miquelramirez/aamas18-planning-for-transparency", "max_stars_repo_path": "planners/lapkt-public/documentation/ipc-2014-paper/paper.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 722, "size": 2299 }
\subsection{Boundary Points, Open \& Closed Sets}

\noindent
Given some set $\Omega \subset \mathbb{R}^n$, $x$ is an interior point of $\Omega$ if there exists some $\delta > 0$ such that $N(x, \delta) \subset \Omega$. That is, $x$ is an interior point of $\Omega$ if you can draw a ball of non-zero radius around $x$ (a circle, in two dimensions) such that the entire ball lies inside $\Omega$. All points that are not interior points are boundary points. Formally, $x$ is a boundary point of $\Omega$ if for all $\delta$, $N(x,\delta) \not\subset \Omega$. Using our definitions of interior and boundary points, we can define an open set as one that doesn't contain any of its boundary points and a closed set as one that contains all of its boundary points. Note that a set that contains some, but not all, of its boundary points is neither open nor closed.
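
For a concrete one-dimensional example, consider the half-open interval $\Omega = [0, 1) \subset \mathbb{R}$. Every $x$ with $0 < x < 1$ is an interior point, since $N(x, \delta) \subset \Omega$ for $\delta = \min(x, 1 - x)$. The point $0$ is a boundary point: for every $\delta$, the neighborhood $N(0, \delta) = (-\delta, \delta)$ contains negative numbers, so it is not contained in $\Omega$. The point $1$ is likewise a boundary point of $\Omega$. Since $\Omega$ contains the boundary point $0$ but not the boundary point $1$, it is neither open nor closed; by contrast, $(0, 1)$ is open and $[0, 1]$ is closed.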
{ "alphanum_fraction": 0.7384428224, "avg_line_length": 117.4285714286, "ext": "tex", "hexsha": "cf94f6ed14585b0c29544f521f7be075fc7baf5e", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_path": "multiCalc/differentialMultivariableCalculus/boundaryPointsOpenClosedSets.tex", "max_issues_count": 26, "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_path": "multiCalc/differentialMultivariableCalculus/boundaryPointsOpenClosedSets.tex", "max_line_length": 287, "max_stars_count": 39, "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_path": "multiCalc/differentialMultivariableCalculus/boundaryPointsOpenClosedSets.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "num_tokens": 208, "size": 822 }
\documentclass[11pt, twoside, pdftex]{article}

% This includes all the settings that we should use for the document
\newcommand{\PDFTitle}{SignalTap with VHDL Designs}
\newcommand{\commonPath}{../../../Common}
\input{\commonPath/Docs/defaulttext.tex}
\input{\commonPath/Docs/preamble.tex}
%%%%%%%%%%%%%%%%%%%%%%%%%

% Add title
\newcommand{\doctitle}{SignalTap \\ with VHDL Designs}
\newcommand{\dochead}{SignalTap with VHDL Designs}
% Usually no need to change these two lines
\title{\fontfamily{phv}\selectfont{\doctitle} }
\chead{ \small{\textsc{\bfseries \dochead} } }

% Customizations
%%%%%%%%%%%%%%%%%%%%%%%%%

% Allows multiple figures per page
\renewcommand\floatpagefraction{.9}
\renewcommand\topfraction{.9}
\renewcommand\bottomfraction{.9}
\renewcommand\textfraction{.1}
\setcounter{totalnumber}{50}
\setcounter{topnumber}{50}
\setcounter{bottomnumber}{50}
\raggedbottom

%%%%%%%%%%%%%%%%%%
%%% DOCUMENT START
%\begin{document}
\begin{document}

\begin{table}
\centering
\begin{tabular}{p{5cm}p{4cm}}
\hspace{-3cm}
&
\raisebox{1\height}{\parbox[h]{0.5\textwidth}{\Large\fontfamily{phv}\selectfont{\textsf{\doctitle}}}}
\end{tabular}
\label{tab:logo}
\end{table}

\colorbox[rgb]{0,0.384,0.816}{\parbox[h]{\textwidth}{\color{white}\textsf{\textit{\textBar}}}}

\thispagestyle{plain}

\section{Introduction}
This tutorial explains how to use the SignalTap feature within the Intel\textsuperscript{\textregistered} Quartus\textsuperscript{\textregistered} Prime software. The SignalTap Embedded Logic Analyzer is a system-level debugging tool that captures and displays signals in circuits designed for implementation in Intel's FPGAs.
\\
\\
{\bf Contents}:
\begin{itemize}
\item Example Circuit
\item Using the SignalTap Logic Analyzer
\item Probing the Design Using SignalTap
\item Advanced Trigger Options
\item Sample Depth and Buffer Acquisition Modes
\end{itemize}
\clearpage
\newpage

\section{Background}
Quartus\textsuperscript{\textregistered} Prime software includes a system-level debugging tool called SignalTap that can be used to capture and display signals in real time in any FPGA design.

During this tutorial, the reader will learn about:
\begin{itemize}
\item Probing signals using the SignalTap software
\item Setting up triggers to specify when data is to be captured
\end{itemize}

This tutorial is aimed at the reader who wishes to probe signals in circuits defined using the VHDL hardware description language. An equivalent tutorial is available for the reader who prefers the Verilog language.

\noindent
The reader is expected to have access to a computer that has Quartus Prime software installed. The detailed examples in the tutorial were obtained using Quartus Prime version \versnum, but other versions of the software can also be used.
\\
\\
\noindent
{\bf Note}: There are no red LEDs on a DE0-Nano board. All procedures using red LEDs in this tutorial are to be completed on the DE0-Nano board using green LEDs instead. If you are doing this tutorial on a DE0-Nano board, replace {\it LEDR} with {\it LED} below. Additionally, the DE0-Nano is limited to 2 keys. If you are doing this tutorial on a DE0-Nano, replace all occurrences of {\it [3:0]} with {\it [1:0]} below.

\section{Example Circuit}
As an example, we will use the key circuit implemented in VHDL in Figure~\ref{fig:1}. This circuit simply connects the first 4 keys on a DE-series board to the first 4 red LEDs on the board.
It does so at the positive edge of the clock (CLOCK\_50) by loading the values of the keys into a register whose output is connected directly to the red LEDs.

\begin{figure}[H]
\begin{lstlisting}[language=VHDL, xleftmargin=1cm]
LIBRARY ieee;
USE ieee.std_logic_1164.all;

ENTITY keys IS
   PORT ( CLOCK_50 : IN  STD_LOGIC;
          KEY      : IN  STD_LOGIC_VECTOR(3 DOWNTO 0);
          LEDR     : OUT STD_LOGIC_VECTOR(3 DOWNTO 0)); -- red LEDs
END keys;

ARCHITECTURE Behavior OF keys IS
BEGIN
   PROCESS (CLOCK_50)
   BEGIN
      IF(RISING_EDGE(CLOCK_50)) THEN
         LEDR <= KEY;
      END IF;
   END PROCESS;
END Behavior;
\end{lstlisting}
\caption{The key circuit implemented in VHDL code}
\label{fig:1}
\end{figure}

Implement this circuit as follows:
\begin{itemize}
\item Create a project {\it keys}.
\item Include a file {\it keys.vhd}, which corresponds to Figure~\ref{fig:1}, in the project.
\item Select the correct device that is associated with the DE-series board. A list of device names for the DE-series boards can be found in Table~\ref{tab:device}.
\item Import the relevant qsf file. For example, for a DE1-SoC board, this file is called {\it DE1\_SoC.qsf} and can be imported by clicking {\sf Assignments > Import Assignments}. For convenience, the qsf files are hosted on the Intel FPGA University Program's \href{https://www.altera.com/support/training/university/boards.html}{website}. Simply navigate to the materials section of your DE-series board's page. The node names used in the sample circuit correspond to the names used in these files.
\item Compile the design.
\end{itemize}

\begin{table}[H]
\begin{center}
\begin{tabular}{| c | c |}
\hline
Board & Device Name \\
\hline
DE0-CV & Cyclone\textsuperscript{\textregistered} V 5CEBA4F23C7 \\
\hline
DE0-Nano & Cyclone\textsuperscript{\textregistered} IVE EP4CE22F17C6 \\
\hline
DE0-Nano-SoC & Cyclone\textsuperscript{\textregistered} V SoC 5CSEMA4U23C6\\
\hline
DE1-SoC & Cyclone\textsuperscript{\textregistered} V SoC 5CSEMA5F31C6 \\
\hline
DE2-115 & Cyclone\textsuperscript{\textregistered} IVE EP4CE115F29C7 \\
\hline
DE10-Lite & Max\textsuperscript{\textregistered} 10 10M50DAF484C7G \\
\hline
DE10-Standard & Cyclone\textsuperscript{\textregistered} V SoC 5CSXFC6D6F31C6 \\
\hline
DE10-Nano & Cyclone\textsuperscript{\textregistered} V SE 5CSEBA6U23I7 \\
\hline
\end{tabular}
\caption{DE-series FPGA device names}
\label{tab:device}
\end{center}
\end{table}

\section{Using the SignalTap software}

\noindent
In the first part of the tutorial, we are going to set up the SignalTap Logic Analyzer to probe the values of the 4 keys. We will also set up the circuit to trigger when the first key (KEY[0]) is low.

\begin{enumerate}
\item Open the SignalTap window by selecting {\sf File $>$ New}, which gives the window shown in Figure~\ref{fig:3}. Choose {\sf SignalTap Logic Analyzer File} and click {\sf OK}.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure3.png}
\caption{Need to prepare a new file.}
\label{fig:3}
\end{center}
\end{figure}

\item The SignalTap window with the {\sf Setup} tab selected is depicted in Figure~\ref{fig:4}. Save the file under the name {\it keys.stp}. In the dialog box that follows (Figure~\ref{fig:5}), click {\sf OK}. For the dialog "Do you want to enable SignalTap file 'keys.stp' for the current project?" click {\sf Yes} (Figure~\ref{fig:6}). The file {\it keys.stp} is now the SignalTap file associated with the project.

Note: If you want to disable this file from the project, or to disable SignalTap from the project, go to {\sf Assignments > Settings}.
In the category list, select {\sf SignalTap Logic Analyzer}, bringing up the window in Figure~\ref{fig:7}. To turn off the analyzer, uncheck {\sf Enable SignalTap Logic Analyzer}. It is possible to have multiple SignalTap files for a given project, but only one of them can be enabled at a time. Having multiple SignalTap files might be useful if the project is very large and different sections of the project need to be probed. To create a new SignalTap file for a project, simply follow Steps 1 and 2 again and give the new file a different name. To change the SignalTap file associated with the project, in the {\sf SignalTap File name} box browse for the file wanted, click {\sf Open}, and then click {\sf OK}. For this tutorial we want to leave SignalTap enabled and we want the SignalTap File name to be {\it keys.stp}. Make sure this is the case and click {\sf OK} to leave the settings window. \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{figures/figure4.png} \caption{The SignalTap window.} \label{fig:4} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure5.png} \caption{Click {\sf OK} to this dialog.} \label{fig:5} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure6.png} \caption{Click {\sf Yes} to this dialog.} \label{fig:6} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure7.png} \caption{The SignalTap Settings window.} \label{fig:7} \end{center} \end{figure} \item We now need to add the nodes in the project that we wish to probe. In the Setup tab of the SignalTap window, double-click in the area labeled {\sf Double-click to add nodes}, bringing up the Node Finder window shown in Figure~\ref{fig:8}. Click on \includegraphics[scale=0.7]{figures/icon3.png} or \includegraphics[scale=0.7]{figures/icon6.png} to show or hide more search options. For the {\sf Filter} field, select {\sf SignalTap: pre-synthesis}, and for the {\sf Look in} field select {\sf |keys|}. Click {\sf List}. This will now display all the nodes that can be probed in the project. Highlight KEY[0] to KEY[3], and then click the \includegraphics[scale=0.90]{figures/icon4.png} button to add the keys to be probed. Click {\sf Insert} to insert the selected nodes, then {\sf Close} to close the Node Finder window. \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{figures/figure8.png} \caption{Adding nodes in the Node Finder window on a DE-series board.} \label{fig:8} \end{center} \end{figure} \item Before the SignalTap analyzer can work, we need to specify what clock is going to run the SignalTap module that will be instantiated within our design. To do this, in the Clock box of the Signal Configuration pane of the SignalTap window, click \includegraphics[scale=0.7]{figures/icon5.png}, which will again bring up the Node Finder window. Select {\sf List} to display all the nodes that can be added as the clock, and then double-click CLOCK\_50, which results in the image shown in Figure~\ref{fig:9}. Click {\sf OK}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{figures/figure9.png} \caption{Setting CLOCK\_50 as the clock for the SignalTap instance on a DE-series board.} \label{fig:9} \end{center} \end{figure} \item With the {\sf Setup} tab of the SignalTap window selected, select the checkbox in the Trigger Conditions column. In the dropdown menu at the top of this column, select {\sf Basic AND}. 
Right-click on the Trigger Conditions cell corresponding to the node KEY[0] and select {\sf Low}. Now, the trigger for running the Logic Analyzer will be when the first key on the DE-series board is pressed, as shown in Figure~\ref{fig:10}. Note that you can right-click on the Trigger Conditions cell of any of the nodes being probed and select the trigger condition from a number of choices. The actual trigger condition will be true when the logical AND of all these conditions is satisfied. For now, just keep the trigger condition as KEY[0] set to low and the others set to their default value, {\sf Don't Care}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{figures/figure10.png} \caption{Setting the trigger conditions.} \label{fig:10} \end{center} \end{figure} \item For SignalTap to work, we need to properly set up the hardware. First, make sure the DE-series board is plugged in and turned on. In the Hardware section of the SignalTap window, located in the top right corner, click {\sf Setup...}, bringing up the window in Figure~\ref{fig:11}. Double click DE-SoC in the Available Hardware Items menu, then click {\sf Close}. If you are using a DE0-CV, DE0-Nano, DE2-115, or the DE10-Lite, you will select USB-Blaster from the Available Hardware Items menu. \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{figures/figure11.png} \caption{Setting up hardware.} \label{fig:11} \end{center} \end{figure} \item In the Device section of the main SignalTap window, select the device that corresponds to the FPGA on your DE-series board. Do not select the {\sf SOCVHPS} device as this corresponds to the ARM Cortex-A9* processor. If you are using the DE0-CV, DE0-Nano, DE2-115, or DE10-Lite there should be only one device that is selectable. \item The last step in instantiating SignalTap in your design is to compile the design. In the main Quartus Prime window, select {\sf Processing > Start Compilation} and indicate that you want to save the changes to the file by clicking {\sf Yes}. After compilation, go to {\sf Tools > Programmer} and load the project onto the DE-series board. \end{enumerate} \section{Probing the Design Using SignalTap} Now that the project with SignalTap instantiated has been loaded onto the DE-series board, we can probe the nodes as we would with an external logic analyzer. \begin{enumerate} \item On the DE-series board, first ensure that none of the keys (0-3) is being pressed. We will try to probe the values of these keys once key 0 is pressed. \item In the SignalTap window, select {\sf Processing > Run Analysis} or click the \includegraphics[scale=0.7]{figures/icon1.png} icon. You should get a screen similar to Figure~\ref{fig:12}. Note that the status column of the SignalTap Instance Manager pane says "Waiting for trigger." This is because the trigger condition (Key 0 being low) has not yet been met. \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure12.png} \caption{SignalTap window on a DE-series board after Run Analysis has been clicked.} \label{fig:12} \end{center} \end{figure} \item Now, to observe the trigger feature of the Logic Analyzer, click on the {\sf Data} tab of the SignalTap Window and then press and hold Key 0 on the DE-series board. The data window of the SignalTap window should display the image in Figure~\ref{fig:13}. Note that this window shows the data levels of the 4 nodes before and after the trigger condition was met. As an exercise, unpress Key 0 then click {\sf Run Analysis} again. 
Hold down any of Keys 1-3, then press Key 0. When Key 0 is pressed, you will see that the values of Keys 1-3 displayed on the SignalTap Logic Analyzer match what is being pressed on the board.
\end{enumerate}

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure13.png}
\caption{Graphical display of values after trigger condition is met.}
\label{fig:13}
\end{center}
\end{figure}

\section{Advanced Trigger Options}
Sometimes in a design you may want to have a more complicated triggering condition than SignalTap's basic triggering controls allow. The following section describes how to have multiple trigger levels.
% as well as how to create advanced triggering options.

\subsection{Multiple Trigger Levels}
In this section, we will set up the analyzer to trigger when there is a positive edge from Key 0, Key 1, Key 2, and then Key 3, in that order.

\begin{enumerate}
\item Click the {\sf Setup} tab of the SignalTap window.
\item In the Signal Configuration pane, select 4 from the Trigger Conditions dropdown menu as in Figure~\ref{fig:14} (you may have to scroll down in the Signal Configuration pane to see this menu). This modifies the node list window by creating three new Trigger Conditions columns.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure14.png}
\caption{Set trigger conditions to 4.}
\label{fig:14}
\end{center}
\end{figure}

\item Right-click the Trigger Condition 1 cell for KEY[0], and select {\sf Rising Edge}. Do the same for the Trigger Condition 2 cell for KEY[1], Trigger Condition 3 for KEY[2], and Trigger Condition 4 for KEY[3]. You should end up with a window that looks like Figure~\ref{fig:15}.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure15.png}
\caption{Multiple trigger levels set.}
\label{fig:15}
\end{center}
\end{figure}

\item Now, recompile the design and load it onto the DE-series board again.
\item Go back to the SignalTap window, click on the Data tab, and then click {\sf Processing > Run Analysis}. Note that the window will say "Waiting for trigger" until the appropriate trigger condition is met. Then, in sequence, press and release keys 0, 1, 2, and then 3. After this has been done, you will see the values of all the keys displayed as in Figure~\ref{fig:16}. Experiment by following the procedure outlined in this section to set up other trigger conditions and use the DE-series board to test these trigger conditions. If you want to continuously probe the analyzer, instead of clicking "Run Analysis," click "Autorun Analysis" which is the icon right next to the "Run Analysis" icon. If you do this, every time the trigger condition is met the value in the display will be updated. You do not have to re-select "Run Analysis." To stop the "Autorun Analysis" function, click the \includegraphics[scale=0.7]{figures/icon2.png} icon.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure16.png}
\caption{Logic Analyzer display when all four trigger conditions have been met.}
\label{fig:16}
\end{center}
\end{figure}

\end{enumerate}

\subsection{Advanced Trigger Conditions}
In this section we will learn how to create advanced trigger conditions. Our trigger condition will be whenever any one of the first 3 keys has a positive or negative edge. This means that the Logic Analyzer will update its display every time one of these inputs changes. Note that we could have any logical function of the nodes being probed to trigger the analyzer. This is just an example.
After you implement this in the next few steps, experiment with your own advanced triggers. \begin{enumerate} \item Have the {\it keys} project opened and compiled from the previous examples in this tutorial. \item Open the SignalTap window and select the Setup tab. In the Signal Configuration pane make sure that the number of Trigger Conditions is set to 1. \item In the Trigger Conditions column of the node list, make sure the box is checked and select {\sf Advanced} from the dropdown menu as in Figure~\ref{fig:17}. This will immediately bring up the window in Figure~\ref{fig:18}. This window allows you to create a logic circuit using the various nodes that you are probing with SignalTap. \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure17.png} \caption{Select Advanced from the Trigger Level dropdown menu.} \label{fig:17} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure18.png} \caption{The Advanced Trigger editing window.} \label{fig:18} \end{center} \end{figure} \item In the node list section of this window, highlight the 3 nodes KEY[0] to KEY[2], and click and drag them into the white space of the Advanced trigger window, resulting in Figure~\ref{fig:19}. Note that you can also drag and drop each node individually. \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure19.png} \caption{The three input nodes of interest dragged into the Advanced Trigger Editing Window.} \label{fig:19} \end{center} \end{figure} \item We now need to add the necessary logical operators to our circuit. We will need an OR gate as well as three edge level detectors. To access the OR gate, expand {\sf Logical Operators} in the Object Library and select {\sf Logical Or}, as in Figure~\ref{fig:20}. Then drag and drop the operator into the editing window. \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure20.png} \caption{Select the Logical Or operator from the Object Library window and drag this into the editing window.} \label{fig:20} \end{center} \end{figure} \item In the object library click {\sf Edge and Level Detector} and drag this into the editing window. Do this three times and then arrange the circuit as in Figure~\ref{fig:21}. The three inputs should each be connected to the input of an edge and level detector and the output of each of these detectors should be connected to the OR gate. The output of the OR gate should be connected to the output pin already in the editing window. \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/figure21.png} \caption{Arrange the elements to create a circuit that looks like this.} \label{fig:21} \end{center} \end{figure} \item We now need to set each edge and level detector to sense either a falling edge or a rising edge. Double click one of the edge and level detectors, bringing up the window in Figure~\ref{fig:22}. Type E in the setting box and then click {\sf OK}. This will mean that the detector will output 1 whenever there is either a falling edge or a rising edge of its input. Repeat this step for the two remaining edge and level detectors. \begin{figure}[H] \begin{center} \includegraphics[scale=0.60]{figures/figure22.png} \caption{Type E in the setting box so that the function triggers on both rising and falling edges.} \label{fig:22} \end{center} \end{figure} \item To test this Advanced trigger condition, compile the designed circuit again and load it onto the DE-series board. 
Then run SignalTap as described in the previous section. You should see that the analyzer triggers every time you change one of the first three keys on the board.
\end{enumerate}

\section{Sample Depth and Buffer Acquisition Modes}
In this section, we will learn how to set the Sample Depth of our analyzer and about the two buffer acquisition modes. To do this, we will use the previous project and use segmented buffering. Segmented buffering allows us to divide the acquisition buffer into a number of separate, evenly sized segments. We will create a sample depth of 128 samples and divide this into four 32-sample segments. This will allow us to capture 4 distinct events that occur around the time of our trigger.

\begin{enumerate}
\item Change the trigger condition back to Basic AND and have only one trigger condition. Set the trigger condition to be the falling edge of KEY[0].
\item In the Signal Configuration pane of the SignalTap window, in the {\sf Sample depth} dropdown menu of the Data pane select {\sf 128}. This option allows you to specify how many samples will be taken around the triggers in your design. If you require many samples to debug your design, select a larger sample depth. Note, however, that if the sample depth selected is too large, there might not be enough room on the board to hold your design and the design will not compile. If this happens, try reducing the sample depth.
\item In the Signal Configuration pane of the SignalTap window, in the Data section of the pane check {\sf Segmented}. In the dropdown menu beside Segmented, select {\sf 4 32 sample segments}. This will result in a pane that looks like Figure~\ref{fig:23}.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure23.png}
\caption{Select Segmented buffer acquisition mode with 4 32 sample segments.}
\label{fig:23}
\end{center}
\end{figure}

\item Recompile and load the designed circuit onto the DE-series board. Now, we will be able to probe the design using the Segmented Acquisition mode.
\item Go back to the SignalTap window and click {\sf Processing > Run Analysis}. Now, press and release KEY[0], and in between clicks change the values of the other 3 keys. After you have done this 4 times, the values in the buffer will be displayed in the data window, and this will display the values that the 4 keys had at around each trigger. A possible waveform is presented in Figure~\ref{fig:24}. This resulted from the user pressing and holding one more key between each click of KEY[0].
\end{enumerate}

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure24.png}
\caption{Possible waveforms that could result when using the Segmented Acquisition mode.}
\label{fig:24}
\end{center}
\end{figure}

\subsection{Use of Keep Attribute}
Sometimes a design you create will have wires in it that the Quartus compiler will optimize away.
A very simple example is the VHDL code below:

\begin{figure}[H]
\begin{lstlisting}[language = VHDL, xleftmargin=2cm]
LIBRARY ieee;
USE ieee.std_logic_1164.all;

ENTITY threeInputAnd IS
   PORT ( CLOCK_50 : IN  STD_LOGIC;
          SW       : IN  STD_LOGIC_VECTOR(2 DOWNTO 0);
          LEDR     : OUT STD_LOGIC_VECTOR(0 DOWNTO 0));
END threeInputAnd;

ARCHITECTURE Behavior OF threeInputAnd IS
   SIGNAL ab, abc : STD_LOGIC;
   ATTRIBUTE keep : BOOLEAN;
   ATTRIBUTE keep OF ab, abc : SIGNAL IS true;
BEGIN
   ab  <= SW(0) AND SW(1);
   abc <= ab AND SW(2);
   PROCESS (CLOCK_50)
   BEGIN
      IF (RISING_EDGE(CLOCK_50)) THEN
         LEDR(0) <= abc;
      END IF;
   END PROCESS;
END Behavior;
\end{lstlisting}
\caption{Using the 'keep' attribute in Quartus Prime.}
\label{fig:25}
\end{figure}

A diagram of this circuit is shown in Figure~\ref{fig:26}. The triangular symbols labeled {\bf ab} and {\bf abc} are buffers inserted by Quartus. They do not modify the signals passing through them.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{figures/figure26.png}
\caption{The circuit implemented by the code in Figure~\ref{fig:25}.}
\label{fig:26}
\end{center}
\end{figure}

We wish to instantiate a SignalTap module that will probe the values of the inputs SW[2:0] and the output LEDR[0]. We also want to probe the internal wire {\bf ab}. However, normally when this VHDL code is compiled (without the two ATTRIBUTE lines), the wire {\bf ab} is optimized away into one logic element, as in Figure~\ref{fig:27}.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.6]{figures/figure27.png}
\caption{The same circuit without the 'keep' attribute.}
\label{fig:27}
\end{center}
\end{figure}

If you wish to probe this internal wire, however, you will have to instruct Quartus that you do not want this wire to be optimized away. To do so, first an attribute called 'keep' of type BOOLEAN needs to be declared. This is what the first line ({\it ATTRIBUTE keep : BOOLEAN;}) is for. Then, the attribute needs to be applied to the desired signals (in this case, signals {\bf ab} and {\bf abc}). This is achieved with the second line ({\it ATTRIBUTE keep OF ab, abc : SIGNAL IS true; }). Figure~\ref{fig:25} already contains these lines. We will now demonstrate how this wire can be probed:

\begin{enumerate}
\item Create a new Quartus project threeInputAnd and copy the VHDL code from Figure~\ref{fig:25}. Compile the project.
\item Go to {\sf Tools > SignalTap Logic Analyzer}, and then in the Setup pane of the SignalTap window, right-click and choose {\sf Add Nodes}.
\item For the {\sf Filter} field, select {\sf SignalTap: pre-synthesis}. Select {\sf |threeInputAnd|} in the {\sf Look in} drop-down menu and click the {\sf List} button. Move the nodes {\bf ab}, {\bf SW[0]}, {\bf SW[1]}, {\bf SW[2]}, and {\bf LEDR[0]} into the Selected Nodes list and then click {\sf OK}.
\item In the Signal Configuration pane, select {\bf CLOCK\_50} as the clock signal.
\item Set a Trigger Condition to trigger when {\bf ab} becomes high.
\item Import the relevant pin assignment file for the DE-series board (or assign the pins manually, as described in Section 7 of the Quartus Prime Introduction tutorials). For a DE2-115 board, this file is named {\it DE2\_115.qsf}.
\item Compile the project again.
\item Go to {\sf Tools > Programmer} and load the circuit onto the DE-series board.
\item Open the SignalTap window again, and select the Data tab. Set all the switches on the DE-series board to the low position. Then, start the analysis by selecting {\sf Processing > Run Analysis}.
\item Set the first two switches to the high position. The Trigger Condition should be satisfied. \end{enumerate} % Copyright and Trademark \input{\commonPath/Docs/copyright.tex} \end{document}
{ "alphanum_fraction": 0.7454825606, "avg_line_length": 47.0444810544, "ext": "tex", "hexsha": "6d8362274c542fc3c96217deffca3d18d57449ff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fpgacademy/Tutorials", "max_forks_repo_path": "Hardware_Design/SignalTap/VHDL/SignalTap_VHDL.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fpgacademy/Tutorials", "max_issues_repo_path": "Hardware_Design/SignalTap/VHDL/SignalTap_VHDL.tex", "max_line_length": 501, "max_stars_count": null, "max_stars_repo_head_hexsha": "d2c352472bc3dfab88a3497efd259f5fabbf3952", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fpgacademy/Tutorials", "max_stars_repo_path": "Hardware_Design/SignalTap/VHDL/SignalTap_VHDL.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7546, "size": 28556 }
\chapterimage{chapter_head_2.pdf}

\chapter{Error-handling}

\section{Use Unchecked Exceptions}\index{Error-handling!Use Unchecked Exceptions}

Although checked exceptions bring benefits, they come at a price. The price of checked exceptions is an Open/Closed Principle violation. If you throw a checked exception from a method in your code and the \inlinecode[green]{catch} is three levels above, you must declare that exception in the signature of each method between you and the \inlinecode[green]{catch}. This means that a change at a low level of the software can force signature changes on many higher levels. The changed modules must be rebuilt and redeployed, even though nothing they care about changed.

\section{Define the Normal Flow}\index{Error-handling!Define the Normal Flow}

\inlinecode[green]{try-catch} is great, but sometimes it looks awkward if the catch block does some ``exceptional'' processing other than logging and rethrowing the exception. For example:

\begin{tcolorbox}[breakable, colback=red!10!white, colframe=red!85!black, sidebyside, righthand width = 3cm, tikz lower]
\begin{lstlisting}[language = java, basicstyle=\small]
try {
    MealExpenses expenses =
        expenseReportDAO.getMeals(employee.getID());
    m_total += expenses.getTotal();
} catch(MealExpensesNotFound e) {
    m_total += getMealPerDiem();
}
\end{lstlisting}
\tcblower
\path[fill = yellow, draw = yellow!75!red] (0, 0) circle (1cm);
\fill[red] (45:5mm) circle (1mm);
\fill[red] (135:5mm) circle (1mm);
\draw[line width=1mm,red] (230:6mm) arc (145:35:5mm);
\end{tcolorbox}

What's awkward about this is that it wraps a flow of normal logic inside the catch block. The exception clutters the logic. Wouldn't it be better if we didn't have to deal with the special case? If we didn't, our code would look much simpler. It would look like this:

\begin{tcolorbox}[breakable, colback=green!10!white, colframe=green!85!black, sidebyside, righthand width = 3cm, tikz lower]
\begin{lstlisting}[language = java, basicstyle=\small]
MealExpenses expenses =
    expenseReportDAO.getMeals(employee.getID());
m_total += expenses.getTotal();
\end{lstlisting}
\tcblower
\path[fill = yellow, draw = yellow!75!red] (0, 0) circle (1cm);
\fill[red] (45:5mm) circle (1mm);
\fill[red] (135:5mm) circle (1mm);
\draw[line width=1mm,red] (215:5mm) arc (215:325:5mm);
\end{tcolorbox}

Can we make the code that simple? It turns out that we can. We can change the ExpenseReportDAO so that it always returns a MealExpenses object. If there are no meal expenses, it returns a MealExpenses object that returns the per diem as its total:

\begin{tcolorbox}[breakable, colback=green!10!white, colframe=green!85!black, sidebyside, righthand width = 3cm, tikz lower]
\begin{lstlisting}[language = java, basicstyle=\small]
public class PerDiemMealExpenses implements MealExpenses {
    public int getTotal() {
        // return the per diem default
    }
}
\end{lstlisting}
\tcblower
\path[fill = yellow, draw = yellow!75!red] (0, 0) circle (1cm);
\fill[red] (45:5mm) circle (1mm);
\fill[red] (135:5mm) circle (1mm);
\draw[line width=1mm,red] (215:5mm) arc (215:325:5mm);
\end{tcolorbox}

This is called the \textbf{SPECIAL CASE PATTERN}. You create a class or configure an object so that it handles a special case for you. When you do, the client code doesn't have to deal with exceptional behavior. That behavior is encapsulated in the special case object.
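
To make the pattern concrete, here is one possible shape for that DAO, sketched as a self-contained fragment. The per-diem amount, the in-memory map, and the integer employee id are invented for illustration; only the type and method names come from the example above.

\begin{tcolorbox}[breakable, colback=green!10!white, colframe=green!85!black, sidebyside, righthand width = 3cm, tikz lower]
\begin{lstlisting}[language = java, basicstyle=\small]
import java.util.HashMap;
import java.util.Map;

interface MealExpenses {
    int getTotal();
}

// Special case: stands in for "no meal expenses were filed".
class PerDiemMealExpenses implements MealExpenses {
    public int getTotal() {
        return 5000; // assumed per-diem default, in cents
    }
}

class ExpenseReportDAO {
    private final Map<Integer, MealExpenses> filed = new HashMap<>();

    // Always returns a MealExpenses object: absence is mapped to
    // the special case, so callers never see null or an exception.
    MealExpenses getMeals(int employeeId) {
        return filed.getOrDefault(employeeId, new PerDiemMealExpenses());
    }
}
\end{lstlisting}
\tcblower
\path[fill = yellow, draw = yellow!75!red] (0, 0) circle (1cm);
\fill[red] (45:5mm) circle (1mm);
\fill[red] (135:5mm) circle (1mm);
\draw[line width=1mm,red] (215:5mm) arc (215:325:5mm);
\end{tcolorbox}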
\begin{marker} Create a class or configure an object so that it handles a special case for you \end{marker} \section{Don't Return Null}\index{Error-handling!Don't Return Null} When we return null, we are essentially creating work for ourselves and foisting problems upon our callers. All it takes is one missing null check to send an application spinning out of control. \begin{marker} If you are tempted to return null from a method, consider throwing an exception or returning a SPECIAL CASE object, such as \inlinecode[green]{Optional} instead. \end{marker}
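
When no sensible special-case object exists, \inlinecode[green]{Optional} makes the possible absence part of the method's return type, so the caller handles the ``missing'' case in the normal flow instead of through a null check. A minimal, runnable sketch (the repository class and its data are invented for illustration):

\begin{tcolorbox}[breakable, colback=green!10!white, colframe=green!85!black, sidebyside, righthand width = 3cm, tikz lower]
\begin{lstlisting}[language = java, basicstyle=\small]
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class CustomerRepository {
    private final Map<String, String> emailsById = new HashMap<>();

    // Absence is part of the return type, so no caller
    // can forget to deal with it.
    public Optional<String> findEmail(String customerId) {
        return Optional.ofNullable(emailsById.get(customerId));
    }

    public static void main(String[] args) {
        CustomerRepository repo = new CustomerRepository();
        // The missing case is handled explicitly, without null checks.
        String email = repo.findEmail("42").orElse("[email protected]");
        System.out.println(email);
    }
}
\end{lstlisting}
\tcblower
\path[fill = yellow, draw = yellow!75!red] (0, 0) circle (1cm);
\fill[red] (45:5mm) circle (1mm);
\fill[red] (135:5mm) circle (1mm);
\draw[line width=1mm,red] (215:5mm) arc (215:325:5mm);
\end{tcolorbox}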
{ "alphanum_fraction": 0.7600493218, "avg_line_length": 45.5617977528, "ext": "tex", "hexsha": "ebf2afbfad587947dedefda1af616399b2b8ae87", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "dd3b407e033f2504acb245bb3ce8465464c57620", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "QubitPi/jersey-fundamentals", "max_forks_repo_path": "docs/assets/pdf/review/parts/1/error-handling.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dd3b407e033f2504acb245bb3ce8465464c57620", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "QubitPi/jersey-fundamentals", "max_issues_repo_path": "docs/assets/pdf/review/parts/1/error-handling.tex", "max_line_length": 569, "max_stars_count": null, "max_stars_repo_head_hexsha": "dd3b407e033f2504acb245bb3ce8465464c57620", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "QubitPi/jersey-fundamentals", "max_stars_repo_path": "docs/assets/pdf/review/parts/1/error-handling.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1116, "size": 4055 }
\chapter{Rings}
{ "alphanum_fraction": 0.6666666667, "avg_line_length": 4.5, "ext": "tex", "hexsha": "e0d4b206f581ea470916462a5d95a9264ff81889", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/geometry/rings/00-00-Chapter_name.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/geometry/rings/00-00-Chapter_name.tex", "max_line_length": 15, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/geometry/rings/00-00-Chapter_name.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 18 }
\begin{frame}{\ft{Our NA3 Business Strategy}} \section{Business Strategy} \vspace{2pt} {\begin{minipage}[c]{1.01\textwidth} {\color[rgb]{.0,0.1,0.1}{\Large\fontfamily{bch}\fontseries{b}\selectfont The NA3 Protocols encapsulate new technical models for Qt development and address recognized gaps in the Qt ecosystem, particularly the lack of a standard Qt Cloud Services. The strategy for promoting NA3 is therefore oriented to establishing NA3 within the Qt community, using the NA3's Qt implementation as a foundation to document NA3's essential hypergraph and type-theoretic concepts. On this basis NA3 can then be promoted in other application-development contexts.}} %\vspace{1em} %Combining Natural Language Processing and Conversation Analysis: \\ %for Next Generation Language and Conversation Tools \end{minipage}} %\vspace{1em} \vspace{15pt} \definecolor{blueback}{RGB}{0,100,100} \definecolor{bluefront}{RGB}{0,100,50} {\Large\fontfamily{uhv}\selectfont \begin{minipage}{\textwidth} \fcolorbox{darkRed}{blueback}{ \begin{minipage}{.98\textwidth} \vspace*{1pt} \hspace{-3pt}% \fcolorbox{darkRed}{white}% {\begin{minipage}{.992\textwidth}% \vspace{.8em} {\centerline{\LARGE\color{bluefront} \textbf{Within the Qt Market}}} \vspace{1em} {\setlength{\leftmargini}{30pt} {\Large\begin{itemize} \sqitem {\lsep} Promote NCN as a standard solution for Qt/Cloud Integration. \vspace{.2em} \sqitem {\lsep} Promote A3R tools for building custom scripting languages for Qt. \vspace{.2em} \sqitem {\lsep} \parbox[t]{18.8cm}{Promote the A3R protocol as a standard model for inter-application networking, describing applications, and serializing application-specific data structures.} \vspace{.2em} \sqitem {\lsep} \parbox[t]{20.5cm}{On the basis of these enhancements to the Qt ecosystem, LTS hopes to join the \textbf{Qt partners} program, which would expose NA3's unique features to a worldwide developer community.} \end{itemize}}}\vspace{3pt} \end{minipage}}\\\vspace{10pt}\\ % %\color{white}{\hrule} \vspace*{-41pt} \hspace{-7pt}\fcolorbox{darkRed}{white}{\begin{minipage}{.992\textwidth}% \vspace{.8em} {\centerline{\LARGE\color{bluefront} \textbf{Outside of Qt (see slide 9)}}\vspace{.8em}} {\setlength{\leftmargini}{30pt} {\Large%\begin{minipage}{.96\textwidth} \begin{itemize} \sqitem {\lsep} Generalize the NA3 C++ reflection model and hypergraph libraries to standard (non-Qt) C++. \vspace{.5em} \sqitem {\lsep} Implement the A3R Protocols for standard C++ and for other languages (C\#, Java, etc.). \vspace{.5em} \sqitem {\lsep} \parbox[t]{20cm}{Implement language-agnostic hypergraph serialization to allow A3R networking between applications written for different operating systems and/or programming languages.} \vspace{.8em} \end{itemize}%\end{minipage} } } \end{minipage}}\vspace{3em}\end{minipage}} \end{minipage}} \end{frame}
{ "alphanum_fraction": 0.7506061656, "avg_line_length": 33.183908046, "ext": "tex", "hexsha": "edfee1862cd83e31a45324ece5aefdf60f6d81dd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "ScignScape-RZ/ntxh", "max_forks_repo_path": "NA3/presentation (copy)/slide4c.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "ScignScape-RZ/ntxh", "max_issues_repo_path": "NA3/presentation (copy)/slide4c.tex", "max_line_length": 107, "max_stars_count": null, "max_stars_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "ScignScape-RZ/ntxh", "max_stars_repo_path": "NA3/presentation (copy)/slide4c.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 905, "size": 2887 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ignorenonframetext, ]{beamer} \usepackage{pgfpages} \setbeamertemplate{caption}[numbered] \setbeamertemplate{caption label separator}{: } \setbeamercolor{caption name}{fg=normal text.fg} \beamertemplatenavigationsymbolsempty % Prevent slide breaks in the middle of a paragraph \widowpenalties 1 10000 \raggedbottom \setbeamertemplate{part page}{ \centering \begin{beamercolorbox}[sep=16pt,center]{part title} \usebeamerfont{part title}\insertpart\par \end{beamercolorbox} } \setbeamertemplate{section page}{ \centering \begin{beamercolorbox}[sep=12pt,center]{part title} \usebeamerfont{section title}\insertsection\par \end{beamercolorbox} } \setbeamertemplate{subsection page}{ \centering \begin{beamercolorbox}[sep=8pt,center]{part title} \usebeamerfont{subsection title}\insertsubsection\par \end{beamercolorbox} } \AtBeginPart{ \frame{\partpage} } \AtBeginSection{ \ifbibliography \else \frame{\sectionpage} \fi } \AtBeginSubsection{ \frame{\subsectionpage} } \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={R\_PPT}, pdfauthor={Fan}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \newif\ifbibliography \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering \title{R\_PPT} \author{Fan} \date{2020年4月20日} \begin{document} \frame{\titlepage} \hypertarget{in-the-morning}{% \section{In the morning}\label{in-the-morning}} \begin{frame}{Getting up} \protect\hypertarget{getting-up}{} \begin{itemize} \tightlist \item Turn off alarm \item Get out of bed \end{itemize} \end{frame} \begin{frame}{Breakfast} \protect\hypertarget{breakfast}{} \begin{itemize} \tightlist 
\item Eat eggs \item Drink coffee \end{itemize} \end{frame} \hypertarget{in-the-evening}{% \section{In the evening}\label{in-the-evening}} \begin{frame}{Dinner} \protect\hypertarget{dinner}{} \begin{itemize} \tightlist \item Eat spaghetti \item Drink wine \end{itemize} \end{frame} \begin{frame} \begin{figure} \centering \includegraphics{r_ppt_files/figure-beamer/cars-1.pdf} \caption{A scatterplot.} \end{figure} \end{frame} \begin{frame}{Going to sleep} \protect\hypertarget{going-to-sleep}{} \begin{itemize} \tightlist \item Get in bed \item Count sheep \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.7605435801, "avg_line_length": 25.4047619048, "ext": "tex", "hexsha": "3d494c91886f8b7b08ae7b119dbc043f66ac46d0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "eb356d243f2c0b6548326d5cd1baffed96dfde63", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "FanJingithub/MyCode_Project", "max_forks_repo_path": "temp/r_ppt.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eb356d243f2c0b6548326d5cd1baffed96dfde63", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "FanJingithub/MyCode_Project", "max_issues_repo_path": "temp/r_ppt.tex", "max_line_length": 83, "max_stars_count": null, "max_stars_repo_head_hexsha": "eb356d243f2c0b6548326d5cd1baffed96dfde63", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "FanJingithub/MyCode_Project", "max_stars_repo_path": "temp/r_ppt.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1417, "size": 4268 }
\definecolor {processblue}{cmyk}{0.96,0,0,0}
\clearpage
\section{Reaching Definitions Example}
(Prepared by Namrata Priyadarshini, Shivam Bansal)

We have been looking at different types of data flow analyses and trying to tie them into a single framework. One of the data flow analyses that we looked at was reaching definitions. Recall that a reaching definition is defined as follows: a definition D of the form x = y + z reaches a point P in the program if there exists a path from the point immediately following D to P along which D is not killed, in other words x is not overwritten. So even if there is just one such path from the end of D to P on which x is not overwritten, D is considered to reach P. Let us look at the following example:

\begin{figure}[h!]
\begin {center}
\begin{minipage}{.5\textwidth}
\centering
\caption{Control Flow Graph}
\begin {tikzpicture}[-latex ,auto ,node distance =3.5cm and 5cm ,on grid ,
semithick ,
state/.style ={ rectangle ,top color =white , bottom color = processblue!20 ,
draw,processblue , text=blue , scale = 0.7 ,minimum width =4 cm, minimum height = 4 cm}]
\node[state] (A){} node [label = {[label distance = 0.3cm]90:},rectangle split,rectangle split parts=1]{%
d1 : b=3
};
\node[state] (B) [below = of A]{} node [label = {[label distance = 0.3cm]90:}, rectangle split,rectangle split parts=1] [below = of A] {%
d2 : c = 3
};
\node[state] (D) [below =of B]{} node [label = {[label distance = 0.65cm]90:}, rectangle split,rectangle split parts=1] [below = of B] {%
d3 : c = 4
%
};
\path[->] (A) edge node [below=0.3cm] {} (B);
\path[->] (B) edge node [above=0.3cm] {} (D);
\path[->] (B) edge [out=300,in=72,looseness=3] node[align = right][right] {} (B);
% \path[->] (B) edge [loop right] node {} (B) ;
\draw[->] (D) --++(0,-2.5cm) node [above left = 0.05cm] {} ;
\draw[<-] (A) --++(0,2.5cm) node [above left = 0.05cm] {} ;
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\caption{Reaching definitions}
\begin {tikzpicture}[-latex ,auto ,node distance =3.5cm and 5cm ,on grid ,
semithick ,
state/.style ={ rectangle ,top color =white , bottom color = processblue!20 ,
draw,processblue , text=blue , scale = 0.7 ,minimum width =4 cm, minimum height = 4 cm}]
\node[state] (A){} node [label = {[label distance = 0.3cm]90:},rectangle split,rectangle split parts=3]{%
$\Phi$
\nodepart{second} d1 : b=3
\nodepart{third} $\{d1\}$
};
\node[state] (B) [below = of A]{} node [label = {[label distance = 0.3cm]90:}, rectangle split,rectangle split parts=3] [below = of A] {%
$\{d1,d2\}$
\nodepart{two} d2 : c = 3
\nodepart{three} $\{d1,d2\}$
};
\node[state] (D) [below =of B]{} node [label = {[label distance = 0.65cm]90:}, rectangle split,rectangle split parts=3] [below = of B] {%
$\{d1,d2\}$
\nodepart{two} d3 : c = 4
\nodepart{three} $\{d1,d3\}$
%
};
\path[->] (A) edge node [below=0.3cm] {} (B);
\path[->] (B) edge node [above=0.3cm] {} (D);
\path[->] (B) edge [out=300,in=72,looseness=3] node[align = right][right] {} (B);
% \path[->] (B) edge [loop right] node {} (B) ;
\draw[->] (D) --++(0,-2.5cm) node [above left = 0.05cm] {} ;
\draw[<-] (A) --++(0,2.5cm) node [above left = 0.05cm] {} ;
\end{tikzpicture}
\end{minipage}
\end{center}
\end{figure}

It has three definitions d1, d2 and d3, and the reaching definitions are given in the figure. At the beginning we initialize the boundary condition to the empty set, i.e. we assume that no definitions reach the entry of the program.
Just after d1 the set is only \{d1\}, and just before d2 it is actually \{d1,d2\}: d1 reaches along the straight-line path, and d2 also reaches along the path that takes the cycle. At the end of d2 the set is again \{d1,d2\}, because there are multiple paths that allow d2 to reach this particular point. Before d3 it is \{d1,d2\}, because both d1 and d2 can reach here. At the very end it is \{d1,d3\}; d2 cannot reach here because both d2 and d3 assign to c, so d3 overwrites c, kills d2, and d2 no longer reaches. So there are two important observations:
\begin{itemize}
\item \{d1,d2\} are present even before d2 because of the loop
\item d2 is not present at the exit of the program because d3 has killed d2
\end{itemize}

\begin{table}
\centering
\caption{Reaching Definitions DFA}
\begin{tabular}{ c|c}
Domain & Sets of Definitions \\
\hline
Direction & Forward \\
\hline
Transfer Function & \begin{tabular}[x]{@{}c@{}}$Out[B]=(In[B]-Kill[B]) \cup Gen[B]$ \\Gen: Locally exposed definitions of B \\ Kill: Definitions overwritten by B \end{tabular} \\
\hline
Meet Operator & Set Union $\cup$ \\
\hline
Boundary Condition & $Out[Entry]=\Phi$ \\
\end{tabular}
\end{table}

If we look at the data flow analysis for reaching definitions, the domain is sets of definitions, for example $\{d1, d2, d3\}$. The direction is forward. The transfer function is $Out[B]=(In[B]-Kill[B]) \cup Gen[B]$, where Kill is the set of definitions overwritten by B (statements that overwrite an existing definition) and Gen is the set of definitions generated by B itself. For a basic block, Gen contains the locally exposed definitions: definitions made in the block that are not killed later in the same block. The meet operator is set union, because we consider a definition to reach if there exists any path along which it reaches. The boundary condition is $Out[Entry]=\Phi$.

\section{Must Reach Definitions}
Now we change this analysis slightly, just to show how a small difference can give a completely different analysis. We define an analysis called must reach definitions as follows: a definition D of the form x = y + z must reach a program point P if and only if D appears at least once along all paths leading to P, and x is not redefined (in other words, D is not killed) along any path after the last appearance of D and before P. So on all possible paths D reaches P, and on none of those paths is there another statement that kills D. In other words, the last definition of x on every path reaching P is D. That is what must reach definitions says.
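Written out as equations over the control-flow graph, where $pred(B)$ denotes the set of predecessors of block $B$, the two analyses differ only in how $In[B]$ is formed from the $Out$ sets of the predecessors; the transfer function is the same in both cases, exactly as listed in the DFA tables:
\[ In[B] = \bigcup_{P \in pred(B)} Out[P] \qquad \mbox{(reaching definitions: there exists a path)} \]
\[ In[B] = \bigcap_{P \in pred(B)} Out[P] \qquad \mbox{(must reach definitions: along all paths)} \]
\[ Out[B] = (In[B] - Kill[B]) \cup Gen[B] \]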
\begin{table}
\centering
\caption{Must Reach Definitions DFA}
\begin{tabular}{ c|c}
Domain & Sets of Definitions \\
\hline
Direction & Forward \\
\hline
Transfer Function & \begin{tabular}[x]{@{}c@{}}$Out[B]=(In[B]-Kill[B]) \cup Gen[B]$ \\Gen: Locally exposed definitions of B \\ Kill: Definitions overwritten by B \end{tabular} \\
\hline
Meet Operator & Set Intersection $\cap$ \\
\hline
Boundary Condition & $Out[Entry]=\Phi$ \\
\end{tabular}
\end{table}

The data flow analysis for must reach definitions is identical to reaching definitions except that the meet operator changes: instead of set union it becomes set intersection, which captures the fact that we want D to reach along all paths and not just along some path. The transfer function remains the same, because once again we are interested in definitions that have not been killed. Everything else also remains the same: the boundary condition, the direction and the domain are unchanged. Only the meet operator changes, and we get a completely different analysis. That is the power of this common framework: you can change a single parameter, and you do not have to rewrite the algorithm or change anything else; you simply reuse the existing infrastructure.

\section{Must Reach Definitions Example}

\begin{figure}[h!]
\begin {center}
\begin{minipage}{.5\textwidth}
\centering
\caption{Reaching definitions}
\begin {tikzpicture}[-latex ,auto ,node distance =3.5cm and 5cm ,on grid ,
semithick ,
state/.style ={ rectangle ,top color =white , bottom color = processblue!20 ,
draw,processblue , text=blue , scale = 0.7 ,minimum width =4 cm, minimum height = 4 cm}]
\node[state] (A){} node [label = {[label distance = 0.3cm]90:},rectangle split,rectangle split parts=3]{%
$\Phi$
\nodepart{second} d1 : b=3
\nodepart{third} $\{d1\}$
};
\node[state] (B) [below = of A]{} node [label = {[label distance = 0.3cm]90:}, rectangle split,rectangle split parts=3] [below = of A] {%
$\{d1,d2\}$
\nodepart{two} d2 : c = 3
\nodepart{three} $\{d1,d2\}$
};
\node[state] (D) [below =of B]{} node [label = {[label distance = 0.65cm]90:}, rectangle split,rectangle split parts=3] [below = of B] {%
$\{d1,d2\}$
\nodepart{two} d3 : c = 4
\nodepart{three} $\{d1,d3\}$
%
};
\path[->] (A) edge node [below=0.3cm] {} (B);
\path[->] (B) edge node [above=0.3cm] {} (D);
\path[->] (B) edge [out=300,in=72,looseness=3] node[align = right][right] {} (B);
% \path[->] (B) edge [loop right] node {} (B) ;
\draw[->] (D) --++(0,-2.5cm) node [above left = 0.05cm] {} ;
\draw[<-] (A) --++(0,2.5cm) node [above left = 0.05cm] {} ;
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\caption{Must reach definitions}
\begin {tikzpicture}[-latex ,auto ,node distance =3.5cm and 5cm ,on grid ,
semithick ,
state/.style ={ rectangle ,top color =white , bottom color = processblue!20 ,
draw,processblue , text=blue , scale = 0.7 ,minimum width =4 cm, minimum height = 4 cm}]
\node[state] (A){} node [label = {[label distance = 0.3cm]90:},rectangle split,rectangle split parts=3]{%
$\Phi$
\nodepart{second} d1 : b=3
\nodepart{third} $\{d1\}$
};
\node[state] (B) [below = of A]{} node [label = {[label distance = 0.3cm]90:}, rectangle split,rectangle split parts=3] [below = of A] {%
$\{d1\}$
\nodepart{two} d2 : c = 3
\nodepart{three} $\{d1,d2\}$
};
\node[state] (D) [below =of B]{} node [label = {[label distance = 0.65cm]90:}, rectangle split,rectangle split parts=3] [below = of B] {%
$\{d1,d2\}$
\nodepart{two} d3 : c = 4
\nodepart{three}
$\{d1,d3\}$
%
};
\path[->] (A) edge node [below=0.3cm] {} (B);
\path[->] (B) edge node [above=0.3cm] {} (D);
\path[->] (B) edge [out=300,in=72,looseness=3] node[align = right][right] {} (B);
% \path[->] (B) edge [loop right] node {} (B) ;
\draw[->] (D) --++(0,-2.5cm) node [above left = 0.05cm] {} ;
\draw[<-] (A) --++(0,2.5cm) node [above left = 0.05cm] {} ;
\end{tikzpicture}
\end{minipage}
\end{center}
\end{figure}

To see the difference between reaching definitions and must reach definitions, let us take the same example with three definitions d1, d2 and d3, where d1 assigns to b and d2 and d3 assign to c. The reaching definitions are the same as before. Reaching definitions and must reach definitions agree at all points except just before d2. There the reaching definitions are $\{d1, d2\}$, but the must reach definitions are only $\{d1\}$, because there exists a path on which d2 does not reach this point, namely the straight-line path that does not take the loop. At all other points the two analyses give the same answer. We can check this, for example, just before and just after d3. Just before d3, d2 reaches on all possible paths without getting killed: it reaches on the straight-line path without the loop, and it still reaches if we take any number of iterations of the loop. So d2 must reach that program point. Just after d3, d2 is killed, so d1 and d3 are the only definitions that must reach the point just after d3, and the reaching definitions give the same answer in this case.
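To make the ``change one parameter'' point concrete, here is a small, self-contained OCaml sketch of the usual round-robin iterative solver for this family of forward analyses. It is purely illustrative: the block representation and all names are made up for this note, and it is not code from any particular compiler. The two entry points at the bottom share everything except the meet operator; note that for the must analysis the interior points are started from the full set of definitions, so that the intersection is not collapsed by blocks whose Out set has not been computed yet.
\begin{verbatim}
(* Sets of definitions, named "d1", "d2", ... *)
module DefSet = Set.Make (String)

type block = {
  name  : string;
  preds : string list;   (* names of predecessor blocks *)
  gen   : DefSet.t;      (* locally exposed definitions of the block *)
  kill  : DefSet.t;      (* definitions overwritten by the block *)
}

(* Transfer function: Out[B] = (In[B] - Kill[B]) U Gen[B]. *)
let transfer b in_b = DefSet.union (DefSet.diff in_b b.kill) b.gen

(* Round-robin iteration to a fixed point.  [meet] combines the Out sets
   of the predecessors; a block without predecessors is the entry, with
   boundary condition Out[Entry] = {}. *)
let solve ~meet ~init blocks =
  let out = Hashtbl.create 16 in
  List.iter (fun b -> Hashtbl.replace out b.name init) blocks;
  let changed = ref true in
  while !changed do
    changed := false;
    List.iter (fun b ->
      let in_b = match List.map (Hashtbl.find out) b.preds with
        | [] -> DefSet.empty
        | p :: ps -> List.fold_left meet p ps in
      let out_b = transfer b in_b in
      if not (DefSet.equal out_b (Hashtbl.find out b.name)) then begin
        Hashtbl.replace out b.name out_b;
        changed := true
      end) blocks
  done;
  out

(* The two analyses differ only in the meet operator (and in the value
   the interior points start from). *)
let reaching blocks =
  solve ~meet:DefSet.union ~init:DefSet.empty blocks
let must_reach ~all_defs blocks =
  solve ~meet:DefSet.inter ~init:all_defs blocks

(* The example CFG above: the block of d2 has a self-loop; d2 and d3
   kill each other because both assign to c. *)
let defs l = DefSet.of_list l
let cfg = [
  { name = "B1"; preds = [];           gen = defs ["d1"]; kill = DefSet.empty };
  { name = "B2"; preds = ["B1"; "B2"]; gen = defs ["d2"]; kill = defs ["d3"] };
  { name = "B3"; preds = ["B2"];       gen = defs ["d3"]; kill = defs ["d2"] };
]
\end{verbatim}
On this CFG both solvers compute the same Out sets; the difference shows up in the In set of the block containing d2, which is the meet of its predecessors' Out sets: union gives $\{d1,d2\}$, intersection gives $\{d1\}$, exactly as in the two figures above.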
{ "alphanum_fraction": 0.687400319, "avg_line_length": 54.397260274, "ext": "tex", "hexsha": "dc5ccc3e0d8d341898ade1559adbfa8fa68283f1", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-04-12T19:11:33.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-16T08:32:53.000Z", "max_forks_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "arpit-saxena/compiler-notes", "max_forks_repo_path": "module93.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "arpit-saxena/compiler-notes", "max_issues_repo_path": "module93.tex", "max_line_length": 1075, "max_stars_count": null, "max_stars_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "arpit-saxena/compiler-notes", "max_stars_repo_path": "module93.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3725, "size": 11913 }
\section{Implementation Strategy}
\markboth{Implementation Strategy}{Implementation Strategy}
\thispagestyle{myheadings}
\label{isw}

\subsection{Summary}
\label{isw-is}

The general strategy is to develop different modules, which are interrelated by a service provider -- service consumer concept, and which offer programming interfaces to interested programmers who want to embed the modules into other, possibly more complex, application contexts. The following modules are to be realized:

\begin{enumerate}
\item {\em Secure storage and cryptography}: A cryptographic system for RSA, DSA and DES algorithms as well as hash functions, connected to a Personal Security Environment (PSE). It provides functions which perform cryptographic algorithms and secure storage of keys and other personal local information. The smartcard technology is not visible at the interface. The smartcard service realized so far is the so-called ``software smartcard''.

\item {\em Authentication framework certificate handling}: Richer functionality for cryptographic operations and the secure storage of keys according to X.509, equipped with the more complex structures needed to describe these principles of security operations. There are two submodules: one supports the {\em local} (i.e. smartcard oriented) handling of signatures and certificates; the other supports {\em distributed} (i.e. directory based) certificate and black list information.

\item {\em ``Privacy Enhanced Mail''} (``PEM'') functions which {\em protect messages} according to the Internet specifications RFC 1421 - 1423 \cite{rfc1}. The protected messages can be securely exchanged within an unprotected network.

\item {\em Key management functions}: There are functions to generate encryption keys and signatures, and other functions to support the secure interchange of security information such as certificates, pairs of root-CA keys, etc. These functions are of interest to certification authorities, and to users for maintaining their PSEs.

\item {\em Auxiliary functions} of general applicability to support all modules. These functions include octetstring handling, file handling, printable representation of SecuDE objects, support of ASN.1 encoding/decoding, etc.
\end{enumerate}

\subsection{The Modular Structure}
\label{isw-sm}

The specifications describe basic functions in terms of C-language procedure calls. They constitute a basic set of software modules which are intended to support {\em security applications}, in that they provide {\em security application programming interfaces}. The applications {\em Secured X.500} and {\em PEM} (Privacy Enhanced Mail, RFC 1421-1424) are based on these functions. They comprise the application programming interfaces {\em Secure-IF}, {\em AF-IF}, and {\em PEM-IF} as shown in Fig. 17 below. This interface structure was chosen for the following reasons: \\ [1ex]
Security applications should be independent of the underlying technology, i.e. it should not make any difference for the security functionality of an application whether cryptographic algorithms are realized in software, in hardware chips or in external devices like smartcards or smartcard readers, nor should it make a difference how and where security relevant information such as keys is stored. They need an interface which largely hides the implications of a particular technology and which is oriented towards user-required security characteristics, in order to support migration to advanced technologies. For this purpose we defined the interface {\em Secure-IF}.
Secure-IF provides {\em Cryptographic Functions} and a {\em Personal Security Environment} (PSE) in a technology independent form. \\ [1ex] The {\em Personal Security Environment} can be realized by various methods. The highest level of security against unallowed manipulation of security relevant information requires hardware support. By current standards, {\em smartcards} are appropriate means to store security relevant personal data because they combine the two aspects \bi \m security against eavesdropping through hardware properties and \m mobility and portability of personal information. \ei However, smartcard technology is currently developing fast, and standardisation in this area is not stable yet. For this reason, we provide a software substitute for a smartcard environment. We assume that all security relevant functions are performed by software (e.g. in work stations, PC's or main frames) and that all security relevant information is stored on disks or other background memory of the systems. Protection mechanisms as they are offered by smartcards are to be modelled by means of file encryption and electronic signatures. Therefore we speak of the so called {\em Software PSE} (SW-PSE) to express the analogy of security functionality of the smartcard and its software substitute. \begin{figure} \input{genstruct11} {\footnotesize Fig.\arabic{Abb}: Software Modules Interface Structure} \label{ifstruct} \end{figure} Security of the applications X.500 and PEM is based on certification procedures and formats as defined in {\em Volume 2: Specification of Security Interfaces} of SecuDe in accordance to {\em CCITT X.500ff}. An essential part of it is the {\em Authentication Framework} X.509. For this functionality we defined the interface {\em AF-IF} (AF stands for Authentication Framework). One AF module provides the local handling of certificates and a certification path oriented signature verification procedure. Another AF module provides access to directories in order to support distributed security information. Directory access will particularly be used to retrieve or store certificates and black lists, both by certification authorities and ordinary users. It can be expected, however, that the functionality defined by {\em AF-IF} is not limited to the purpose of Electronic Mail, but that it is useful for a wide range of applications. On the other hand, one can imagine other applications, e.g. using ECMA certification methods, which use {\em Secure-IF} directly. \\[1em] The interface {\em PEM-IF} was defined for the purpose of the application {\em Privacy Enhanced Mail}. It comprises functions which are necessary in the context of RFC 1421 - 1423. \\[1em] Naturally, a security system based on asymmetric cryptography depends on a well organized {\em key management} (``KM'') infrastructure including certification authorities, users and directories. For this purpose, the KM module is defined, comprising functions to generate keys and certificates, and other functions to support the secure exchange of security information such as keys and certificates. \\[1em] Additionally, there is a collection of useful auxiliary functions mostly destined for the handling of octetstrings, object identifiers, algorithm identifiers, and PEM-specific transformations. Some of them are of internal use only, and the application programmer will not need to handle them, but they are mentioned here for completeness. \subsection{Secure Storage and Cryptography} \label{isw-sec} This module consists of two submodules. 
One submodule performs cryptographic algorithms. The other submodule performs secure storage of data on a local smartcard. The cryptographic submodule uses the service of the secure storage submodule, in that cryptographic keys on the smartcard can be used.

The cryptographic submodule contains the following sets of functions. There is a set of functions which performs arithmetic operations on very large integers; for efficiency reasons these functions are available in C and in assembler for a number of CPU platforms. There are asymmetric RSA and DSA functions based on these arithmetic functions. There are symmetric DES encryption and decryption functions. There are also functions to perform different hash algorithms, which are used in the asymmetric signature and verification functions. There are functions to generate keys and keep them in a local reference table or store them in a PSE. There is also a function to generate a DES key and encrypt it with an asymmetric public key, and the inverse thereof. Finally, the generation of random numbers is also supported by this submodule.

The other submodule provides functions with which sensitive data (such as keys or certificates) can be stored on or retrieved from a PSE. This submodule comprises functions that simply open, close, create, read or write a named PSE object. The handling of PIN protection for the PSE and for objects on the PSE is performed by this submodule.

\subsection{Authentication Framework: Certificate Handling}
\label{isw-af}

This module supports the handling of X.509 data formats and procedures. It consists of two submodules. One of the submodules deals with the local handling of keys and certificates, in that it accesses the PSE. The other submodule deals with remote security information, in that it accesses an X.500 directory.

The local-AF submodule invokes the functions for secure storage and cryptography but embeds them into a richer structure according to X.509. For example, this module defines and utilizes the certificate structure in order to verify a signature. It also defines and utilizes the X.509 format of a certificate or a Key Information when accessing PSE objects. This submodule ``knows'' the structure of the PSE objects. It is intentional that the design of this AF programming interface is hierarchically ``above'' the SEC interface; this gives one complete programming interface to programmers who want to implement an X.509 oriented security application.

The remote-AF submodule deals with the same X.509 formats. However, it supports the application programmer in implementing programs with which users can retrieve information from, or store information into, a distributed directory database. This is required by certification authorities in order to maintain their certificates and black lists of certificates in directory attributes. It is also required by users who maintain their certificates in directory attributes. Besides this security information, the submodule also provides access to directory name attributes in order to support users and CAs in their secure certificate and message handling.

\subsection{Key Management}
\label{isw-km}

This module supports users and certification authorities in maintaining consistent security information within the communication environment. The communication community needs support for keeping the information on PSEs and in directory entries current and correct, and for exchanging security information.
The principles of key operations are described in previous chapters of this volume. The following functions are provided:
\begin{itemize}
\item create a PSE with the objects defined in the authentication framework module and with initial contents,
\item check a PSE for validity and consistency,
\item display PSE objects,
\item create a certificate or prototype certificate,
\item send a certificate or prototype certificate,
\item extract one's own certificate from a message and store it on the PSE,
\item extract a partner's certificate from a message and store it in a local certificate list,
\item edit PSE certificate lists,
\item check certificate format and content from a CA's point of view (including uniqueness of the public key),
\item sign a certificate,
\item check certificate format and content from the owner's point of view,
\item send a certification path,
\item extract a certification path from a message and store it in the PSE,
\item send a pair of root-CA keys,
\item extract a pair of root-CA keys from a message and store it in the PSE,
\item maintain the directory entry ``Old Certificates'',
\item change signature keys (for users, CAs, root-CAs),
\item change encryption keys,
\item inform others about a change of keys (for users, CAs, root-CAs),
\item maintain black lists of certificates.
\end{itemize}
{ "alphanum_fraction": 0.8112860013, "avg_line_length": 45.7624521073, "ext": "tex", "hexsha": "df39792ce1d8ba747bb040cfadcfc2af319ae093", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-04-21T07:52:15.000Z", "max_forks_repo_forks_event_min_datetime": "2018-03-09T16:50:31.000Z", "max_forks_repo_head_hexsha": "9eeb0dae04da62c858d018b5f5b2e0a96bdd162d", "max_forks_repo_licenses": [ "BSD-4-Clause-UC" ], "max_forks_repo_name": "scorpiochn/Applied-Cryptography", "max_forks_repo_path": "secude-4.1.all/secude/doc/vol1-imp.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9eeb0dae04da62c858d018b5f5b2e0a96bdd162d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-4-Clause-UC" ], "max_issues_repo_name": "scorpiochn/Applied-Cryptography", "max_issues_repo_path": "secude-4.1.all/secude/doc/vol1-imp.tex", "max_line_length": 99, "max_stars_count": 4, "max_stars_repo_head_hexsha": "9eeb0dae04da62c858d018b5f5b2e0a96bdd162d", "max_stars_repo_licenses": [ "BSD-4-Clause-UC" ], "max_stars_repo_name": "rekav0k/Applied-Cryptography", "max_stars_repo_path": "secude-4.1.all/secude/doc/vol1-imp.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-17T04:59:54.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-18T23:33:21.000Z", "num_tokens": 2453, "size": 11944 }
\documentclass{article} \usepackage{hevea} % ww: this gibberish is ignored by hevea but makes the PDF look better \begin{latexonly} \oddsidemargin 4.5pc \evensidemargin 4.5pc \advance\oddsidemargin by -1.2in \advance\evensidemargin by -1.2in \marginparwidth 0pt \marginparsep 11pt \topmargin 4.5pc \advance\topmargin by -1in \headheight 0pt \headsep 0pt \advance\topmargin by -37pt \headheight 12pt \headsep 25pt \textheight 666pt \textwidth 44pc \end{latexonly} % cilversion.tex is generated automatically to define \cilversion \include{cil.version} \def\secref#1{Section~\ref{sec-#1}} \def\chref#1{Chapter~\ref{ch-#1}} \def\apiref#1#2#3{\ahref{api/#1.html\##2#3}{#1.#3}} \def\moduleref#1{\ahref{api/#1.html}{#1}} % Use this to refer to a Cil type/val \def\ciltyperef#1{\apiref{Cil}{TYPE}{#1}} \def\cilvalref#1{\apiref{Cil}{VAL}{#1}} \def\cilvisit#1{\apiref{Cil.cilVisitor}{#1}} \def\cilprinter#1{\apiref{Cil.cilPrinter}{#1}} % Use this to refer to a type/val in the Pretty module \def\ptyperef#1{\apiref{Pretty}{TYPE}{#1}} \def\pvalref#1{\apiref{Pretty}{VAL}{#1}} % Use this to refer to a type/val in the Errormsg module \def\etyperef#1{\apiref{Errormsg}{TYPE}{#1}} \def\evalref#1{\apiref{Errormsg}{VAL}{#1}} \def\formatcilvalref#1{\apiref{Formatcil}{VAL}{#1}} \def\cfgref#1{\apiref{Cfg}{VAL}{#1}} %---------------------------------------------------------------------- % MACROS \newcommand{\hsp}{\hspace{0.5in}} \def\t#1{{\tt #1}} \newcommand\codecolor{\ifhevea\blue\else\fi} \renewcommand\c[1]{{\codecolor #1}} % Use for code fragments %%% Define an environment for code %% Unfortunately since hevea is not quite TeX you have to use this as follows %\begin{code} % ... %\end{verbatim}\end{code} \def\code{\begingroup\codecolor\begin{verbatim}} \def\endcode{\endgroup} %use this for links to external pages. It will open pages in the %top frame. \newcommand\ahreftop[2]{{\ahref{javascript:loadTop('#1')}{#2}}} %---------------------------------------------------------------------- % Make sure that most documents show up in the main frame, % and define javascript:loadTop for those links that should fill the window. \makeatletter \let\oldmeta=\@meta \def\@meta{% \oldmeta \begin{rawhtml} <base target="main"> <script language="JavaScript"> <!-- Begin function loadTop(url) { parent.location.href= url; } // --> </script> \end{rawhtml}} \makeatother \begin{document} \begin{latexonly} \title{CIL: Infrastructure for C Program Analysis and Transformation} \end{latexonly} \maketitle \section{Introduction} New: CIL now has a Source Forge page: \ahreftop{http://sourceforge.net/projects/cil} {http://sourceforge.net/projects/cil}. CIL ({\bf C} {\bf I}ntermediate {\bf L}anguage) is a high-level representation along with a set of tools that permit easy analysis and source-to-source transformation of C programs. CIL is both lower-level than abstract-syntax trees, by clarifying ambiguous constructs and removing redundant ones, and also higher-level than typical intermediate languages designed for compilation, by maintaining types and a close relationship with the source program. The main advantage of CIL is that it compiles all valid C programs into a few core constructs with a very clean semantics. Also CIL has a syntax-directed type system that makes it easy to analyze and manipulate C programs. Furthermore, the CIL front-end is able to process not only ANSI-C programs but also those using Microsoft C or GNU C extensions. 
If you do not use CIL and want instead to use just a C parser and analyze programs expressed as abstract-syntax trees then your analysis will have to handle a lot of ugly corners of the language (let alone the fact that parsing C itself is not a trivial task). See \secref{simplec} for some examples of such extreme programs that CIL simplifies for you. In essence, CIL is a highly-structured, ``clean'' subset of C. CIL features a reduced number of syntactic and conceptual forms. For example, all looping constructs are reduced to a single form, all function bodies are given explicit {\tt return} statements, syntactic sugar like {\tt "->"} is eliminated and function arguments with array types become pointers. (For an extensive list of how CIL simplifies C programs, see \secref{cabs2cil}.) This reduces the number of cases that must be considered when manipulating a C program. CIL also separates type declarations from code and flattens scopes within function bodies. This structures the program in a manner more amenable to rapid analysis and transformation. CIL computes the types of all program expressions, and makes all type promotions and casts explicit. CIL supports all GCC and MSVC extensions except for nested functions and complex numbers. Finally, CIL organizes C's imperative features into expressions, instructions and statements based on the presence and absence of side-effects and control-flow. Every statement can be annotated with successor and predecessor information. Thus CIL provides an integrated program representation that can be used with routines that require an AST (e.g. type-based analyses and pretty-printers), as well as with routines that require a CFG (e.g., dataflow analyses). CIL also supports even lower-level representations (e.g., three-address code), see \secref{Extension}. CIL comes accompanied by a number of Perl scripts that perform generally useful operations on code: \begin{itemize} \item A \ahrefloc{sec-driver}{driver} which behaves as either the \t{gcc} or Microsoft VC compiler and can invoke the preprocessor followed by the CIL application. The advantage of this script is that you can easily use CIL and the analyses written for CIL with existing make files. \item A \ahrefloc {sec-merger}{whole-program merger} that you can use as a replacement for your compiler and it learns all the files you compile when you make a project and merges all of the preprocessed source files into a single one. This makes it easy to do whole-program analysis. \item A \ahrefloc{sec-patcher}{patcher} makes it easy to create modified copies of the system include files. The CIL driver can then be told to use these patched copies instead of the standard ones. \end{itemize} CIL has been tested very extensively. It is able to process the SPECINT95 benchmarks, the Linux kernel, GIMP and other open-source projects. All of these programs are compiled to the simple CIL and then passed to \t{gcc} and they still run! We consider the compilation of Linux a major feat especially since Linux contains many of the ugly GCC extensions (see \secref{ugly-gcc}). This adds to about 1,000,000 lines of code that we tested it on. It is also able to process the few Microsoft NT device drivers that we have had access to. CIL was tested against GCC's c-torture testsuite and (except for the tests involving complex numbers and inner functions, which CIL does not currently implement) CIL passes most of the tests. Specifically CIL fails 23 tests out of the 904 c-torture tests that it should pass. 
GCC itself fails 19 tests. A total of 1400 regression test cases are run automatically on each change to the CIL sources.

CIL is relatively independent of the underlying machine and compiler. When you build it, CIL configures itself according to the underlying compiler. However, CIL has only been tested on Intel x86 using the gcc compiler on Linux and cygwin and using the MS Visual C compiler. (See below for the specific versions of these compilers with which we have used CIL.)

The largest application we have used CIL for is
\ahreftop{../ccured/index.html}{CCured}, a compiler that compiles C code into
type-safe code by analyzing your pointer usage and inserting runtime checks
in the places that cannot be guaranteed statically to be type safe.

You can also use CIL to ``compile'' code that uses GCC extensions (e.g. the
Linux kernel) into standard C code.

CIL also comes accompanied by a growing library of extensions (see
\secref{Extension}). You can use these for your projects or as examples of
using CIL.

\t{PDF} versions of \ahref{CIL.pdf}{this manual} and the
\ahref{CIL-API.pdf}{CIL API} are available. However, we recommend the
\t{HTML} versions because the postprocessed code examples are easier to view.

If you use CIL in your project, we would appreciate it if you let us know. If
you want to cite CIL in your research writings, please refer to the paper
``CIL: Intermediate Language and Tools for Analysis and Transformation of C
Programs'' by George C. Necula, Scott McPeak, S.P. Rahul and Westley Weimer,
in ``Proceedings of Conference on Compiler Construction'', 2002.

\section{Installation}

You need the following tools to build CIL:
\begin{itemize}
\item A Unix-like shell environment (with bash, perl, make, mv, cp, etc.). On
  Windows, you will need cygwin with those packages.
\item An ocaml compiler. You will need OCaml release 3.08 or higher to build
  CIL. CIL has been tested on Linux and on Windows (where it can behave as
  either Microsoft Visual C or gcc). On Windows, you can build CIL both with
  the cygwin version of ocaml (preferred) and with the Win32 version of ocaml.
\item An underlying C compiler, which can be either gcc or Microsoft Visual C.
\end{itemize}

\begin{enumerate}
\item Get the source code.
 \begin{itemize}
 \item {\em Official distribution} (Recommended):
   \begin{enumerate}
   \item Download the CIL \ahref{distrib}{distribution} (latest version is
     \ahrefurl{distrib/cil-\cilversion.tar.gz}). See \secref{changes} for
     recent changes to the CIL distribution.
   \item Unzip and untar the source distribution. This will create a
     directory called \t{cil} whose structure is explained below. \\
     \t{~~~~tar xvfz cil-\cilversion.tar.gz}
   \end{enumerate}
 \item {\em Subversion Repository}: \\
   Alternatively, you can download an up-to-the-minute version of CIL from
   our Subversion repository at:
\begin{verbatim}
svn co svn://hal.cs.berkeley.edu/home/svn/projects/trunk/cil
\end{verbatim}
   However, the Subversion version may be less stable than the released
   version. See the Changes section of doc/cil.tex to see what's changed
   since the last release. There may be changes that aren't yet documented in
   the .tex file or this website. For those who were using the CVS server
   before we switched to Subversion, revision 8603 in Subversion corresponds
   to the last CVS version.
 \end{itemize}
\item Enter the \t{cil} directory and run the \t{configure} script and then
  GNU make to build the distribution. If you are on Windows, at least the
  \t{configure} step must be run from within \t{bash}.
\\ \hsp\verb!cd cil!\\ \hsp\verb!./configure!\\ \hsp\verb!make!\\ \hsp\verb!make quicktest!\\ \item You should now find \t{cilly.asm.exe} in a subdirectory of \t{obj}. The name of the subdirectory is either \t{x86\_WIN32} if you are using \t{cygwin} on Windows or \t{x86\_LINUX} if you are using Linux (although you should be using instead the Perl wrapper \t{bin/cilly}). Note that we do not have an \t{install} make target and you should use Cil from the development directory. \item If you decide to use CIL, {\bf please} \ahref{mailto:[email protected]}{send us a note}. This will help recharge our batteries after a few years of development. And of course, do send us your bug reports as well. \end{enumerate} The \t{configure} script tries to find appropriate defaults for your system. You can control its actions by passing the following arguments: \begin{itemize} \item \t{CC=foo} Specifies the path for the \t{gcc} executable. By default whichever version is in the PATH is used. If \t{CC} specifies the Microsoft \t{cl} compiler, then that compiler will be set as the default one. Otherwise, the \t{gcc} compiler will be the default. \end{itemize} CIL requires an underlying C compiler and preprocessor. CIL depends on the underlying compiler and machine for the sizes and alignment of types. The installation procedure for CIL queries the underlying compiler for architecture and compiler dependent configuration parameters, such as the size of a pointer or the particular alignment rules for structure fields. (This means, of course, that you should re-run \t{./configure} when you move CIL to another machine.) We have tested CIL on the following compilers: \begin{itemize} \item On Windows, \t{cl} compiler version 12.00.8168 (MSVC 6), 13.00.9466 (MSVC .Net), and 13.10.3077 (MSVC .Net 2003). Run \t{cl} with no arguments to get the compiler version. \item On Windows, using \t{cygwin} and \t{gcc} version 2.95.3, 3.0, 3.2, 3.3, and 3.4. \item On Linux, using \t{gcc} version 2.95.3, 3.0, 3.2, 3.3, 4.0, and 4.1. \end{itemize} Others have successfully used CIL on x86 processors with Mac OS X, FreeBSD and OpenBSD; on amd64 processors with FreeBSD; on SPARC processors with Solaris; and on PowerPC processors with Mac OS X. If you make any changes to the build system in order to run CIL on your platform, please send us a patch. \subsection{Building CIL on Windows with Microsoft Visual C} Some users might want to build a standalone CIL executable on Windows (an executable that does not require cygwin.dll to run). You will need cygwin for the build process only. Here is how we do it \begin{enumerate} \item Start with a clean CIL directory \item Start a command-line window setup with the environment variables for Microsoft Visual Studio. You can do this by choosing Programs/Microsoft Visual Studio/Tools/Command Prompt. Check that you can run \t{cl}. \item Ensure that \t{ocamlc} refers to a Win32 version of ocaml. Run \t{ocamlc -v} and look at the path to the standard library. If you have several versions of ocaml, you must set the following variables: \begin{verbatim} set OCAMLWIN=C:/Programs/ocaml-win set OCAMLLIB=%OCAMLWIN%/lib set PATH=%OCAMLWIN%/bin;%PATH% set INCLUDE=%INCLUDE%;%OCAMLWIN%/inc set LIB=%LIB%;%OCAMLWIN%/lib;obj/x86_WIN32 \end{verbatim} \item Run \t{bash -c "./configure CC=cl"}. 
\item Run \t{bash -c "make WIN32=1 quickbuild"} \item Run \t{bash -c "make WIN32=1 NATIVECAML=1 cilly} \item Run \t{bash -c "make WIN32=1 bindistrib-nocheck} \end{enumerate} The above steps do not build the CIL library, but just the executable. The last step will create a subdirectory \t{TEMP\_cil-bindistrib} that contains everything that you need to run CIL on another machine. You will have to edit manually some of the files in the \t{bin} directory to replace \t{CILHOME}. The resulting CIL can be run with ActiveState Perl also. \section{Distribution Contents} The file \ahrefurl{distrib/cil-\cilversion.tar.gz} contains the complete source CIL distribution, consisting of the following files: \begin{tabular}{ll} Filename & Description \\ \t{Makefile.in} & \t{configure} source for the Makefile that builds CIL \\ \t{configure} & The configure script \\ \t{configure.in} & The \t{autoconf} source for \t{configure} \\ \t{config.guess}, \t{config.sub}, \t{install-sh} & stuff required by \t{configure} \\ \\ \t{doc/} & HTML documentation of the CIL API \\ \t{obj/} & Directory that will contain the compiled CIL modules and executables\\ \t{bin/cilly.in} & The \t{configure} source for a Perl script that can be invoked with the same arguments as either \t{gcc} or Microsoft Visual C and will convert the program to CIL, perform some simple transformations, emit it and compile it as usual. \\ \t{lib/CompilerStub.pm} & A Perl class that can be used to write code that impersonates a compiler. \t{cilly} uses it. \\ \t{lib/Merger.pm} & A subclass of \t{CompilerStub.pm} that can be used to merge source files into a single source file.\t{cilly} uses it. \\ \t{bin/patcher.in} & A Perl script that applies specified patches to standard include files.\\ \\ \t{src/check.ml,mli} & Checks the well-formedness of a CIL file \\ \t{src/cil.ml,mli} & Definition of CIL abstract syntax and utilities for manipulating it\\ \t{src/clist.ml,mli} & Utilities for efficiently managing lists that need to be concatenated often\\ \t{src/errormsg.ml,mli} & Utilities for error reporting \\ \t{src/ext/heapify.ml} & A CIL transformation that moves array local variables from the stack to the heap \\ \t{src/ext/logcalls.ml,mli} & A CIL transformation that logs every function call \\ \t{src/ext/sfi.ml} & A CIL transformation that can log every memory read and write \\ \t{src/frontc/clexer.mll} & The lexer \\ \t{src/frontc/cparser.mly} & The parser \\ \t{src/frontc/cabs.ml} & The abstract syntax \\ \t{src/frontc/cprint.ml} & The pretty printer for CABS \\ \t{src/frontc/cabs2cil.ml} & The elaborator to CIL \\ \t{src/main.ml} & The \t{cilly} application \\ \t{src/pretty.ml,mli} & Utilities for pretty printing \\ \t{src/rmtmps.ml,mli} & A CIL tranformation that removes unused types, variables and inlined functions \\ \t{src/stats.ml,mli} & Utilities for maintaining timing statistics \\ \t{src/testcil.ml} & A random test of CIL (against the resident C compiler)\\ \t{src/trace.ml,mli} & Utilities useful for printing debugging information\\ \\ \t{ocamlutil/} & Miscellaneous libraries that are not specific to CIL. \\ \t{ocamlutil/Makefile.ocaml} & A file that is included by \t{Makefile} \\ \t{ocamlutil/perfcount.c} & C code that links with src/stats.ml and reads Intel performance counters. \\ \\ \t{obj/@ARCHOS@/feature\_config.ml} & File generated by the Makefile describing which extra ``features'' to compile. 
See \secref{cil} \\ \t{obj/@ARCHOS@/machdep.ml} & File generated by the Makefile containing information about your architecture, such as the size of a pointer \\ \t{src/machdep.c} & C program that generates \t{machdep.ml} files \\ \end{tabular} \section{Compiling C to CIL}\label{sec-cabs2cil} In this section we try to describe a few of the many transformations that are applied to a C program to convert it to CIL. The module that implements this conversion is about 5000 lines of OCaml code. In contrast a simple program transformation that instruments all functions to keep a shadow stack of the true return address (thus preventing stack smashing) is only 70 lines of code. This example shows that the analysis is so much simpler because it has to handle only a few simple C constructs and also because it can leverage on CIL infrastructure such as visitors and pretty-printers. In no particular order these are a few of the most significant ways in which C programs are compiled into CIL: \begin{enumerate} \item CIL will eliminate all declarations for unused entities. This means that just because your hello world program includes \t{stdio.h} it does not mean that your analysis has to handle all the ugly stuff from \t{stdio.h}. \item Type specifiers are interpreted and normalized: \begin{cilcode}[global] int long signed x; signed long extern x; long static int long y; // Some code that uses these declaration, so that CIL does not remove them int main() { return x + y; } \end{cilcode} \item Anonymous structure and union declarations are given a name. \begin{cilcode}[global] struct { int x; } s; \end{cilcode} \item Nested structure tag definitions are pulled apart. This means that all structure tag definitions can be found by a simple scan of the globals. \begin{cilcode}[global] struct foo { struct bar { union baz { int x1; double x2; } u1; int y; } s1; int z; } f; \end{cilcode} \item All structure, union, enumeration definitions and the type definitions from inners scopes are moved to global scope (with appropriate renaming). This facilitates moving around of the references to these entities. \begin{cilcode}[global] int main() { struct foo { int x; } foo; { struct foo { double d; }; return foo.x; } } \end{cilcode} \item Prototypes are added for those functions that are called before being defined. Furthermore, if a prototype exists but does not specify the type of parameters that is fixed. But CIL will not be able to add prototypes for those functions that are neither declared nor defined (but are used!). \begin{cilcode}[global] int f(); // Prototype without arguments int f(double x) { return g(x); } int g(double x) { return x; } \end{cilcode} \item Array lengths are computed based on the initializers or by constant folding. \begin{cilcode}[global] int a1[] = {1,2,3}; int a2[sizeof(int) >= 4 ? 8 : 16]; \end{cilcode} \item Enumeration tags are computed using constant folding: \begin{cilcode}[global] int main() { enum { FIVE = 5, SIX, SEVEN, FOUR = FIVE - 1, EIGHT = sizeof(double) } x = FIVE; return x; } \end{cilcode} \item Initializers are normalized to include specific initialization for the missing elements: \begin{cilcode}[global] int a1[5] = {1,2,3}; struct foo { int x, y; } s1 = { 4 }; \end{cilcode} \item Initializer designators are interpreted and eliminated. Subobjects are properly marked with braces. CIL implements the whole ISO C99 specification for initializer (neither GCC nor MSVC do) and a few GCC extensions. 
\begin{cilcode}[global] struct foo { int x, y; int a[5]; struct inner { int z; } inner; } s = { 0, .inner.z = 3, .a[1 ... 2] = 5, 4, y : 8 }; \end{cilcode} \item String initializers for arrays of characters are processed \begin{cilcode}[global] char foo[] = "foo plus bar"; \end{cilcode} \item String constants are concatenated \begin{cilcode}[global] char *foo = "foo " " plus " " bar "; \end{cilcode} \item Initializers for local variables are turned into assignments. This is in order to separate completely the declarative part of a function body from the statements. This has the unfortunate effect that we have to drop the \t{const} qualifier from local variables ! \begin{cilcode}[local] int x = 5; struct foo { int f1, f2; } a [] = {1, 2, 3, 4, 5 }; \end{cilcode} \item Local variables in inner scopes are pulled to function scope (with appropriate renaming). Local scopes thus disappear. This makes it easy to find and operate on all local variables in a function. \begin{cilcode}[global] int x = 5; int main() { int x = 6; { int x = 7; return x; } return x; } \end{cilcode} \item Global declarations in local scopes are moved to global scope: \begin{cilcode}[global] int x = 5; int main() { int x = 6; { static int x = 7; return x; } return x; } \end{cilcode} \item Return statements are added for functions that are missing them. If the return type is not a base type then a \t{return} without a value is added. The guaranteed presence of return statements makes it easy to implement a transformation that inserts some code to be executed immediately before returning from a function. \begin{cilcode}[global] int foo() { int x = 5; } \end{cilcode} \item One of the most significant transformations is that expressions that contain side-effects are separated into statements. \begin{cilcode}[local] int x, f(int); return (x ++ + f(x)); \end{cilcode} Internally, the \t{x ++} statement is turned into an assignment which the pretty-printer prints like the original. CIL has only three forms of basic statements: assignments, function calls and inline assembly. \item Shortcut evaluation of boolean expressions and the \t{?:} operator are compiled into explicit conditionals: \begin{cilcode}[local] int x; int y = x ? 2 : 4; int z = x || y; // Here we duplicate the return statement if(x && y) { return 0; } else { return 1; } // To avoid excessive duplication, CIL uses goto's for // statement that have more than 5 instructions if(x && y || z) { x ++; y ++; z ++; x ++; y ++; return z; } \end{cilcode} \item GCC's conditional expression with missing operands are also compiled into conditionals: \begin{cilcode}[local] int f();; return f() ? : 4; \end{cilcode} \item All forms of loops (\t{while}, \t{for} and \t{do}) are compiled internally as a single \t{while(1)} looping construct with explicit \t{break} statement for termination. For simple \t{while} loops the pretty printer is able to print back the original: \begin{cilcode}[local] int x, y; for(int i = 0; i<5; i++) { if(i == 5) continue; if(i == 4) break; i += 2; } while(x < 5) { if(x == 3) continue; x ++; } \end{cilcode} \item GCC's block expressions are compiled away. (That's right there is an infinite loop in this code.) 
\begin{cilcode}[local] int x = 5, y = x; int z = ({ x++; L: y -= x; y;}); return ({ goto L; 0; }); \end{cilcode} \item CIL contains support for both MSVC and GCC inline assembly (both in one internal construct) \item CIL compiles away the GCC extension that allows many kinds of constructs to be used as lvalues: \begin{cilcode}[local] int x, y, z; return &(x ? y : z) - & (x ++, x); \end{cilcode} \item All types are computed and explicit casts are inserted for all promotions and conversions that a compiler must insert: \item CIL will turn old-style function definition (without prototype) into new-style definitions. This will make the compiler less forgiving when checking function calls, and will catch for example cases when a function is called with too few arguments. This happens in old-style code for the purpose of implementing variable argument functions. \item Since CIL sees the source after preprocessing the code after CIL does not contain the comments and the preprocessing directives. \item CIL will remove from the source file those type declarations, local variables and inline functions that are not used in the file. This means that your analysis does not have to see all the ugly stuff that comes from the header files: \begin{cilcode}[global] #include <stdio.h> typedef int unused_type; static char unused_static (void) { return 0; } int main() { int unused_local; printf("Hello world\n"); // Only printf will be kept from stdio.h } \end{cilcode} \end{enumerate} \section{How to Use CIL}\label{sec-cil}\cutname{cilly.html} There are two predominant ways to use CIL to write a program analysis or transformation. The first is to phrase your analysis as a module that is called by our existing driver. The second is to use CIL as a stand-alone library. We highly recommend that you use \t{cilly}, our driver. \subsection{Using \t{cilly}, the CIL driver} The most common way to use CIL is to write an Ocaml module containing your analysis and transformation, which you then link into our boilerplate driver application called \t{cilly}. \t{cilly} is a Perl script that processes and mimics \t{GCC} and \t{MSVC} command-line arguments and then calls \t{cilly.byte.exe} or \t{cilly.asm.exe} (CIL's Ocaml executable). An example of such module is \t{logwrites.ml}, a transformation that is distributed with CIL and whose purpose is to instrument code to print the addresses of memory locations being written. (We plan to release a C-language interface to CIL so that you can write your analyses in C instead of Ocaml.) See \secref{Extension} for a survey of other example modules. Assuming that you have written \t{/home/necula/logwrites.ml}, here is how you use it: \begin{enumerate} \item Modify \t{logwrites.ml} so that it includes a CIL ``feature descriptor'' like this: \begin{verbatim} let feature : featureDescr = { fd_name = "logwrites"; fd_enabled = ref false; fd_description = "generation of code to log memory writes"; fd_extraopt = []; fd_doit = (function (f: file) -> let lwVisitor = new logWriteVisitor in visitCilFileSameGlobals lwVisitor f) } \end{verbatim} The \t{fd\_name} field names the feature and its associated command-line arguments. The \t{fd\_enabled} field is a \t{bool ref}. ``\t{fd\_doit}'' will be invoked if \t{!fd\_enabled} is true after argument parsing, so initialize the ref cell to true if you want this feature to be enabled by default. 
When the user passes the \t{-{}-{}dologwrites} command-line option to \t{cilly}, the variable associated with the \t{fd\_enabled} flag is set and the \t{fd\_doit} function is called on the \t{Cil.file} that represents the merger (see \secref{merger}) of all C files listed as arguments. \item Invoke \t{configure} with the arguments \begin{verbatim} ./configure EXTRASRCDIRS=/home/necula EXTRAFEATURES=logwrites \end{verbatim} This step works if each feature is packaged into its own ML file, and the name of the entry point in the file is \t{feature}. An alternative way to specify the new features is to change the build files yourself, as explained below. You'll need to use this method if a single feature is split across multiple files. \begin{enumerate} \item Put \t{logwrites.ml} in the \t{src} or \t{src/ext} directory. This will make sure that \t{make} can find it. If you want to put it in some other directory, modify \t{Makefile.in} and add to \t{SOURCEDIRS} your directory. Alternately, you can create a symlink from \t{src} or \t{src/ext} to your file. \item Modify the \t{Makefile.in} and add your module to the \t{CILLY\_MODULES} or \t{CILLY\_LIBRARY\_MODULES} variables. The order of the modules matters. Add your modules somewhere after \t{cil} and before \t{main}. \item If you have any helper files for your module, add those to the makefile in the same way. e.g.: \begin{verbatim} CILLY_MODULES = $(CILLY_LIBRARY_MODULES) \ myutilities1 myutilities2 logwrites \ main \end{verbatim} % $ <- emacs hack Again, order is important: \t{myutilities2.ml} will be able to refer to Myutilities1 but not Logwrites. If you have any ocamllex or ocamlyacc files, add them to both \t{CILLY\_MODULES} and either \t{MLLS} or \t{MLYS}. \item Modify \t{main.ml} so that your new feature descriptor appears in the global list of CIL features. \begin{verbatim} let features : C.featureDescr list = [ Logcalls.feature; Oneret.feature; Heapify.feature1; Heapify.feature2; makeCFGFeature; Partial.feature; Simplemem.feature; Logwrites.feature; (* add this line to include the logwrites feature! *) ] @ Feature_config.features \end{verbatim} Features are processed in the order they appear on this list. Put your feature last on the list if you plan to run any of CIL's built-in features (such as makeCFGfeature) before your own. \end{enumerate} Standard code in \t{cilly} takes care of adding command-line arguments, printing the description, and calling your function automatically. Note: do not worry about introducing new bugs into CIL by adding a single line to the feature list. \item Now you can invoke the \t{cilly} application on a preprocessed file, or instead use the \t{cilly} driver which provides a convenient compiler-like interface to \t{cilly}. See \secref{driver} for details using \t{cilly}. Remember to enable your analysis by passing the right argument (e.g., \t{-{}-{}dologwrites}). \end{enumerate} \subsection{Using CIL as a library} CIL can also be built as a library that is called from your stand-alone application. Add \t{cil/src}, \t{cil/src/frontc}, \t{cil/obj/x86\_LINUX} (or \t{cil/obj/x86\_WIN32}) to your Ocaml project \t{-I} include paths. Building CIL will also build the library \t{cil/obj/*/cil.cma} (or \t{cil/obj/*/cil.cmxa}). You can then link your application against that library. You can call the \t{Frontc.parse: string -> unit -> Cil.file} function with the name of a file containing the output of the C preprocessor. The \t{Mergecil.merge: Cil.file list -> string -> Cil.file} function merges multiple files. 
You can then invoke your analysis function on the resulting \t{Cil.file} data structure. You might want to call \t{Rmtmps.removeUnusedTemps} first to clean up the prototypes and variables that are not used. Then you can call the function \t{Cil.dumpFile: cilPrinter -> out\_channel -> Cil.file -> unit} to print the file to a given output channel. A good \t{cilPrinter} to use is \t{defaultCilPrinter}. Check out \t{src/main.ml} and \t{bin/cilly} for other good ideas about high-level file processing. Again, we highly recommend that you just use our \t{cilly} driver so that you can avoid spending time re-inventing the wheel to provide drop-in support for standard \t{makefile}s.

Here is a concrete example of compiling and linking your project against CIL. Imagine that your program analysis or transformation is contained in the single file \t{main.ml}.
\begin{verbatim}
$ ocamlopt -c -I $(CIL)/obj/x86_LINUX/ main.ml
$ ocamlopt -ccopt -L$(CIL)/obj/x86_LINUX/ -o main unix.cmxa str.cmxa \
      $(CIL)/obj/x86_LINUX/cil.cmxa main.cmx
\end{verbatim}
% $
The first line compiles your analysis, the second line links it against CIL (as a library) and the Ocaml Unix and Str libraries. For more information about compiling and linking Ocaml programs, see the Ocaml home page at \ahreftop{http://caml.inria.fr/ocaml/}{http://caml.inria.fr/ocaml/}.

In the next section we give an overview of the API that you can use to write your analysis and transformation.

\section{CIL API Documentation}\label{sec-api}

The CIL API is documented in the file \t{src/cil.mli}. We also have \ahref{api/index.html}{online documentation} extracted from \t{cil.mli} and other useful modules. We index below the main types that are used to represent C programs in CIL:
\begin{itemize}
\item \ahref{api/index\_types.html}{An index of all types}
\item \ahref{api/index\_values.html}{An index of all values}
\item \ciltyperef{file} is the representation of a file.
\item \ciltyperef{global} is the representation of a global declaration or definition. Values for \ahref{api/Cil.html\#VALemptyFunction}{operating on globals}.
\item \ciltyperef{typ} is the representation of a type. Values for \ahref{api/Cil.html\#VALvoidType}{operating on types}.
\item \ciltyperef{compinfo} is the representation of a structure or a union type.
\item \ciltyperef{fieldinfo} is the representation of a field in a structure or a union.
\item \ciltyperef{enuminfo} is the representation of an enumeration type.
\item \ciltyperef{varinfo} is the representation of a variable.
\item \ciltyperef{fundec} is the representation of a function.
\item \ciltyperef{lval} is the representation of an lvalue. Values for \ahref{api/Cil.html\#VALmakeVarInfo}{operating on lvalues}.
\item \ciltyperef{exp} is the representation of an expression without side-effects. Values for \ahref{api/Cil.html\#VALzero}{operating on expressions}.
\item \ciltyperef{instr} is the representation of an instruction (with side-effects but without control-flow).
\item \ciltyperef{stmt} is the representation of a control-flow statement. Values for \ahref{api/Cil.html\#VALmkStmt}{operating on statements}.
\item \ciltyperef{attribute} is the representation of attributes. Values for \ahref{api/Cil.html\#TYPEattributeClass}{operating on attributes}.
\end{itemize}

\subsection{Using the visitor}\label{sec-visitor}

One of the most useful tools exported by the CIL API is an implementation of the visitor pattern for CIL programs.
The visiting engine scans depth-first the structure of a CIL program and at each node it queries a user-provided visitor structure to decide which of the following operations to perform:
\begin{itemize}
\item Ignore this node and all its descendants.
\item Descend into all of the children and when done rebuild the node if any of the children have changed.
\item Replace the subtree rooted at the node with another tree.
\item Replace the subtree with another tree, then descend into the children and rebuild the node if necessary and then invoke a user-specified function.
\item In addition to any of the above actions, the visitor can specify that some instructions should be queued to be inserted before the current instruction or statement being visited.
\end{itemize}
By writing visitors you can customize the program traversal and transformation. One major limitation of the visiting engine is that it does not propagate information from one node to another. Each visitor must use its own private data to achieve this effect if necessary.

Each visitor is an object that is an instance of a class of type \cilvisit{}. The most convenient way to obtain such classes is to specialize the \apiref{Cil.nopCilVisitor}{} class (which just traverses the tree doing nothing). Any given specialization typically overrides only a few of the methods. Take a look, for example, at the visitor defined in the module \t{logwrites.ml}. Another, more elaborate example of a visitor is the [copyFunctionVisitor] defined in \t{cil.ml}. Once you have defined a visitor you can invoke it with one of the following functions:
\begin{itemize}
\item \cilvalref{visitCilFile} or \cilvalref{visitCilFileSameGlobals} - visit a file
\item \cilvalref{visitCilGlobal} - visit a global
\item \cilvalref{visitCilFunction} - visit a function definition
\item \cilvalref{visitCilExp} - visit an expression
\item \cilvalref{visitCilLval} - visit an lvalue
\item \cilvalref{visitCilInstr} - visit an instruction
\item \cilvalref{visitCilStmt} - visit a statement
\item \cilvalref{visitCilType} - visit a type. Note that this does not visit the fields of a composite type. Use \cilvalref{visitCilGlobal} to visit the [GCompTag] that defines the fields.
\end{itemize}
Some transformations may want to use visitors to insert additional instructions before statements and instructions. To do so, pass a list of instructions to the \cilvalref{queueInstr} method of the specialized object. The instructions will automatically be inserted before the currently visited instruction or statement in the transformed code. The \cilvalref{unqueueInstr} method should not normally be called by the user.

\subsection{Interpreted Constructors and Deconstructors}

Interpreted constructors and deconstructors are a facility for constructing and deconstructing CIL constructs using a pattern with holes that can be filled with a variety of kinds of elements. The pattern is a string that uses the C syntax to represent C language elements. For example, the following code:
\begin{code}
Formatcil.cType "void * const (*)(int x)"
\end{verbatim}\end{code}
is an alternative way to construct the internal representation of the type of pointer to function with an integer argument and a \t{void * const} as result:
\begin{code}
TPtr(TFun(TVoid [Attr("const", [])],
          [ ("x", TInt(IInt, []), []) ], false, []), [])
\end{verbatim}\end{code}
The advantage of the interpreted constructors is that you can use familiar C syntax to construct CIL abstract-syntax trees. You can construct this way types, lvalues, expressions, instructions and statements.
The pattern string can also contain a number of placeholders that are replaced during construction with CIL items passed as additional argument to the construction function. For example, the \t{\%e:id} placeholder means that the argument labeled ``id'' (expected to be of form \t{Fe exp}) will supply the expression to replace the placeholder. For example, the following code constructs an increment instruction at location \t{loc}: \begin{code} Formatcil.cInstr "%v:x = %v:x + %e:something" loc [ ("something", Fe some_exp); ("x", Fv some_varinfo) ] \end{verbatim}\end{code} An alternative way to construct the same CIL instruction is: \begin{code} Set((Var some_varinfo, NoOffset), BinOp(PlusA, Lval (Var some_varinfo, NoOffset), some_exp, intType), loc) \end{verbatim}\end{code} See \ciltyperef{formatArg} for a definition of the placeholders that are understood. A dual feature is the interpreted deconstructors. This can be used to test whether a CIL construct has a certain form: \begin{code} Formatcil.dType "void * const (*)(int x)" t \end{verbatim}\end{code} will test whether the actual argument \t{t} is indeed a function pointer of the required type. If it is then the result is \t{Some []} otherwise it is \t{None}. Furthermore, for the purpose of the interpreted deconstructors placeholders in patterns match anything of the right type. For example, \begin{code} Formatcil.dType "void * (*)(%F:t)" t \end{verbatim}\end{code} will match any function pointer type, independent of the type and number of the formals. If the match succeeds the result is \t{Some [ FF forms ]} where \t{forms} is a list of names and types of the formals. Note that each member in the resulting list corresponds positionally to a placeholder in the pattern. The interpreted constructors and deconstructors do not support the complete C syntax, but only a substantial fragment chosen to simplify the parsing. The following is the syntax that is supported: \begin{verbatim} Expressions: E ::= %e:ID | %d:ID | %g:ID | n | L | ( E ) | Unop E | E Binop E | sizeof E | sizeof ( T ) | alignof E | alignof ( T ) | & L | ( T ) E Unary operators: Unop ::= + | - | ~ | %u:ID Binary operators: Binop ::= + | - | * | / | << | >> | & | ``|'' | ^ | == | != | < | > | <= | >= | %b:ID Lvalues: L ::= %l:ID | %v:ID Offset | * E | (* E) Offset | E -> ident Offset Offsets: Offset ::= empty | %o:ID | . ident Offset | [ E ] Offset Types: T ::= Type_spec Attrs Decl Type specifiers: Type_spec ::= void | char | unsigned char | short | unsigned short | int | unsigned int | long | unsigned long | %k:ID | float | double | struct %c:ID | union %c:ID Declarators: Decl ::= * Attrs Decl | Direct_decl Direct declarators: Direct_decl ::= empty | ident | ( Attrs Decl ) | Direct_decl [ Exp_opt ] | ( Attrs Decl )( Parameters ) Optional expressions Exp_opt ::= empty | E | %eo:ID Formal parameters Parameters ::= empty | ... 
| %va:ID | %f:ID | T | T , Parameters List of attributes Attrs ::= empty | %A:ID | Attrib Attrs Attributes Attrib ::= const | restrict | volatile | __attribute__ ( ( GAttr ) ) GCC Attributes GAttr ::= ident | ident ( AttrArg_List ) Lists of GCC Attribute arguments: AttrArg_List ::= AttrArg | %P:ID | AttrArg , AttrArg_List GCC Attribute arguments AttrArg ::= %p:ID | ident | ident ( AttrArg_List ) Instructions Instr ::= %i:ID ; | L = E ; | L Binop= E | Callres L ( Args ) Actual arguments Args ::= empty | %E:ID | E | E , Args Call destination Callres ::= empty | L = | %lo:ID Statements Stmt ::= %s:ID | if ( E ) then Stmt ; | if ( E ) then Stmt else Stmt ; | return Exp_opt | break ; | continue ; | { Stmt_list } | while (E ) Stmt | Instr_list Lists of statements Stmt_list ::= empty | %S:ID | Stmt Stmt_list | Type_spec Attrs Decl ; Stmt_list | Type_spec Attrs Decl = E ; Stmt_list | Type_spec Attrs Decl = L (Args) ; Stmt_list List of instructions Instr_list ::= Instr | %I:ID | Instr Instr_list
\end{verbatim}
Notes regarding the syntax:
\begin{itemize}
\item In the grammar description above non-terminals are written with an uppercase initial.
\item All of the patterns consist of the \t{\%} character followed by one or two letters, followed by ``:'' and an identifier. For each such pattern there is a corresponding constructor of the \ciltyperef{formatArg} type, whose name is the letter 'F' followed by the same one or two letters as in the pattern. That constructor is used by the user code to pass a \ciltyperef{formatArg} actual argument to the interpreted constructor and by the interpreted deconstructor to return what was matched for a pattern.
\item If the pattern name is uppercase, it designates a list of the elements designated by the corresponding lowercase pattern. E.g. \%E designates lists of expressions (as in the actual arguments of a call).
\item The two-letter patterns whose second letter is ``o'' designate an optional element. E.g. \%eo designates an optional expression (as in the length of an array).
\item Unlike in calls to \t{printf}, the pattern \%g is used for strings.
\item The usual precedence and associativity rules as in C apply.
\item The pattern string can contain newlines and comments, using both the \t{/* ... */} style as well as the \t{//} one.
\item When matching a ``cast'' pattern of the form \t{( T ) E}, the deconstructor will match even expressions that do not have the actual cast, but in that case the type is matched against the type of the expression. E.g. the pattern \t{"(int)\%e"} will match any expression of type \t{int} whether it has an explicit cast or not.
\item The \%k pattern is used to construct and deconstruct an integer type of any kind.
\item Notice that the syntax of types and declarations is the same (in order to simplify the parser). This means that technically you can write a whole declaration instead of a type in the cast. In this case the name that you declare is ignored.
\item In lists of formal parameters and lists of attributes, an empty list in the pattern matches any formal parameters or attributes.
\item When matching types, uses of named types are unrolled to expose a real type before matching.
\item The order of the attributes is ignored during matching. If the pattern for a list of attributes contains \%A then the resulting \t{formatArg} will be bound to {\bf all} attributes in the list.
For example, the pattern \t{"const \%A"} matches any list of attributes that contains \t{const} and binds the corresponding placeholder to the entire list of attributes, including \t{const}.
\item All instruction-patterns must be terminated by a semicolon.
\item The autoincrement and autodecrement instructions are not supported. Also not supported are complex expressions, the \t{\&\&} and \t{||} shortcut operators, and a number of other more complex instructions or statements. In general, the patterns support only constructs that can be represented directly in CIL.
\item The pattern argument identifiers are not used during deconstruction. Instead, the result contains a sequence of values in the same order as the appearance of pattern arguments in the pattern.
\item You can mix statements with declarations. For each declaration a new temporary will be constructed (using a function you provide). You can then refer to that temporary by name in the rest of the pattern.
\item The \t{\%v:} pattern specifier is optional.
\end{itemize}
The following functions are defined in the \t{Formatcil} module for constructing and deconstructing:
\begin{itemize}
\item \formatcilvalref{cExp} constructs \ciltyperef{exp}.
\item \formatcilvalref{cType} constructs \ciltyperef{typ}.
\item \formatcilvalref{cLval} constructs \ciltyperef{lval}.
\item \formatcilvalref{cInstr} constructs \ciltyperef{instr}.
\item \formatcilvalref{cStmt} and \formatcilvalref{cStmts} construct \ciltyperef{stmt}.
\item \formatcilvalref{dExp} deconstructs \ciltyperef{exp}.
\item \formatcilvalref{dType} deconstructs \ciltyperef{typ}.
\item \formatcilvalref{dLval} deconstructs \ciltyperef{lval}.
\item \formatcilvalref{dInstr} deconstructs \ciltyperef{instr}.
\end{itemize}
Below is an example using interpreted constructors. This example generates the CIL representation of code that scans an array backwards and initializes every even-index element with an expression:
\begin{code}
Formatcil.cStmts loc
  "int idx = sizeof(array) / sizeof(array[0]) - 1;
   while(idx >= 0) {
     // Some statements to be run for all the elements of the array
     %S:init
     if(! (idx & 1))
       array[idx] = %e:init_even;
     /* Do not forget to decrement the index variable */
     idx = idx - 1;
   }"
  (fun n t -> makeTempVar myfunc ~name:n t)
  [ ("array", Fv myarray);
    ("init", FS [stmt1; stmt2; stmt3]);
    ("init_even", Fe init_expr_for_even_elements) ]
\end{verbatim}\end{code}
To write the same statement directly in CIL would take much more effort. Note that the pattern is parsed only once and the result (a function that takes the arguments and constructs the statement) is memoized.

\subsubsection{Performance considerations for interpreted constructors}

Parsing the patterns is done with a LALR parser and it takes some time. To improve performance the constructors and deconstructors memoize the parsed patterns and will only compile a pattern once. Also, all construction and deconstruction functions can be applied partially to the pattern string to produce a function that can later be used directly to construct or deconstruct. Constructing this way appears to be about two times slower than using the CIL constructors directly (without memoization the process would be one order of magnitude slower). However, the convenience of interpreted constructors might make them a viable choice in many situations when performance is not paramount (e.g. prototyping).
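For example, taking advantage of this partial application (a sketch; the pattern, the placeholder names and the wrapper function are only illustrative):
\begin{verbatim}
(* Sketch: the pattern is parsed (and memoized) once; the resulting
   function can then be reused with different arguments. *)
let mk_incr = Formatcil.cInstr "%v:x = %v:x + %e:amount"

let incr_by (vi: Cil.varinfo) (amount: Cil.exp) (loc: Cil.location) : Cil.instr =
  mk_incr loc [ ("x", Cil.Fv vi); ("amount", Cil.Fe amount) ]
\end{verbatim}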
\subsection{Printing and Debugging support}

The modules \moduleref{Pretty} and \moduleref{Errormsg} contain utilities for pretty printing and for reporting errors, respectively, and provide a convenient \t{printf}-like interface. Additionally, CIL defines for each major type a pretty-printing function that you can use in conjunction with the \moduleref{Pretty} interface. The following are some of the pretty-printing functions:
\begin{itemize}
\item \cilvalref{d\_exp} - print an expression
\item \cilvalref{d\_type} - print a type
\item \cilvalref{d\_lval} - print an lvalue
\item \cilvalref{d\_global} - print a global
\item \cilvalref{d\_stmt} - print a statement
\item \cilvalref{d\_instr} - print an instruction
\item \cilvalref{d\_init} - print an initializer
\item \cilvalref{d\_attr} - print an attribute
\item \cilvalref{d\_attrlist} - print a set of attributes
\item \cilvalref{d\_loc} - print a location
\item \cilvalref{d\_ikind} - print an integer kind
\item \cilvalref{d\_fkind} - print a floating point kind
\item \cilvalref{d\_const} - print a constant
\item \cilvalref{d\_storage} - print a storage specifier
\end{itemize}
You can even customize the pretty-printer by creating instances of \cilprinter{}. Typically such an instance extends \cilvalref{defaultCilPrinter}. Once you have a customized pretty-printer you can use the following printing functions:
\begin{itemize}
\item \cilvalref{printExp} - print an expression
\item \cilvalref{printType} - print a type
\item \cilvalref{printLval} - print an lvalue
\item \cilvalref{printGlobal} - print a global
\item \cilvalref{printStmt} - print a statement
\item \cilvalref{printInstr} - print an instruction
\item \cilvalref{printInit} - print an initializer
\item \cilvalref{printAttr} - print an attribute
\item \cilvalref{printAttrs} - print a set of attributes
\end{itemize}
CIL has certain internal consistency invariants. For example, all references to a global variable must point to the same \t{varinfo} structure. This ensures that one can rename the variable by changing the name in the \t{varinfo}. These constraints are mentioned in the API documentation. There is also a consistency checker in file \t{src/check.ml}. If you suspect that your transformation is breaking these constraints then you can pass the \t{-{}-check} option to \t{cilly} and this will ensure that the consistency checker is run after each transformation.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Attributes}\label{sec-attrib}\cutname{attributes.html}

In CIL you can attach attributes to types and to names (variables, functions and fields). Attributes are represented using the type \ciltyperef{attribute}. An attribute consists of a name and a number of arguments (represented using the type \ciltyperef{attrparam}). Almost any expression can be used as an attribute argument. Attributes are stored in lists sorted by the name of the attribute. To maintain list ordering, use the function \cilvalref{typeAttrs} to retrieve the attributes of a type and the functions \cilvalref{addAttribute} and \cilvalref{addAttributes} to add attributes. Alternatively you can use \cilvalref{typeAddAttributes} to add an attribute to a type (and return the new type).

GCC already has extensive support for attributes, and CIL extends this support to user-defined attributes.
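As a small illustration of these functions, the following sketch attaches a made-up \t{mine} attribute to a type and then tests for its presence (the attribute name and its argument are invented for this example):
\begin{verbatim}
open Cil

(* Sketch: "mine" is not a predefined attribute; it is just an example. *)
let with_mine (t: typ) : typ =
  typeAddAttributes [Attr ("mine", [AInt 3])] t

let has_mine (t: typ) : bool =
  hasAttribute "mine" (typeAttrs t)
\end{verbatim}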
A GCC attribute has the syntax:
\begin{verbatim}
gccattribute ::= __attribute__((attribute))    (Note the double parentheses)
\end{verbatim}
Since GCC and MSVC both support various flavors of each attribute (with or without leading or trailing \_) we first strip ALL leading and trailing \_ from the attribute name (but not from the identifiers in [ACons] parameters in \ciltyperef{attrparam}). When we print attributes, for GCC we add two leading and two trailing \_; for MSVC we add just two leading \_.

There is support in CIL so that you can control the printing of attributes (see \cilvalref{setCustomPrintAttribute} and \cilvalref{setCustomPrintAttributeScope}). This custom-printing support is now used to print the "const" qualifier as "\t{const}" and not as "\t{\_\_attribute\_\_((const))}".

The attributes are specified in declarations. This is unfortunate since the C syntax for declarations is already quite complicated and after writing the parser and elaborator for declarations I am convinced that few C programmers understand it completely. Anyway, this seems to be the easiest way to support attributes.

Name attributes must be specified at the very end of the declaration, just before the \t{=} for the initializer, or before the \t{,} that separates a declaration in a group of declarations, or just before the \t{;} that terminates the declaration. A name attribute for a function being defined can be specified just before the brace that starts the function body. For example (in the following examples \t{A1},...,\t{An} are type attributes and \t{N} is a name attribute; each of these uses the \t{\_\_attribute\_\_} syntax):
\begin{code}
int x N;
int x N, * y N = 0, z[] N;
extern void exit() N;
int fact(int x) N { ... }
\end{verbatim}\end{code}
Type attributes can be specified along with the type using the following rules:
\begin{enumerate}
\item The type attributes for a base type (int, float, named type, reference to struct or union or enum) must be specified immediately following the type (actually it is OK to mix attributes with the specification of the type, in between unsigned and int for example). For example:
\begin{code}
int A1 x N; /* A1 applies to the type int. An example is an attribute
               "even" restricting the type int to even values. */
struct foo A1 A2 x; // Both A1 and A2 apply to the struct foo type
\end{verbatim}\end{code}
\item The type attributes for a pointer type must be specified immediately after the * symbol.
\begin{code}
/* A pointer (A1) to an int (A2) */
int A2 * A1 x;

/* A pointer (A1) to a pointer (A2) to a float (A3) */
float A3 * A2 * A1 x;
\end{verbatim}\end{code}
Note: The attributes for base types and for pointer types are a strict extension of the ANSI C type qualifiers (const, volatile and restrict). In fact CIL treats these qualifiers as attributes.
\item The attributes for a function type or for an array type can be specified using parenthesized declarators.
For example: \begin{code} /* A function (A1) from int (A2) to float (A3) */ float A3 (A1 f)(int A2); /* A pointer (A1) to a function (A2) that returns an int (A3) */ int A3 (A2 * A1 pfun)(void); /* An array (A1) of int (A2) */ int A2 (A1 x0)[] /* Array (A1) of pointers (A2) to functions (A3) that take an int (A4) and * return a pointer (A5) to int (A6) */ int A6 * A5 (A3 * A2 (A1 x1)[5])(int A4); /* A function (A4) that takes a float (A5) and returns a pointer (A6) to an * int (A7) */ extern int A7 * A6 (A4 x2)(float A5 x); /* A function (A1) that takes a int (A2) and that returns a pointer (A3) to * a function (A4) that takes a float (A5) and returns a pointer (A6) to an * int (A7) */ int A7 * A6 (A4 * A3 (A1 x3)(int A2 x))(float A5) { return & x2; } \end{verbatim}\end{code} \end{enumerate} Note: ANSI C does not allow the specification of type qualifiers for function and array types, although it allows for the parenthesized declarator. With just a bit of thought (looking at the first few examples above) I hope that the placement of attributes for function and array types will seem intuitive. This extension is not without problems however. If you want to refer just to a type (in a cast for example) then you leave the name out. But this leads to strange conflicts due to the parentheses that we introduce to scope the attributes. Take for example the type of x0 from above. It should be written as: \begin{code} int A2 (A1 )[] \end{verbatim}\end{code} But this will lead most C parsers into deep confusion because the parentheses around A1 will be confused for parentheses of a function designator. To push this problem around (I don't know a solution) whenever we are about to print a parenthesized declarator with no name but with attributes, we comment out the attributes so you can see them (for whatever is worth) without confusing the compiler. For example, here is how we would print the above type: \begin{code} int A2 /*(A1 )*/[] \end{verbatim}\end{code} \paragraph{Handling of predefined GCC attributes} GCC already supports attributes in a lot of places in declarations. The only place where we support attributes and GCC does not is right before the \{ that starts a function body. GCC classifies its attributes in attributes for functions, for variables and for types, although the latter category is only usable in definition of struct or union types and is not nearly as powerful as the CIL type attributes. We have made an effort to reclassify GCC attributes as name and type attributes (they only apply for function types). Here is what we came up with: \begin{itemize} \item GCC name attributes: section, constructor, destructor, unused, weak, no\_instrument\_function, noreturn, alias, no\_check\_memory\_usage, dllinport, dllexport, exception, model Note: the "noreturn" attribute would be more appropriately qualified as a function type attribute. But we classify it as a name attribute to make it easier to support a similarly named MSVC attribute. \item GCC function type attributes: fconst (printed as "const"), format, regparm, stdcall, cdecl, longcall I was not able to completely decipher the position in which these attributes must go. So, the CIL elaborator knows these names and applies the following rules: \begin{itemize} \item All of the name attributes that appear in the specifier part (i.e. at the beginning) of a declaration are associated with all declared names. \item All of the name attributes that appear at the end of a declarator are associated with the particular name being declared. 
\item More complicated is the handling of the function type attributes, since there can be more than one function in a single declaration (a function returning a pointer to a function). Lacking any real understanding of how GCC handles this, I attach the function type attribute to the "nearest" function. This means that if a pointer to a function is "nearby" the attribute will be correctly associated with the function. In truth I pray that nobody uses declarations like that of x3 above.
\end{itemize}
\end{itemize}

\paragraph{Handling of predefined MSVC attributes}
MSVC has two kinds of attributes: declaration modifiers, printed before the storage specifier using the notation "\t{\_\_declspec(...)}", and a few function type attributes, printed almost like our CIL function type attributes.

The following are the name attributes that are printed using \t{\_\_declspec} right before the storage designator of the declaration: thread, naked, dllimport, dllexport, noreturn.

The following are the function type attributes supported by MSVC: fastcall, cdecl, stdcall.

It is not worth going into the obscure details of where MSVC accepts these type attributes. The parser thinks it knows these details and it pulls these attributes from wherever they might be placed. The important thing is that MSVC will accept them if we print them according to the rules of the CIL attributes!

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The CIL Driver}\label{sec-driver}

We have packaged CIL as an application \t{cilly} that contains certain example modules, such as \t{logwrites.ml} (a module that instruments code to print the addresses of memory locations being written). Normally, you write another module like that, add command-line options and an invocation of your module in \t{src/main.ml}. Once you compile CIL you will obtain the file \t{obj/cilly.asm.exe}.

We wrote a driver for this executable that makes it easy to invoke your analysis on existing C code with very little manual intervention. This driver is \t{bin/cilly} and is quite powerful. Note that the \t{cilly} script is configured during installation with the path where CIL resides. This means that you can move it to any place you want.

A simple use of the driver is:
\begin{verbatim}
bin/cilly --save-temps -D HAPPY_MOOD -I myincludes hello.c -o hello
\end{verbatim}
\t{-{}-save-temps} tells CIL to save the resulting output files in the current directory. Otherwise, they'll be put in \t{/tmp} and deleted automatically. Note that this is the only CIL-specific flag in the list -- the other flags use \t{gcc}'s syntax. This performs the following actions:
\begin{itemize}
\item preprocessing using the \t{-D} and \t{-I} arguments with the resulting file left in \t{hello.i},
\item the invocation of the \t{cilly.asm} application, which parses \t{hello.i}, converts it to CIL and then pretty-prints it to \t{hello.cil.c},
\item another round of preprocessing with the result placed in \t{hello.cil.i},
\item the true compilation with the result in \t{hello.cil.o},
\item a linking phase with the result in \t{hello}.
\end{itemize}
Note that \t{cilly} behaves like the \t{gcc} compiler.
This makes it easy to use it with existing \t{Makefiles}: \begin{verbatim} make CC="bin/cilly" LD="bin/cilly" \end{verbatim} \t{cilly} can also behave as the Microsoft Visual C compiler, if the first argument is \t{-{}-mode=MSVC}: \begin{verbatim} bin/cilly --mode=MSVC /D HAPPY_MOOD /I myincludes hello.c /Fe hello.exe \end{verbatim} (This in turn will pass a \t{-{}-MSVC} flag to the underlying \t{cilly.asm} process which will make it understand the Microsoft Visual C extensions) \t{cilly} can also behave as the archiver \t{ar}, if it is passed an argument \t{-{}-mode=AR}. Note that only the \t{cr} mode is supported (create a new archive and replace all files in there). Therefore the previous version of the archive is lost. Furthermore, \t{cilly} allows you to pass some arguments on to the underlying \t{cilly.asm} process. As a general rule all arguments that start with \t{-{}-} and that \t{cilly} itself does not process, are passed on. For example, \begin{verbatim} bin/cilly --dologwrites -D HAPPY_MOOD -I myincludes hello.c -o hello.exe \end{verbatim} will produce a file \t{hello.cil.c} that prints all the memory addresses written by the application. The most powerful feature of \t{cilly} is that it can collect all the sources in your project, merge them into one file and then apply CIL. This makes it a breeze to do whole-program analysis and transformation. All you have to do is to pass the \t{-{}-merge} flag to \t{cilly}: \begin{verbatim} make CC="bin/cilly --save-temps --dologwrites --merge" \end{verbatim} You can even leave some files untouched: \begin{verbatim} make CC="bin/cilly --save-temps --dologwrites --merge --leavealone=foo --leavealone=bar" \end{verbatim} This will merge all the files except those with the basename \t{foo} and \t{bar}. Those files will be compiled as usual and then linked in at the very end. The sequence of actions performed by \t{cilly} depends on whether merging is turned on or not: \begin{itemize} \item If merging is off \begin{enumerate} \item For every file \t{file.c} to compile \begin{enumerate} \item Preprocess the file with the given arguments to produce \t{file.i} \item Invoke \t{cilly.asm} to produce a \t{file.cil.c} \item Preprocess to \t{file.cil.i} \item Invoke the underlying compiler to produce \t{file.cil.o} \end{enumerate} \item Link the resulting objects \end{enumerate} \item If merging is on \begin{enumerate} \item For every file \t{file.c} to compile \begin{enumerate} \item Preprocess the file with the given arguments to produce \t{file.i} \item Save the preprocessed source as \t{file.o} \end{enumerate} \item When linking executable \t{hello.exe}, look at every object file that must be linked and see if it actually contains preprocessed source. Pass all those files to a special merging application (described in \secref{merger}) to produce \t{hello.exe\_comb.c} \item Invoke \t{cilly.asm} to produce a \t{hello.exe\_comb.cil.c} \item Preprocess to \t{hello.exe\_comb.cil.i} \item Invoke the underlying compiler to produce \t{hello.exe\_comb.cil.o} \item Invoke the actual linker to produce \t{hello.exe} \end{enumerate} \end{itemize} Note that files that you specify with \t{-{}-leavealone} are not merged and never presented to CIL. They are compiled as usual and then are linked in at the end. 
And a final feature of \t{cilly} is that it can substitute copies of the system's include files:
\begin{verbatim}
make CC="bin/cilly --includedir=myinclude"
\end{verbatim}
This will force the preprocessor to use the file \t{myinclude/xxx/stdio.h} (if it exists) whenever it encounters \t{\#include <stdio.h>}. The \t{xxx} is a string that identifies the compiler version you are using. These modified include files should be produced with the patcher script (see \secref{patcher}).

\subsection{\t{cilly} Options}

Among the options for \t{cilly} you can put anything that can normally go in the command line of the compiler that \t{cilly} is impersonating. \t{cilly} will do its best to pass those options along to the appropriate subprocess. In addition, the following options are supported (a complete and up-to-date list can always be obtained by running \t{cilly -{}-help}):
\begin{itemize}
\item \t{-{}-mode=mode} This must be the first argument if present. It makes \t{cilly} behave as the given compiler. The following modes are recognized:
\begin{itemize}
\item GNUCC - the GNU C Compiler. This is the default.
\item MSVC - the Microsoft Visual C compiler. Of course, you should pass only valid MSVC options in this case.
\item AR - the archiver \t{ar}. Only the mode \t{cr} is supported and the original version of the archive is lost.
\end{itemize}
\item \t{-{}-help} Prints a list of the options supported.
\item \t{-{}-verbose} Prints lots of messages about what is going on.
\item \t{-{}-stages} Less than \t{-{}-verbose} but lets you see what \t{cilly} is doing.
\item \t{-{}-merge} This tells \t{cilly} to first attempt to collect into one source file all of the sources that make your application, and then to apply \t{cilly.asm} on the resulting source. The sequence of actions in this case is described above and the merger itself is described in \secref{merger}.
\item \t{-{}-leavealone=xxx}. Do not merge and do not present to CIL the files whose basename is "xxx". These files are compiled as usual and linked in at the end.
\item \t{-{}-includedir=xxx}. Override the include files with those in the given directory. The given directory is the same name that was given as an argument to the patcher (see \secref{patcher}). In particular this means that that directory contains subdirectories named based on the current compiler version. The patcher creates those directories.
\item \t{-{}-usecabs}. Do not convert to CIL, but instead just parse the source and print its AST out. This should look like the preprocessed file. This is useful when you suspect that the conversion to CIL phase changes the meaning of the program.
\item \t{-{}-save-temps=xxx}. Temporary files are preserved in the xxx directory. For example, the output of CIL will be put in a file named \t{*.cil.c}.
\item \t{-{}-save-temps}. Temporary files are preserved in the current directory.
\end{itemize}

\subsection{\t{cilly.asm} Options} \label{sec-cilly-asm-options}

All of the options that start with \t{-{}-} and are not understood by \t{cilly} are passed on to \t{cilly.asm}. \t{cilly} also passes along to \t{cilly.asm} flags such as \t{-{}-MSVC} that both need to know about. The following options are supported. Many of these flags also have corresponding ``\t{-{}-no}*'' versions if you need to go back to the default, as in ``\t{-{}-nowarnall}''.

\hspace*{2cm} {\bf General Options:}
\begin{itemize}
\item \t{-{}-version} output version information and exit
\item \t{-{}-verbose} Print lots of random stuff.
This is passed on from cilly \item \t{-{}-warnall} Show all warnings. \item \t{-{}-debug=xxx} turns on debugging flag xxx \item \t{-{}-nodebug=xxx} turns off debugging flag xxx \item \t{-{}-flush} Flush the output streams often (aids debugging). \item \t{-{}-check} Run a consistency check over the CIL after every operation. \item \t{-{}-strictcheck} Same as \t{-{}-check}, but it treats consistency problems as errors instead of warnings. \item \t{-{}-nocheck} turns off consistency checking of CIL. \item \t{-{}-noPrintLn} Don't output \#line directives in the output. \item \t{-{}-commPrintLn} Print \#line directives in the output, but put them in comments. \item \t{-{}-commPrintLnSparse} Like \t{-{}-commPrintLn} but print only new line numbers. \item \t{-{}-log=xxx} Set the name of the log file. By default stderr is used \item \t{-{}-MSVC} Enable MSVC compatibility. Default is GNU. %\item \t{-{}-testcil} test CIL using the given compiler \item \t{-{}-ignore-merge-conflicts} ignore merging conflicts. %\item \t{-{}-sliceGlobal} output is the slice of #pragma cilnoremove(sym) symbols %\item \t{-{}-tr <sys>}: subsystem to show debug printfs for %\item \t{-{}-pdepth=n}: set max print depth (default: 5) \item \t{-{}-extrafiles=filename}: the name of a file that contains a list of additional files to process, separated by whitespace. \item \t{-{}-stats} Print statistics about the running time of the parser, conversion to CIL, etc. Also prints memory-usage statistics. You can time parts of your own code as well. Calling (\t{Stats.time ``label'' func arg}) will evaluate \t{(func arg)} and remember how long this takes. If you call \t{Stats.time} repeatedly with the same label, CIL will report the aggregate time. If available, CIL uses the x86 performance counters for these stats. This is very precise, but results in ``wall-clock time.'' To report only user-mode time, find the call to \t{Stats.reset} in \t{main.ml}, and change it to \t{Stats.reset Stats.SoftwareTimer}. {\bf Lowering Options} \item \t{-{}-noLowerConstants} do not lower constant expressions. \item \t{-{}-noInsertImplicitCasts} do not insert implicit casts. \item \t{-{}-forceRLArgEval} Forces right to left evaluation of function arguments. %\item \t{-{}-nocil=n} Do not compile to CIL the global with the given index. \item \t{-{}-disallowDuplication} Prevent small chunks of code from being duplicated. \item \t{-{}-keepunused} Do not remove the unused variables and types. \item \t{-{}-rmUnusedInlines} Delete any unused inline functions. This is the default in MSVC mode. {\bf Output Options:} \item \t{-{}-printCilAsIs} Do not try to simplify the CIL when printing. Without this flag, CIL will attempt to produce prettier output by e.g. changing \t{while(1)} into more meaningful loops. \item \t{-{}-noWrap} do not wrap long lines when printing \item \t{-{}-out=xxx} the name of the output CIL file. \t{cilly} sets this for you. \item \t{-{}-mergedout=xxx} specify the name of the merged file \item \t{-{}-cabsonly=xxx} CABS output file name %% \item \t{-{}-printComments : print cabs tree structure in comments in cabs output %% \item \t{-{}-patchFile <fname>: name the file containing patching transformations %% \item \t{-{}-printPatched : print patched CABS files after patching, to *.patched %% \item \t{-{}-printProtos : print prototypes to safec.proto.h after parsing {\bf Selected features.} See \secref{Extension} for more information. \item \t{-{}-dologcalls}. Insert code in the processed source to print the name of functions as are called. 
Implemented in \t{src/ext/logcalls.ml}. \item \t{-{}-dologwrites}. Insert code in the processed source to print the address of all memory writes. Implemented in \t{src/ext/logwrites.ml}. \item \t{-{}-dooneRet}. Make each function have at most one 'return'. Implemented in \t{src/ext/oneret.ml}. \item \t{-{}-dostackGuard}. Instrument function calls and returns to maintain a separate stack for return addresses. Implemeted in \t{src/ext/heapify.ml}. \item \t{-{}-domakeCFG}. Make the program look more like a CFG. Implemented in \t{src/cil.ml}. \item \t{-{}-dopartial}. Do interprocedural partial evaluation and constant folding. Implemented in \t{src/ext/partial.ml}. \item \t{-{}-dosimpleMem}. Simplify all memory expressions. Implemented in \t{src/ext/simplemem.ml}. For an up-to-date list of available options, run \t{cilly.asm -{}-help}. \end{itemize} \subsection{Internal Options} \label{sec-cilly-internal-options} All of the \t{cilly.asm} options described above can be set programmatically -- see \t{src/ciloptions.ml} or the individual extensions to see how. Some options should be set before parsing to be effective. Additionally, a few CIL options have no command-line flag and can only be set programmatically. These options may be useful for certain analyses: \begin{itemize} \item \t{Cabs2cil.doCollapseCallCast}:This is false by default. Set to true to replicate the behavior of CIL 1.3.5 and earlier. When false, all casts in the program are made explicit using the \t{CastE} expression. Accordingly, the destination of a Call instruction will always have the same type as the function's return type. If true, the destination type of a Call may differ from the return type, so there is an implicit cast. This is useful for analyses involving \t{malloc}. Without this option, CIL converts ``\t{T* x = malloc(n);}'' into ``\t{void* tmp = malloc(n); T* x = (T*)tmp;}''. If you don't need all casts to be made explicit, you can set \t{Cabs2cil.doCollapseCallCast} to true so that CIL won't insert a temporary and you can more easily determine the allocation type from calls to \t{malloc}. \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Library of CIL Modules} \label{sec-Extension}\cutname{ext.html} We are developing a suite of modules that use CIL for program analyses and transformations that we have found useful. You can use these modules directly on your code, or generally as inspiration for writing similar modules. A particularly big and complex application written on top of CIL is CCured (\ahrefurl{../ccured/index.html}). \subsection{Control-Flow Graphs} \label{sec-cfg} The \ciltyperef{stmt} datatype includes fields for intraprocedural control-flow information: the predecessor and successor statements of the current statement. This information is not computed by default. If you want to use the control-flow graph, or any of the extensions in this section that require it, you have to explicitly ask CIL to compute the CFG using one of these two methods: \subsubsection{The CFG module (new in CIL 1.3.5)} The best way to compute the CFG is with the CFG module. Just invoke \cfgref{computeFileCFG} on your file. The \moduleref{Cfg} API describes the rest of actions you can take with this module, including computing the CFG for one function at a time, or printing the CFG in \t{dot} form. \subsubsection{Simplified control flow} CIL can reduce high-level C control-flow constructs like \t{switch} and \t{continue} to lower-level \t{goto}s. 
This completely eliminates some possible classes of statements from the program and may make the result easier to analyze (e.g., it simplifies data-flow analysis). You can invoke this transformation on the command line with \t{-{}-domakeCFG} or programatically with \cilvalref{prepareCFG}. After calling Cil.prepareCFG, you can use \cilvalref{computeCFGInfo} to compute the CFG information and find the successor and predecessor of each statement. For a concrete example, you can see how \t{cilly -{}-domakeCFG} transforms the following code (note the fall-through in case 1): \begin{cilcode}[global] --domakeCFG int foo (int predicate) { int x = 0; switch (predicate) { case 0: return 111; case 1: x = x + 1; case 2: return (x+3); case 3: break; default: return 222; } return 333; } \end{cilcode} \subsection{Data flow analysis framework} The \moduleref{Dataflow} module (click for the ocamldoc) contains a parameterized framework for forward and backward data flow analyses. You provide the transfer functions and this module does the analysis. You must compute control-flow information (\secref{cfg}) before invoking the Dataflow module. \subsection{Inliner} The file ext/inliner.ml contains a function inliner. \subsection{Dominators} The module \moduleref{Dominators} contains the computation of immediate dominators. It uses the \moduleref{Dataflow} module. \subsection{Points-to Analysis} The module \t{ptranal.ml} contains two interprocedural points-to analyses for CIL: \t{Olf} and \t{Golf}. \t{Olf} is the default. (Switching from \t{olf.ml} to \t{golf.ml} requires a change in \t{Ptranal} and a recompiling \t{cilly}.) The analyses have the following characteristics: \begin{itemize} \item Not based on C types (inferred pointer relationships are sound despite most kinds of C casts) \item One level of subtyping \item One level of context sensitivity (Golf only) \item Monomorphic type structures \item Field insensitive (fields of structs are conflated) \item Demand-driven (points-to queries are solved on demand) \item Handle function pointers \end{itemize} The analysis itself is factored into two components: \t{Ptranal}, which walks over the CIL file and generates constraints, and \t{Olf} or \t{Golf}, which solve the constraints. The analysis is invoked with the function \t{Ptranal.analyze\_file: Cil.file -> unit}. This function builds the points-to graph for the CIL file and stores it internally. There is currently no facility for clearing internal state, so \t{Ptranal.analyze\_file} should only be called once. %%% Interface for querying the points-to graph... The constructed points-to graph supports several kinds of queries, including alias queries (may two expressions be aliased?) and points-to queries (to what set of locations may an expression point?). %%% Main Interface The main interface with the alias analysis is as follows: \begin{itemize} \item \t{Ptranal.may\_alias: Cil.exp -> Cil.exp -> bool}. If \t{true}, the two expressions may have the same value. \item \t{Ptranal.resolve\_lval: Cil.lval -> (Cil.varinfo list)}. Returns the list of variables to which the given left-hand value may point. \item \t{Ptranal.resolve\_exp: Cil.exp -> (Cil.varinfo list)}. Returns the list of variables to which the given expression may point. \item \t{Ptranal.resolve\_funptr: Cil.exp -> (Cil.fundec list)}. Returns the list of functions to which the given expression may point. 
\end{itemize} %%% Controlling the analysis The precision of the analysis can be customized by changing the values of several flags: \begin{itemize} \item \t{Ptranal.no\_sub: bool ref}. If \t{true}, subtyping is disabled. Associated commandline option: {\bf -{}-ptr\_unify}. \item \t{Ptranal.analyze\_mono: bool ref}. (Golf only) If \t{true}, context sensitivity is disabled and the analysis is effectively monomorphic. Commandline option: {\bf -{}-ptr\_mono}. \item \t{Ptranal.smart\_aliases: bool ref}. (Golf only) If \t{true}, ``smart'' disambiguation of aliases is enabled. Otherwise, aliases are computed by intersecting points-to sets. This is an experimental feature. \item \t{Ptranal.model\_strings: bool ref}. Make the alias analysis model string constants by treating them as pointers to chars. Commandline option: {\bf -{}-ptr\_model\_strings} \item \t{Ptranal.conservative\_undefineds: bool ref}. Make the most pessimistic assumptions about globals if an undefined function is present. Such a function can write to every global variable. Commandline option: {\bf -{}-ptr\_conservative} \end{itemize} In practice, the best precision/efficiency tradeoff is achieved by setting \t{Ptranal.no\_sub} to \t{false}, \t{Ptranal.analyze\_mono} to \t{true}, and \t{Ptranal.smart\_aliases} to \t{false}. These are the default values of the flags. %%% Debug output There are also a few flags that can be used to inspect or serialize the results of the analysis. \begin{itemize} %%\item \t{Ptranal.ptrResults}. %% Commandline option: {\bf -{}-ptr\_results}. A no-op! %% %%\item \t{Ptranal.ptrTypes}. %% Commandline option: {\bf -{}-ptr\_types}. A no-op! %% \item \t{Ptranal.debug\_may\_aliases}. Print the may-alias relationship of each pair of expressions in the program. Commandline option: {\bf -{}-ptr\_may\_aliases}. \item \t{Ptranal.print\_constraints: bool ref}. If \t{true}, the analysis will print each constraint as it is generated. \item \t{Ptranal.print\_types: bool ref}. If \t{true}, the analysis will print the inferred type of each variable in the program. If \t{Ptranal.analyze\_mono} and \t{Ptranal.no\_sub} are both \t{true}, this output is sufficient to reconstruct the points-to graph. One nice feature is that there is a pretty printer for recursive types, so the print routine does not loop. \item \t{Ptranal.compute\_results: bool ref}. If \t{true}, the analysis will print out the points-to set of each variable in the program. This will essentially serialize the points-to graph. \end{itemize} \subsection{StackGuard} The module \t{heapify.ml} contains a transformation similar to the one described in ``StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks'', {\em Proceedings of the 7th USENIX Security Conference}. In essence it modifies the program to maintain a separate stack for return addresses. Even if a buffer overrun attack occurs the actual correct return address will be taken from the special stack. Although it does work, this CIL module is provided mainly as an example of how to perform a simple source-to-source program analysis and transformation. As an optimization only functions that contain a dangerous local array make use of the special return address stack. For a concrete example, you can see how \t{cilly -{}-dostackGuard} transforms the following dangerous code: \begin{cilcode}[global] --dostackGuard int dangerous() { char array[10]; scanf("%s",array); // possible buffer overrun! 
} int main () { return dangerous(); } \end{cilcode} \subsection{Heapify} The module \t{heapify.ml} also contains a transformation that moves all dangerous local arrays to the heap. This also prevents a number of buffer overruns. For a concrete example, you can see how \t{cilly -{}-doheapify} transforms the following dangerous code: \begin{cilcode}[global] --doheapify int dangerous() { char array[10]; scanf("%s",array); // possible buffer overrun! } int main () { return dangerous(); } \end{cilcode} \subsection{One Return} The module \t{oneret.ml} contains a transformation the ensures that all function bodies have at most one return statement. This simplifies a number of analyses by providing a canonical exit-point. For a concrete example, you can see how \t{cilly -{}-dooneRet} transforms the following code: \begin{cilcode}[global] --dooneRet int foo (int predicate) { if (predicate <= 0) { return 1; } else { if (predicate > 5) return 2; return 3; } } \end{cilcode} \subsection{Partial Evaluation and Constant Folding} The \t{partial.ml} module provides a simple interprocedural partial evaluation and constant folding data-flow analysis and transformation. This transformation always requires the \t{-{}-domakeCFG} option. It performs: \begin{itemize} \item Constant folding even of compiler-dependent constants as, for example \t{sizeof(T)}. \item \t{if}-statement simplification for conditional expressions that evaluate to a constant. The \t{if}-statement gets replaced with the taken branch. \item Call elimination for \begin{enumerate} \item\label{enum:partial-empty-proc} empty functions and \item\label{enum:partial-const-func} functions that return a constant. \end{enumerate} In case~\ref{enum:partial-empty-proc} the call disappears completely and in case~\ref{enum:partial-const-func} it is replaced by the constant the function returns. \end{itemize} Several commandline options control the behavior of the feature. \begin{itemize} \item \t{-{}-partial\_no\_global\_const}: Treat global constants as unknown values. This is the default. \item \t{-{}-partial\_global\_const}: Treat global constants as initialized. Let global constants participate in the partial evaluation. \item \t{-{}-partial\_root\_function} \i{function-name}: Name of the function where the simplification starts. Default: \t{main}. \item \t{-{}-partial\_use\_easy\_alias} Use Partial's built-in easy alias to analyze pointers. This is the default. \item \t{-{}-partial\_use\_ptranal\_alias} Use feature Ptranal to analyze pointers. Setting this option requires \t{-{}-doptranal}. \end{itemize} For a concrete example, you can see how \t{cilly -{}-domakeCFG -{}-dopartial} transforms the following code (note the eliminated \t{if}-branch and the partial optimization of \t{foo}): \begin{cilcode}[global] --domakeCFG --dopartial int foo(int x, int y) { int unknown; if (unknown) return y + 2; return x + 3; } int bar(void) { return -1; } int main(void) { int a, b, c; a = foo(5, 7) + foo(6, 7) + bar(); b = 4; c = b * b; if (b > c) return b - c; else return b + c; } \end{cilcode} \subsection{Reaching Definitions} The \t{reachingdefs.ml} module uses the dataflow framework and CFG information to calculate the definitions that reach each statement. After computing the CFG (\secref{cfg}) and calling \t{computeRDs} on a function declaration, \t{ReachingDef.stmtStartData} will contain a mapping from statement IDs to data about which definitions reach each statement. 
In particular, it is a mapping from statement IDs to a triple, the first two members of which are used internally. The third member is a mapping from variable IDs to sets of integer options. If the set contains \t{Some(i)}, then the definition of that variable with ID \t{i} reaches that statement. If the set contains \t{None}, then there is a path to that statement on which there is no definition of that variable. Also, if the variable ID is unmapped at a statement, then no definition of that variable reaches that statement.

To summarize, \t{reachingdefs.ml} has the following interface:
\begin{itemize}
\item \t{computeRDs} -- Computes reaching definitions. Requires that CFG information has already been computed for each statement.
\item \t{ReachingDef.stmtStartData} -- Contains reaching definition data after \t{computeRDs} is called.
\item \t{ReachingDef.defIdStmtHash} -- Contains a mapping from definition IDs to the ID of the statement in which the definition occurs.
\item \t{getRDs} -- Takes a statement ID and returns reaching definition data for that statement.
\item \t{instrRDs} -- Takes a list of instructions and the definitions that reach the first instruction, and for each instruction calculates the definitions that reach either into or out of that instruction.
\item \t{rdVisitorClass} -- A subclass of nopCilVisitor that can be extended such that the current reaching definition data is available when expressions are visited through the \t{get\_cur\_iosh} method of the class.
\end{itemize}

\subsection{Available Expressions}

The \t{availexps.ml} module uses the dataflow framework and CFG information to calculate something similar to a traditional available expressions analysis. After \t{computeAEs} is called following a CFG calculation (\secref{cfg}), \t{AvailableExps.stmtStartData} will contain a mapping from statement IDs to data about what expressions are available at that statement. The data for each statement is a mapping from each variable ID to the whole expression available at that point (in the traditional sense) which the variable was last defined to be. So, this differs from a traditional available expressions analysis in that only whole expressions from a variable definition are considered rather than all expressions.

The interface is as follows:
\begin{itemize}
\item \t{computeAEs} -- Computes available expressions. Requires that CFG information has already been computed for each statement.
\item \t{AvailableExps.stmtStartData} -- Contains available expressions data for each statement after \t{computeAEs} has been called.
\item \t{getAEs} -- Takes a statement ID and returns available expression data for that statement.
\item \t{instrAEs} -- Takes a list of instructions and the available expressions at the first instruction, and for each instruction calculates the expressions available on entering or exiting each instruction.
\item \t{aeVisitorClass} -- A subclass of nopCilVisitor that can be extended such that the current available expressions data is available when expressions are visited through the \t{get\_cur\_eh} method of the class.
\end{itemize}

\subsection{Liveness Analysis}

The \t{liveness.ml} module uses the dataflow framework and CFG information to calculate which variables are live at each program point. After \t{computeLiveness} is called following a CFG calculation (\secref{cfg}), \t{LiveFlow.stmtStartData} will contain a mapping from each statement ID to a set of \t{varinfo}s for variables live at that program point.
The interface is as follows:
\begin{itemize}
\item \t{computeLiveness} -- Computes live variables. Requires that CFG information has already been computed for each statement.
\item \t{LiveFlow.stmtStartData} -- Contains live variable data for each statement after \t{computeLiveness} has been called.
\end{itemize}
Also included in this module is a command-line interface that will cause liveness data to be printed to standard out for a particular function or label.
\begin{itemize}
\item \t{-{}-doliveness} -- Instructs cilly to compute liveness information and to print on standard out the variables live at the points specified by \t{-{}-live\_func} and \t{-{}-live\_label}. If both are omitted, then nothing is printed.
\item \t{-{}-live\_func} -- The name of the function whose liveness data is of interest. If \t{-{}-live\_label} is omitted, then data for each statement is printed.
\item \t{-{}-live\_label} -- The name of the label at which the liveness data will be printed.
\end{itemize}

\subsection{Dead Code Elimination}

The module \t{deadcodeelim.ml} uses the reaching definitions analysis to eliminate assignment instructions whose results are not used. The interface is as follows:
\begin{itemize}
\item \t{elim\_dead\_code} -- Performs dead code elimination on a function. Requires that CFG information has already been computed (\secref{cfg}).
\item \t{dce} -- Performs dead code elimination on an entire file. Requires that CFG information has already been computed.
\end{itemize}

\subsection{Simple Memory Operations}

The \t{simplemem.ml} module allows CIL lvalues that contain memory accesses to be even further simplified via the introduction of well-typed temporaries. After this transformation all lvalues involve at most one memory reference. For a concrete example, you can see how \t{cilly -{}-dosimpleMem} transforms the following code:
\begin{cilcode}[global] --dosimpleMem
int main () {
  int ***three;
  int **two;
  ***three = **two;
}
\end{cilcode}

\subsection{Simple Three-Address Code}

The \t{simplify.ml} module further reduces the complexity of program expressions and gives you a form of three-address code. After this transformation all expressions will adhere to the following grammar:
\begin{verbatim}
 basic::= Const _
          Addrof(Var v, NoOffset)
          StartOf(Var v, NoOffset)
          Lval(Var v, off), where v is a variable whose address is not
                            taken and off contains only "basic"

 exp::= basic
        Lval(Mem basic, NoOffset)
        BinOp(bop, basic, basic)
        UnOp(uop, basic)
        CastE(t, basic)

 lval ::= Mem basic, NoOffset
          Var v, off, where v is a variable whose address is not taken
                      and off contains only "basic"
\end{verbatim}
In addition, all \t{sizeof} and \t{alignof} forms are turned into constants. Accesses to arrays and variables whose address is taken are turned into "Mem" accesses. All field and index computations are turned into address arithmetic. For a concrete example, you can see how \t{cilly -{}-dosimplify} transforms the following code:
\begin{cilcode}[global] --dosimplify
int main() {
  struct mystruct { int a; int b; } m;
  int local;
  int arr[3];
  int *ptr;
  ptr = &local;
  m.a = local + sizeof(m) + arr[2];
  return m.a;
}
\end{cilcode}

\subsection{Converting C to C++}

The module canonicalize.ml performs several transformations to correct differences between C and C++, so that the output is (hopefully) valid C++ code. This may be incomplete --- certain fixes which are necessary for some programs are not yet implemented.
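As an illustration of the kind of mismatch being corrected, the following fragment (a made-up example, not CIL output) is accepted by a C compiler but rejected by a C++ compiler, because \t{class} is a C++ keyword and C++ does not implicitly convert \t{int} to an enumerated type:
\begin{verbatim}
enum color { RED, GREEN, BLUE };

int main(void) {
  int class = 1;         /* "class" is a keyword in C++              */
  enum color c = class;  /* C++ requires an explicit cast to a color */
  return c == GREEN ? 0 : 1;
}
\end{verbatim}
The renaming of identifiers and the explicit casts described below are meant to handle exactly such cases.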
Using the \t{-{}-doCanonicalize} option with CIL will perform the following changes to your program: \begin{enumerate} \item Any variables that use C++ keywords as identifiers are renamed. \item C allows global variables to have multiple declarations and multiple (equivalent) definitions. This transformation removes all but one declaration and all but one definition. \item \t{\_\_inline} is \#defined to \t{inline}, and \t{\_\_restrict} is \#defined to nothing. \item C allows function pointers with no specified arguments to be used on any argument list. To make C++ accept this code, we insert a cast from the function pointer to a type that matches the arguments. Of course, this does nothing to guarantee that the pointer actually has that type. \item Makes casts from int to enum types explicit. (CIL changes enum constants to int constants, but doesn't use a cast.) \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Controlling CIL} In the process of converting a C file to CIL we drop the unused prototypes and even inline function definitions. This results in much smaller files. If you do not want this behavior then you must pass the \t{-{}-keepunused} argument to the CIL application. Alternatively you can put the following pragma in the code (instructing CIL to specifically keep the declarations and definitions of the function \t{func1} and variable \t{var2}, the definition of type \t{foo} and of structure \t{bar}): \begin{code} #pragma cilnoremove("func1", "var2", "type foo", "struct bar") \end{verbatim}\end{code} \section{GCC Extensions} The CIL parser handles most of the \t{gcc} \ahreftop{http://gcc.gnu.org/onlinedocs/gcc-3.0.2/gcc\_5.html#SEC67}{extensions} and compiles them to CIL. The following extensions are not handled (note that we are able to compile a large number of programs, including the Linux kernel, without encountering these): \begin{enumerate} \item Nested function definitions. \item Constructing function calls. \item Naming an expression's type. \item Complex numbers \item Hex floats \item Subscripts on non-lvalue arrays. \item Forward function parameter declarations \end{enumerate} The following extensions are handled, typically by compiling them away: \begin{enumerate} \item Attributes for functions, variables and types. In fact, we have a clear specification (see \secref{attrib}) of how attributes are interpreted. The specification extends that of \t{gcc}. \item Old-style function definitions and prototypes. These are translated to new-style. \item Locally-declared labels. As part of the translation to CIL, we generate new labels as needed. \item Labels as values and computed goto. This allows a program to take the address of a label and to manipulate it as any value and also to perform a computed goto. We compile this by assigning each label whose address is taken a small integer that acts as its address. Every computed \t{goto} in the body of the function is replaced with a \t{switch} statement. If you want to invoke the label from another function, you are on your own (the \t{gcc} documentation says the same.) \item Generalized lvalues. You can write code like \t{(a, b) += 5} and it gets translated to CIL. \item Conditionals with omitted operands. Things like \t{x ? : y} are translated to CIL. \item Double word integers. The type \t{long long} and the \t{LL} suffix on constants is understood. This is currently interpreted as 64-bit integers. \item Local arrays of variable length. 
These are converted to uses of \t{alloca}; the array variable is replaced with a pointer to the allocated array, and the instances of \t{sizeof(a)} are adjusted to return the size of the array and not the size of the pointer.
\item Non-constant local initializers. Like all local initializers these are compiled into assignments.
\item Compound literals. These are also turned into assignments.
\item Designated initializers. The CIL parser actually supports the full ISO syntax for initializers, which is more than both \t{gcc} and \t{MSVC}. I (George) think that this is the most complicated part of the C language and whoever designed it should be banned from ever designing languages again.
\item Case ranges. These are compiled into separate cases. There is no code duplication, just a larger number of \t{case} statements.
\item Transparent unions. This is a strange feature that allows you to define a function whose formal argument has a (transparent) union type, but the argument is passed as if it were the first element of the union. This is compiled away by saying that the type of the formal argument is that of the first field, and the first thing in the function body is to copy the formal into a union.
\item Inline assembly-language. The full syntax is supported and it is carried as such in CIL.
\item Function names as strings. The identifiers \t{\_\_FUNCTION\_\_} and \t{\_\_PRETTY\_FUNCTION\_\_} are replaced with string literals.
\item The keywords \t{typeof}, \t{alignof}, and \t{inline} are supported.
\end{enumerate}

\section{CIL Limitations}

There are several implementation details of CIL that might make it unusable or less than ideal for certain tasks:
\begin{itemize}
\item CIL operates after preprocessing. If you need to see comments, for example, you cannot use CIL. But you can use attributes and pragmas instead. And there is some support to help you patch the include files before they are seen by the preprocessor. For example, this is how we turn some \t{\#define}s that we don't like into function calls.
\item CIL does transform the code in a non-trivial way. This is done in order to make most analyses easier. But if you want to see the code \t{e1, e2++} exactly as it appears in the source, then you should not use CIL.
\item CIL removes all local scopes and moves all variables to function scope. It also separates a declaration with an initializer into a declaration plus an assignment. The unfortunate effect of this transformation is that local variables cannot have the \t{const} qualifier.
\end{itemize}

\section{Known Bugs and Limitations}

\subsection{Code that CIL won't compile}
\begin{itemize}
\item We do not support trigraph sequences (ISO 5.2.1.1).
\item CIL cannot parse arbitrary \t{\#pragma} directives. Their syntax must follow gcc's attribute syntax to be understood. If you need a pragma that does not follow gcc syntax, add that pragma's name to \t{no\_parse\_pragma} in \t{src/frontc/clexer.mll} to indicate that CIL should treat that pragma as a monolithic string rather than try to parse its arguments. CIL cannot parse a line containing an empty \t{\#pragma}.
\item CIL only parses \t{\#pragma} directives at the "top level", that is, outside of any enum, structure, union, or function definitions. If your compiler uses pragmas in places other than the top level, you may have to preprocess the sources in a special way (sed, perl, etc.) to remove pragmas from these locations.
\item CIL cannot parse the following code (fixing this problem would require extensive hacking of the LALR grammar):
\begin{code}
int bar(int ());     // This prototype cannot be parsed
int bar(int x());    // If you add a name to the function, it works
int bar(int (*)());  // This also works (and it is more appropriate)
\end{verbatim}\end{code}
\item CIL also cannot parse certain K\&R old-style prototypes with missing return type:
\begin{code}
g();      // This cannot be parsed
int g();  // This is Ok
\end{verbatim}\end{code}
\item CIL does not understand some obscure combinations of type specifiers (``signed'' and ``unsigned'' applied to typedefs that themselves contain a sign specification; you could argue that this should not be allowed anyway):
\begin{code}
typedef signed char __s8;
__s8 unsigned uchartest;  // This is unsigned char for gcc
\end{verbatim}\end{code}
\item CIL does not support constant-folding of floating-point values, because it is difficult to simulate the behavior of various C floating-point implementations in Ocaml. Therefore, code such as this will not compile:
\begin{code}
int globalArray[(1.0 < 2.0) ? 5 : 50]
\end{verbatim}\end{code}
\item CIL uses Ocaml ints to represent the size of an object. Therefore, it can't compute the size of any object that is larger than $2^{30}$ bits (134 MB) on 32-bit computers, or $2^{62}$ bits on 64-bit computers.
\end{itemize}

\subsection{Code that behaves differently under CIL}
\begin{itemize}
\item GCC has a strange feature called ``extern inline''. Such a function can be defined twice: first with the ``extern inline'' specifier and the second time without it. If optimizations are turned off then the ``extern inline'' definition is considered a prototype (its body is ignored). If optimizations are turned on then the extern inline function is inlined at all of its occurrences from the point of its definition all the way to the point where the (optional) second definition appears. No body is generated for an extern inline function. A body is generated for the real definition and that one is used in the rest of the file. CIL will assume optimizations are on, and rename your extern inline function (and its uses) with the suffix \t{\_\_extinline}. This means that if you have two such definitions that do different things, and the optimizations are not on, then the CIL version might compute a different answer! Also, if you have multiple extern inline declarations then CIL will ignore all but the first one. This is not so bad because GCC itself would not like it.
\item The implementation of \t{bitsSizeOf} does not take into account the packing pragmas. However, it was tested to be accurate on cygwin/gcc-2.95.3, Linux/gcc-2.95.3 and on Windows/MSVC.
\item \t{-malign-double} is ignored.
\item The statement \t{x = 3 + x ++} will perform the increment of \t{x} before the assignment, while \t{gcc} delays the increment until after the assignment. It turned out that this behavior is much easier to implement than gcc's, and either way is correct (since the behavior is unspecified in this case). Similarly, if you write \t{x = x ++;} then CIL will perform the increment before the assignment, whereas GCC and MSVC will perform it after the assignment.
\item Because CIL uses 64-bit floating point numbers in its internal representation of floating point numbers, \t{long double} constants are parsed as if they were \t{double} constants.
\end{itemize} \subsection{Effects of the CIL translation} \begin{itemize} \item CIL cleans up C code in various ways that may suppress compiler warnings. For example, CIL will add casts where they are needed while \t{gcc} might print a warning for the missing cast. It is not a goal of CIL to emit such warnings --- we support several versions of several different compilers, and mimicking the warnings of each is simply not possible. If you want to see compiler warnings, compile your program with your favorite compiler before using CIL. \item When you use variable-length arrays, CIL turns them into calls to \t{alloca}. This means that they are deallocated when the function returns and not when the local scope ends. Variable-length arrays are not supported as fields of a struct or union. \item In the new versions of \t{glibc} there is a function \t{\_\_builtin\_va\_arg} that takes a type as its second argument. CIL handles that through a slight trick. As it parses the function it changes a call like: \begin{verbatim} mytype x = __builtin_va_arg(marker, mytype) \end{verbatim} into \begin{verbatim} mytype x; __builtin_va_arg(marker, sizeof(mytype), &x); \end{verbatim} The latter form is used internally in CIL. However, the CIL pretty printer will try to emit the original code. Similarly, \t{\_\_builtin\_types\_compatible\_p(t1, t2)}, which takes types as arguments, is represented internally as \t{\_\_builtin\_types\_compatible\_p(sizeof t1, sizeof t2)}, but the sizeofs are removed when printing. \end{itemize} \section{Using the merger}\label{sec-merger}\cutname{merger.html} There are many program analyses that are more effective when done on the whole program. The merger is a tool that combines all of the C source files in a project into a single C file. There are two tasks that a merger must perform: \begin{enumerate} \item Detect what are all the sources that make a project and with what compiler arguments they are compiled. \item Merge all of the source files into a single file. \end{enumerate} For the first task the merger impersonates a compiler and a linker (both a GCC and a Microsoft Visual C mode are supported) and it expects to be invoked (from a build script or a Makefile) on all sources of the project. When invoked to compile a source the merger just preprocesses the source and saves the result using the name of the requested object file. By preprocessing at this time the merger is able to take into account variations in the command line arguments that affect preprocessing of different source files. When the merger is invoked to link a number of object files it collects the preprocessed sources that were stored with the names of the object files, and invokes the merger proper. Note that arguments that affect the compilation or linking must be the same for all source files. For the second task, the merger essentially concatenates the preprocessed sources with care to rename conflicting file-local declarations (we call this process alpha-conversion of a file). The merger also attempts to remove duplicate global declarations and definitions. Specifically the following actions are taken: \begin{itemize} \item File-scope names (\t{static} globals, names of types defined with \t{typedef}, and structure/union/enumeration tags) are given new names if they conflict with declarations from previously processed sources. The new name is formed by appending the suffix \t{\_\_\_n}, where \t{n} is a unique integer identifier. Then the new names are applied to their occurrences in the file. 
\item Non-static declarations and definitions of globals are never renamed. But we try to remove duplicate ones. Equality of globals is detected by comparing the printed form of the global (ignoring the line number directives) after the body has been alpha-converted. This process is intended to remove those declarations (e.g. function prototypes) that originate from the same include file. Similarly, we try to eliminate duplicate definitions of \t{inline} functions, since these occasionally appear in include files.
\item The types of all global declarations with the same name from all files are compared for type isomorphism. During this process, the merger detects all those isomorphisms between structures and type definitions that are {\bf required} for the merged program to be legal. Such structure tags and typenames are coalesced and given the same name.
\item Besides the structure tags and type names that are required to be isomorphic, the merger also tries to coalesce definitions of structures and types with the same name from different files. However, in this case the merger will not give an error if such definitions are not isomorphic; it will just use different names for them.
\item In rare situations, it can happen that a file-local global is encountered first and it is not renamed, only to discover later when processing another file that there is an external symbol with the same name. In this case, a second pass is made over the merged file to rename the file-local symbol.
\end{itemize}
Here is an example of using the merger: The contents of \t{file1.c} are:
\begin{code}
struct foo;  // Forward declaration
extern struct foo *global;
\end{verbatim}\end{code}
The contents of \t{file2.c} are:
\begin{code}
struct bar {
  int x;
  struct bar *next;
};
extern struct bar *global;
struct foo {
  int y;
};
extern struct foo another;
void main() {
}
\end{verbatim}\end{code}
There are several ways in which one might create an executable from these files:
\begin{itemize}
\item
\begin{verbatim}
gcc file1.c file2.c -o a.out
\end{verbatim}
\item
\begin{verbatim}
gcc -c file1.c -o file1.o
gcc -c file2.c -o file2.o
ld file1.o file2.o -o a.out
\end{verbatim}
\item
\begin{verbatim}
gcc -c file1.c -o file1.o
gcc -c file2.c -o file2.o
ar r libfile2.a file2.o
gcc file1.o libfile2.a -o a.out
\end{verbatim}
\item
\begin{verbatim}
gcc -c file1.c -o file1.o
gcc -c file2.c -o file2.o
ar r libfile2.a file2.o
gcc file1.o -lfile2 -o a.out
\end{verbatim}
\end{itemize}
In each of the cases above you must replace all occurrences of \t{gcc} and \t{ld} with \t{cilly -{}-merge}, and all occurrences of \t{ar} with \t{cilly -{}-merge -{}-mode=AR}. It is very important that the \t{-{}-merge} flag be used throughout the build process. If you want to see the merged source file you must also pass the \t{-{}-keepmerged} flag to the linking phase. The result of merging file1.c and file2.c is:
\begin{code}
// from file1.c
struct foo;  // Forward declaration
extern struct foo *global;

// from file2.c
struct foo {
  int x;
  struct foo *next;
};
struct foo___1 {
  int y;
};
extern struct foo___1 another;
\end{verbatim}\end{code}

\section{Using the patcher}\label{sec-patcher}\cutname{patcher.html}

Occasionally we have needed to modify the standard include files slightly. So, we developed a simple mechanism that allows us to create modified copies of the include files and use them instead of the standard ones.
For this purpose we specify a patch file and we run a program called Patcher which makes modified copies of include files and applies the patch. The patcher is invoked as follows:
\begin{verbatim}
bin/patcher [options]

Options:
  --help          Prints this help message
  --verbose       Prints a lot of information about what is being done
  --mode=xxx      What tool to emulate:
                     GNUCC  - GNU CC
                     MSVC   - MS VC cl compiler
  --dest=xxx      The destination directory. Will make one if it does not exist
  --patch=xxx     Patch file (can be specified multiple times)
  --ppargs=xxx    An argument to be passed to the preprocessor (can be
                  specified multiple times)
  --ufile=xxx     A user-include file to be patched (treated as #include "xxx")
  --sfile=xxx     A system-include file to be patched (treated as #include <xxx>)
  --clean         Remove all files in the destination directory
  --dumpversion   Print the version name used for the current compiler

All of the other arguments are passed to the preprocessor. You should pass
enough arguments (e.g., include directories) so that the patcher can find the
right include files to be patched.
\end{verbatim}
Based on the given \t{mode} and the current version of the compiler (which the patcher can print when given the \t{dumpversion} argument), the patcher will create a subdirectory of the \t{dest} directory (say \t{/usr/home/necula/cil/include}), such as:
\begin{verbatim}
/usr/home/necula/cil/include/gcc_2.95.3-5
\end{verbatim}
In that directory the patcher will copy the modified versions of the include files specified with the \t{ufile} and \t{sfile} options. Each of these options can be specified multiple times.

The patch file (specified with the \t{patch} option) has a format inspired by the Unix \t{patch} tool. The file has the following grammar:
\begin{verbatim}
<<< flags
patterns
===
replacement
>>>
\end{verbatim}
The flags are a comma-separated, case-sensitive sequence of keywords or keyword = value pairs. The following flags are supported:
\begin{itemize}
\item \t{file=foo.h} - will only apply the patch on files whose name is \t{foo.h}.
\item \t{optional} - this means that it is Ok if the current patch does not match any of the processed files.
\item \t{group=foo} - will add this patch to the named group. If this is not specified then a unique group is created to contain just the current patch. When all files specified in the command line have been patched, an error message is generated for all groups for which no member patch was used. We use this mechanism to receive notice when the patch triggers are outdated with respect to the new include files.
\item \t{system=sysname} - will only consider this pattern on a given operating system. The ``sysname'' is reported by the ``\$\^O'' variable in Perl, except that Windows is always considered to have sysname ``cygwin.'' For Linux use ``linux'' (capitalization matters).
\item \t{ateof} - In this case the patterns are ignored and the replacement text is placed at the end of the patched file. Use the \t{file} flag if you want to restrict the files in which this replacement is performed.
\item \t{atsof} - The patterns are ignored and the replacement text is placed at the start of the patched file. Use the \t{file} flag to restrict the application of this patch to a certain file.
\item \t{disabled} - Use this flag if you want to disable the pattern.
\end{itemize}
The patterns can consist of several groups of lines separated by the \t{|||} marker.
Each of these groups of lines is a multi-line pattern that, if found in the file, will be replaced with the text given at the end of the block. The matching is space-insensitive. All of the markers \t{<<<}, \t{|||}, \t{===} and \t{>>>} must appear at the beginning of a line but they can be followed by arbitrary text (which is ignored). The replacement text can contain the special keyword \t{@\_\_pattern\_\_@}, which is substituted with the pattern that matched.

\section{Debugging support}\label{sec-debugger}

Most of the time we debug our code using the Errormsg module along with the pretty printer. But if you want to use the Ocaml debugger here is an easy way to do it. Say that you want to debug the invocation of cilly that arises out of the following command:
\begin{verbatim}
cilly -c hello.c
\end{verbatim}
You must follow the installation \ahref{../ccured/setup.html}{instructions} to install the Elist support files for ocaml and to extend your .emacs appropriately. Then from within Emacs you do
\begin{verbatim}
ALT-X my-camldebug
\end{verbatim}
This will ask you for the command to use for running the Ocaml debugger (initially the default will be ``ocamldebug'' or the last command you introduced). You should use the following command:
\begin{verbatim}
cilly --ocamldebug -c hello.c
\end{verbatim}
This will run \t{cilly} as usual and invoke the Ocaml debugger when the cilly engine starts. The advantage of this way of invoking the debugger is that the directory search paths are set automatically and the right set of arguments is passed to the debugger.

\section{Who Says C is Simple?}\label{sec-simplec}

When I (George) started to write CIL I thought it was going to take two weeks. Exactly a year has passed since then and I am still fixing bugs in it. This gross underestimate was due to the fact that I thought parsing and making sense of C is simple. You probably think the same. What I did not expect was how many dark corners this language has, especially if you want to parse real-world programs such as those written for GCC or if you are more ambitious and you want to parse the Linux or Windows NT sources (both of these were written without any respect for the standard and with the expectation that compilers will be changed to accommodate the program). The following examples were actually encountered either in real programs or are taken from the ISO C99 standard or from GCC's testcases. My first reaction when I saw these was: {\em Is this C?}. The second one was: {\em What the hell does it mean?}. If you are contemplating doing program analysis for C on abstract-syntax trees then your analysis ought to be able to handle these things. Or, you can use CIL and let CIL translate them into clean C code.
%
% Note: the cilcode environment is bogus. You should preprocess this source
% with cilcode.pl !!!
%
%
\subsection{Standard C}
\begin{enumerate}
\item Why does the following code return 0 for most values of \t{x}? (This should be easy.)
\begin{cilcode}[local]
int x;
return x == (1 && x);
\end{cilcode}
\item Why does the following code return 0 and not -1? (Answer: because \t{sizeof} is unsigned, thus the result of the subtraction is unsigned, thus the shift is logical.)
\begin{cilcode}[local]
return ((1 - sizeof(int)) >> 32);
\end{cilcode}
\item Scoping rules can be tricky. This function returns 5.
\begin{cilcode}[global]
int x = 5;
int f() {
  int x = 3;
  {
    extern int x;
    return x;
  }
}
\end{cilcode}
\item Functions and function pointers are implicitly converted to each other.
\begin{cilcode}[global]
int (*pf)(void);
int f(void) {
  pf = &f;                // This looks ok
  pf = ***f;              // Dereference a function?
  pf();                   // Invoke a function pointer?
  (****pf)();             // Looks strange but Ok
  (***************f)();   // Also Ok
}
\end{cilcode}
\item Initializers with designators are one of the hardest parts of ISO C. Neither MSVC nor GCC implements them fully. GCC comes close though. What are the final values of \t{i.nested.y} and \t{i.nested.z}? (Answer: 2 and 6, respectively.)
\begin{cilcode}[global]
struct {
  int x;
  struct {
    int y, z;
  } nested;
} i = { .nested.y = 5, 6, .x = 1, 2 };
\end{cilcode}
\item This is from c-torture. This function returns 1.
\begin{cilcode}[global]
typedef struct {
  char *key;
  char *value;
} T1;
typedef struct {
  long type;
  char *value;
} T3;
T1 a[] = { { "", ((char *)&((T3) {1, (char *) 1})) } };
int main() {
  T3 *pt3 = (T3*)a[0].value;
  return pt3->value;
}
\end{cilcode}
\item Another one with constructed literals. This one is legal according to the GCC documentation but somehow GCC chokes on it (it works in CIL though). This code returns 2.
\begin{cilcode}[local]
return ((int []){1,2,3,4})[1];
\end{cilcode}
\item In the example below there is one copy of ``bar'' and two copies of ``pbar'' (static prototypes at block scope have file scope, while for all other types they have block scope).
\begin{cilcode}[global]
int foo() {
  static bar();
  static (*pbar)() = bar;
}
static bar() {
  return 1;
}
static (*pbar)() = 0;
\end{cilcode}
\item Two years after heavy use of CIL, by us and others, I discovered a bug in the parser. The return value of the following function depends on what precedence you give to casts and unary minus:
\begin{cilcode}[global]
unsigned long foo() {
  return (unsigned long) - 1 / 8;
}
\end{cilcode}
The correct interpretation is \t{((unsigned long) - 1) / 8}, which is a relatively large number, as opposed to \t{(unsigned long) (- 1 / 8)}, which is 0.
\end{enumerate}

\subsection{GCC ugliness}\label{sec-ugly-gcc}
\begin{enumerate}
\item GCC has generalized lvalues. You can take the address of a lot of strange things:
\begin{cilcode}[local]
int x, y, z;
return &(x ? y : z) - & (x++, x);
\end{cilcode}
\item GCC lets you omit the second component of a conditional expression.
\begin{cilcode}[local]
extern int f();
return f() ? : -1;   // Returns the result of f unless it is 0
\end{cilcode}
\item Computed jumps can be tricky. CIL compiles them away in a fairly clean way but you are on your own if you try to jump into another function this way.
\begin{cilcode}[global]
static void *jtab[2]; // A jump table
static int doit(int x){
  static int jtab_init = 0;
  if(!jtab_init) {
    // Initialize the jump table
    jtab[0] = &&lbl1;
    jtab[1] = &&lbl2;
    jtab_init = 1;
  }
  goto *jtab[x]; // Jump through the table
 lbl1:
  return 0;
 lbl2:
  return 1;
}
int main(void){
  if (doit(0) != 0) exit(1);
  if (doit(1) != 1) exit(1);
  exit(0);
}
\end{cilcode}
\item A cute little example that we made up. What is the returned value? (Answer: 1.)
\begin{cilcode}[local]
return ({goto L; 0;}) && ({L: 5;});
\end{cilcode}
\item \t{extern inline} is a strange feature of GNU C. Can you guess what the following code computes?
\begin{cilcode}[global]
extern inline foo(void) { return 1; }
int firstuse(void) { return foo(); }

// A second, incompatible definition of foo
int foo(void) { return 2; }
int main() {
  return foo() + firstuse();
}
\end{cilcode}
The answer depends on whether the optimizations are turned on. If they are then the answer is 3 (the first definition is inlined at all occurrences until the second definition).
If the optimizations are off, then the first definition is ignored (treated like a prototype) and the answer is 4. CIL will misbehave on this example if the optimizations are turned off (it always returns 3).
\item GCC allows you to cast an object of a type T into a union as long as the union has a field of that type:
\begin{cilcode}[global]
union u {
  int i;
  struct s {
    int i1, i2;
  } s;
};
union u x = (union u)6;
int main() {
  struct s y = {1, 2};
  union u z = (union u)y;
}
\end{cilcode}
\item GCC allows you to use the \t{\_\_mode\_\_} attribute to specify the size of the integer instead of the standard \t{char}, \t{short} and so on:
\begin{cilcode}[global]
int __attribute__ ((__mode__ ( __QI__ ))) i8;
int __attribute__ ((__mode__ ( __HI__ ))) i16;
int __attribute__ ((__mode__ ( __SI__ ))) i32;
int __attribute__ ((__mode__ ( __DI__ ))) i64;
\end{cilcode}
\item The ``alias'' attribute on a function declaration tells the linker to treat this declaration as another name for the specified function. CIL will replace the declaration with a trampoline function pointing to the specified target.
\begin{cilcode}[global]
static int bar(int x, char y) {
  return x + y;
}
// foo is considered another name for bar.
int foo(int x, char y) __attribute__((alias("bar")));
\end{cilcode}
\end{enumerate}

\subsection{Microsoft VC ugliness}

This compiler has few extensions, so there is not much to say here.
\begin{enumerate}
\item Why does the following code return 0 and not -1? (Answer: because of a bug in Microsoft Visual C. It thinks that the shift is unsigned just because the second operand is unsigned. CIL reproduces this bug when in MSVC mode.)
\begin{code}
return -3 >> (8 * sizeof(int));
\end{verbatim}\end{code}
\item Unnamed fields in a structure seem really strange at first. It seems that Microsoft Visual C introduced this extension, then GCC picked it up (but in the process implemented it wrongly: in GCC the field \t{y} overlaps with \t{x}!).
\begin{cilcode}[local]
struct {
  int x;
  struct {
    int y, z;
    struct {
      int u, v;
    };
  };
} a;
return a.x + a.y + a.z + a.u + a.v;
\end{cilcode}
\end{enumerate}

\section{Authors}

The CIL parser was developed starting from Hugues Casse's \t{frontc} front-end for C, although all the files from the \t{frontc} distribution have been changed very extensively. The intermediate language and the elaboration stage are all written from scratch. The main author is \ahref{mailto:[email protected]}{George Necula}, with significant contributions from \ahref{mailto:[email protected]}{Scott McPeak}, \ahref{mailto:[email protected]}{Westley Weimer}, \ahref{mailto:[email protected]}{Ben Liblit}, \ahreftop{http://www.cs.berkeley.edu/\~{}matth/}{Matt Harren}, Raymond To and Aman Bhargava. This work is based upon work supported in part by the National Science Foundation under Grants No. 9875171, 0085949 and 0081588, and gifts from Microsoft Research. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the other sponsors.

\section{License}

Copyright (c) 2001-2007,
\begin{itemize}
\item George C. Necula <[email protected]>
\item Scott McPeak <[email protected]>
\item Wes Weimer <[email protected]>
\item Ben Liblit <[email protected]>
\item Matt Harren <[email protected]>
\end{itemize}
All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1.
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. \section{Bug reports} We are certain that there are still some remaining bugs in CIL. If you find one please file a bug report in our Source Forge space \ahreftop{http://sourceforge.net/projects/cil} {http://sourceforge.net/projects/cil}. You can find there the latest announcements, a source distribution, bug report submission instructions and a mailing list: cil-users[at sign]lists.sourceforge.net. Please use this list to ask questions about CIL, as it will ensure your message is viewed by a broad audience. \section{Changes}\label{sec-changes}\cutname{changes.html} \begin{itemize} \item {\bf February 14, 2008}: Fixed a bug in temporary file creation. Thanks to J. Aaron Pendergrass for the patch. \item {\bf November 30, 2007}: Fixed a bbug in assignment to lvalues that depend on themselves. \item {\bf April 4, 2007}: Benjamin Monate fixed a bug in \moduleref{Cfg} for empty loop bodies. \item {\bf March 29, 2007}: Polyvios Pratikakis fixed a bug in \t{src/ext/pta/uref.ml}. \item {\bf March 15, 2007}: Added support for \t{\_\_attribute\_\_((aligned))} and \t{\_\_attribute\_\_((packed))}. \item {\bf March 7, 2007}: \t{typeOf(StartOf \_)} now preserves the attributes of the array. \item {\bf February 22, 2007}: Added an inliner (ext/inliner.ml) \item {\bf February 21, 2007}: We now constant-fold bitfield expressions. Thanks to Virgile Prevosto for the patch. \item {\bf February 13, 2007}: gcc preprocessor arguments passed using \t{-Wp} are now used only during initial preproccessing, not for the preprocessing after CIL. This fixes problems in the Linux makefiles with dependency generation. \item {\bf February 6, 2007}: Fixed \cilvalref{parseInt} for non-32 bit architectures. \item {\bf February 5, 2007}: {\bf Released version 1.3.6} (Subversion revision 9211) \item {\bf February 2, 2007}: Improved the way CIL gets configured for the actual definitions of \t{size\_t} and \t{wchar\_t}. \item {\bf February 1, 2007}: Fixed the parser to support the unused attribute on labels. For now, we just drop this attribute since Rmtmps will remove unused labels anyways. Thanks to Peter Hawkins for the patch. \item {\bf January 18, 2007}: Require the destination of a \t{Call} to have the same type as the function's return type, even if it means inserting a temporary. 
To get the old behavior, set \t{Cabs2cil.doCollapseCallCast} to true as described in \secref{cilly-internal-options}. \item {\bf January 17, 2007}: Fix for \t{\_\_builtin\_offsetof} when the field name is also a typedef name. \item {\bf January 17, 2007}: Fixed \cilvalref{loadBinaryFile} (Sourceforge bug \#1548894). You should only use loadBinaryFile if no other code has been loaded or generated in the current CIL process, since loadBinaryFile needs to load some global state. \item {\bf December 18, 2006}: The \t{-{}-stats} flag now gets the CPU speed at runtime rather than configure-time, so binary executables can be moved to different computers. \item {\bf December 14, 2006}: Fixed various warnings and errors on 64-bit architectures. \item {\bf November 26, 2006}: Christoph Spiel added ``\t{-{}-no}'' options to many of CIL's command-line flags. \item {\bf November 21, 2006}: Merged \t{gccBuiltins} and \t{msvcBuiltins} into a single table \cilvalref{builtinFunctions} that is initialized by \cilvalref{initCIL}. \item {\bf October 28, 2006}: Added the field \t{vdescr} to the \ciltyperef{varinfo} struct to remember what value is stored in certain CIL-introduced temporary variables. For example, if CIL adds a temporary to store the result of \t{foo(a,b)}, then the description will be ``foo(a,b)''. The new printer \cilvalref{descriptiveCilPrinter} will substitute descriptions for the names of temporaries. The result is not necessarily valid C, but it may let you produce more helpful error messages in your analysis tools: ``The value foo(a,b) may be tainted'' vs. ``The value \_\_cil\_tmp29 may be tainted.'' \item {\bf October 27, 2006}: Fixed a bug with duplicate entries in the statement list of Switch nodes, and forbade duplicate \t{default} cases. %% October 26, 2006: Moved the CIL source repository from CVS %% to Subversion. Subversion revision 8603 is the switchover point. \item {\bf October 12, 2006}: Added a new function \cilvalref{expToAttrParam} that attempts to convert an expression into a attribute parameter. \item {\bf October 12, 2006}: Added an attribute with the length of the array, when array types of formal arguments are converted to pointer types. \item {\bf September 29, 2006}: Benjamin Monate fixed a bug in compound local initializers that was causing duplicate code to be added. \item {\bf August 9, 2006}: Changed the patcher to print ``\t{\#line nnn}'' directives instead of ``\t{\# nnn}''. \item {\bf August 6, 2006}: Joseph Koshy patched \t{./configure} for FreeBSD on amd64. \item {\bf July 27, 2006}: CIL files now include the prototypes of builtin functions (such as \t{\_\_builtin\_va\_arg}). This preserves the invariant that every function call has a corresponding function or function prototype in the file. However, the prototypes of builtins are not printed in the output files. \item {\bf July 23, 2006}: Incorporated some fixes for the constant folding for lvalues, and fixed grammatical errors. Thanks to Christian Stork. \item {\bf July 23, 2006}: Changed the way ./configure works. We now generate the file Makefile.features to record the configuration features. This is because autoconf does not work properly with multiline substitutions. \item {\bf July 21, 2006}: Cleaned up the printing of some Lvals. Things that were printed as ``(*i)'' before are now printed simply as ``*i'' (no parentheses). However, this means that when you use pLval to print lvalues inside expressions, you must take care about parentheses yourself. Thanks to Benjamin Monate for pointing this out. 
\item {\bf July 21, 2006}: Added new hooks to the Usedef and Dataflow.BackwardsTransfer APIs. Code that uses these will need to be changed slightly. Also, updated the \moduleref{Cfg} code to handle noreturn functions. \item {\bf July 17, 2006}: Fix parsing of attributes on bitfields and empty attribute lists. Thanks to Peter Hawkins. \item {\bf July 10, 2006}: Fix Makefile problem for FreeBSD. Thanks to Joseph Koshy for the patch. \item {\bf June 25, 2006}: Extended the inline assembly to support named arguments, as added in gcc 3.0. This changes the types of the input and output lists from ``\t{(string * lval) list}'' to ``\t{(string option * string * lval) list}''. Some existing code will need to be modified accordingly. \item {\bf June 11, 2006}: Removed the function \t{Cil.foldLeftCompoundAll}. Use instead \cilvalref{foldLeftCompound} with \t{~implicit:true}. \item {\bf June 9, 2006}: Extended the definition of the cilVisitor for initializers to pass more information around. This might result in backward incompatibilities with code that uses the visitor for initializers. \item {\bf June 2, 2006}: Added \t{-{}-commPrintLnSparse} flag. \item {\bf June 1, 2006}: Christian Stork provided some fixes for the handling of variable argument functions. \item {\bf June 1, 2006}: Added support for x86 performance counters on 64-bit processors. Thanks to tbergen for the patch. \item {\bf May 23, 2006}: Benjamin Monate fixed a lexer bug when a preprocessed file is missing a final newline. \item {\bf May 23, 2006}: Fix for \t{typeof($e$)} when $e$ has type \t{void}. \item {\bf May 20, 2006}: {\bf Released version 1.3.5} (Subversion revision 8093) \item {\bf May 19, 2006}: \t{Makefile.cil.in}/\t{Makefile.cil} have been renamed \t{Makefile.in}/\t{Makefile}. And \t{maincil.ml} has been renamed \t{main.ml}. \item {\bf May 18, 2006}: Added a new module \moduleref{Cfg} to compute the control-flow graph. Unlike the older \cilvalref{computeCFGInfo}, the new version does not modify the code. \item {\bf May 18, 2006}: Added several new analyses: reaching definitions, available expressions, liveness analysis, and dead code elimination. See \secref{Extension}. \item {\bf May 2, 2006}: Added a flag \t{-{}-noInsertImplicitCasts}. When this flag is used, CIL code will only include casts inserted by the programmer. Implicit coercions are not changed to explicit casts. \item {\bf April 16, 2006}: Minor improvements to the \t{-{}-stats} flag (\secref{cilly-asm-options}). We now use Pentium performance counters by default, if your processor supports them. \item {\bf April 10, 2006}: Extended \t{machdep.c} to support microcontroller compilers where the struct alignment of integer types does not match the size of the type. Thanks to Nathan Cooprider for the patch. \item {\bf April 6, 2006}: Fix for global initializers of unions when the union field being initialized is not the first one, and for missing initializers of unions when the first field is not the largest field. \item {\bf April 6, 2006}: Fix for bitfields in the SFI module. \item {\bf April 6, 2006}: Various fixes for gcc attributes. \t{packed}, \t{section}, and \t{always\_inline} attributes are now parsed correctly. Also fixed printing of attributes on enum types. \item {\bf March 30, 2006}: Fix for \t{rmtemps.ml}, which deletes unused inline functions. When in \t{gcc} mode CIL now leaves all inline functions in place, since \t{gcc} treats these as externally visible. 
\item {\bf March 3, 2006}: Assume inline assembly instructions can fall through for the purposes of adding return statements. Thanks to Nathan Cooprider for the patch. \item {\bf February 27, 2006}: Fix for extern inline functions when the output of CIL is fed back into CIL. \item {\bf January 30, 2006}: Fix parsing of \t{switch} without braces. \item {\bf January 30, 2006}: Allow `\$' to appear in identifiers. \item {\bf January 13, 2006}: Added support for gcc's alias attribute on functions. See \secref{ugly-gcc}, item 8. \item {\bf December 9, 2005}: Christoph Spiel fixed the Golf and Olf modules so that Golf can be used with the points-to analysis. He also added performance fixes and cleaned up the documentation. \item {\bf December 1, 2005}: Major rewrite of the ext/callgraph module. \item {\bf December 1, 2005}: Preserve enumeration constants in CIL. Default is the old behavior to replace them with integers. \item {\bf November 30, 2005}: Added support for many GCC \t{\_\_builtin} functions. \item {\bf November 30, 2005}: Added the EXTRAFEATURES configure option, making it easier to add Features to the build process. \item {\bf November 23, 2005}: In MSVC mode do not remove any locals whose name appears as a substring in an inline assembly. \item {\bf November 23, 2005}: Do not add a return to functions that have the noreturn attribute. \item {\bf November 22, 2005}: {\bf Released version 1.3.4} \item {\bf November 21, 2005}: Performance and correctness fixes for the Points-to Analysis module. Thanks to Christoph Spiel for the patches. \item {\bf October 5, 2005}: CIL now builds on SPARC/Solaris. Thanks to Nick Petroni and Remco van Engelen for the patches. \item {\bf September 26, 2005}: CIL no longer uses the `\t{-I-}' flag by default when preprocessing with gcc. \item {\bf August 24, 2005}: Added a command-line option ``-{}-forceRLArgEval'' that forces function arguments to be evaluated right-to-left. This is the default behavior in unoptimized gcc and MSVC, but the order of evaluation is undefined when using optimizations, unless you apply this CIL transformation. This flag does not affect the order of evaluation of e.g. binary operators, which remains undefined. Thanks to Nathan Cooprider for the patch. \item {\bf August 9, 2005}: Fixed merging when there are more than 20 input files. \item {\bf August 3, 2005}: When merging, it is now an error to declare the same global variable twice with different initializers. \item {\bf July 27, 2005}: Fixed bug in transparent unions. \item {\bf July 27, 2005}: Fixed bug in collectInitializer. Thanks to Benjamin Monate for the patch. \item {\bf July 26, 2005}: Better support for extended inline assembly in gcc. \item {\bf July 26, 2005}: Added many more gcc \_\_builtin* functions to CIL. Most are treated as Call instructions, but a few are translated into expressions so that they can be used in global initializers. For example, ``\t{\_\_builtin\_offsetof(t, field)}'' is rewritten as ``\t{\&((t*)0)->field}'', the traditional way of calculating an offset. \item {\bf July 18, 2005}: Fixed bug in the constant folding of shifts when the second argument was negative or too large. \item {\bf July 18, 2005}: Fixed bug where casts were not always inserted in function calls. \item {\bf June 10, 2005}: Fixed bug in the code that makes implicit returns explicit. We weren't handling switch blocks correctly. \item {\bf June 1, 2005}: {\bf Released version 1.3.3} \item {\bf May 31, 2005}: Fixed handling of noreturn attribute for function pointers. 
\item {\bf May 30, 2005}: Fixed bugs in the handling of constructors in gcc.
\item {\bf May 30, 2005}: Fixed bugs in the generation of global variable IDs.
\item {\bf May 27, 2005}: Reimplemented the translation of function calls so that we can intercept some builtins. This is important for the uses of \_\_builtin\_constant\_p in constants.
\item {\bf May 27, 2005}: Export the plainCilPrinter, for debugging.
\item {\bf May 27, 2005}: Fixed bug with printing of const attribute for arrays.
\item {\bf May 27, 2005}: Fixed bug in generation of type signatures. Now they should not contain expressions anymore, so you can use structural equality. This used to lead to Out\_of\_Memory exceptions.
\item {\bf May 27, 2005}: Fixed bug in type comparisons using TBuiltin\_va\_list.
\item {\bf May 27, 2005}: Improved the constant folding in array lengths and case expressions.
\item {\bf May 27, 2005}: Added the \t{\_\_builtin\_frame\_address} to the set of gcc builtins.
\item {\bf May 27, 2005}: Added the CIL project to SourceForge.
\item {\bf April 23, 2005}: The cattr field was not visited.
\item {\bf March 6, 2005}: Debian packaging support.
\item {\bf February 16, 2005}: Merger fixes.
\item {\bf February 11, 2005}: Fixed a bug in \t{-{}-dopartial}. Thanks to Nathan Cooprider for this fix.
\item {\bf January 31, 2005}: Make sure the input file is closed even if a parsing error is encountered.
\item {\bf January 11, 2005}: {\bf Released version 1.3.2}
\item {\bf January 11, 2005}: Fixed printing of integer constants whose integer kind is shorter than an int.
\item {\bf January 11, 2005}: Added checks for negative size arrays and arrays too big.
\item {\bf January 10, 2005}: Added support for GCC attribute ``volatile'' for functions (as a synonym for noreturn).
\item {\bf January 10, 2005}: Improved the comparison of array sizes when comparing array types.
\item {\bf January 10, 2005}: Fixed handling of shell metacharacters in the cilly command line.
\item {\bf January 10, 2005}: Fixed dropping of cast in initialization of local variable with the result of a function call.
\item {\bf January 10, 2005}: Fixed some structural comparisons that were broken in Ocaml 3.08.
\item {\bf January 10, 2005}: Fixed the \t{unrollType} function to not forget attributes.
\item {\bf January 10, 2005}: Better keeping track of locations of function prototypes and definitions.
\item {\bf January 10, 2005}: Fixed bug with the expansion of enumeration constants in attributes.
\item {\bf October 18, 2004}: Fixed a bug in cabsvisit.ml. CIL would wrap a BLOCK around a single atom unnecessarily.
\item {\bf August 7, 2004}: {\bf Released version 1.3.1}
\item {\bf August 4, 2004}: Fixed a bug in splitting of structs using \t{-{}-dosimplify}.
\item {\bf July 29, 2004}: Minor changes to the type typeSig (type signatures) to ensure that they do not contain types, so that you can do structural comparison without danger of nontermination.
\item {\bf July 28, 2004}: Ocaml version 3.08 is required. Numerous small changes while porting to Ocaml 3.08.
\item {\bf July 7, 2004}: {\bf Released version 1.2.6}
\item {\bf July 2, 2004}: Character constants such as \t{'c'} should have type \t{int}, not \t{char}. Added a utility function \t{Cil.charConstToInt} that sign-extends chars greater than 128, if needed.
\item {\bf July 2, 2004}: Fixed a bug that was casting values to int before applying the logical negation operator !. This caused problems for floats, and for integer types bigger than \t{int}.
\item {\bf June 13, 2004}: Added the field \t{sallstmts} to a function description, to hold all statements in the function.
\item {\bf June 13, 2004}: Added new extensions for data flow analyses, and for computing dominators.
\item {\bf June 10, 2004}: Force initialization of CIL at the start of Cabs2cil.
\item {\bf June 9, 2004}: Added support for GCC \t{\_\_attribute\_used\_\_}.
\item {\bf April 7, 2004}: {\bf Released version 1.2.5}
\item {\bf April 7, 2004}: Allow running ./configure CC=cl to set the MSVC compiler as the default. The MSVC driver will now select the default name of the .exe file like the CL compiler.
\item {\bf April 7, 2004}: Fixed a bug in the driver. The temporary files were being deleted by the Perl script before the CL compiler got to them.
\item {\bf April 7, 2004}: Added the - form of arguments to the MSVC driver.
\item {\bf April 7, 2004}: Added a few more GCC-specific string escapes, (, [, \{, \%, E.
\item {\bf April 7, 2004}: Fixed bug with continuation lines in MSVC.
\item {\bf April 6, 2004}: Fixed embarrassing bug in the parser: the precedence of casts and unary operators was switched.
\item {\bf April 5, 2004}: Fixed a bug involving statements mixed between declarations containing initializers. Now we make sure that the initializers are run in the proper order with respect to the statements.
\item {\bf April 5, 2004}: Fixed a bug in the merger. The merger was keeping separate alpha renaming tables (namespaces) for variables and types. This means that it might end up with a type and a variable named the same way, if they come from different files, which breaks an important CIL invariant.
\item {\bf March 11, 2004}: Fixed a bug in the Cil.copyFunction function. The new local variables were not getting fresh IDs.
\item {\bf March 5, 2004}: Fixed a bug in the handling of static function prototypes in a block scope. They used to be renamed. Now we just consider them global.
\item {\bf February 20, 2004}: {\bf Released version 1.2.4}
\item {\bf February 15, 2004}: Changed the parser to allow extra semicolons after field declarations.
\item {\bf February 14, 2004}: Changed the Errormsg functions error, unimp, and bug to not raise an exception. Instead they just set Errormsg.hadErrors.
\item {\bf February 13, 2004}: Change the parsing of attributes to recognize enumeration constants.
\item {\bf February 10, 2004}: In some versions of \t{gcc} \t{\_\_thread} is an identifier and in others it is a keyword. Added code during configuration to detect which is the case.
\item {\bf January 7, 2004}: {\bf Released version 1.2.3}
\item {\bf January 7, 2004}: Changed the alpha renamer to be less conservative. It will remember all versions of a name that were seen and will only create a new name if we have not seen one.
\item {\bf December 30, 2003}: Extended the \t{cilly} command to better understand linker command options such as \t{-lfoo}.
\item {\bf December 5, 2003}: Added markup commands to the pretty-printer module. Also, changed the ``@<'' left-flush command into ``@\^''.
\item {\bf December 4, 2003}: Wide string literals are now handled directly by Cil (rather than being exploded into arrays). This is apparently handy for Microsoft Device Driver APIs that use intrinsic functions that require literal constant wide-string arguments.
\item {\bf December 3, 2003}: Added support for structured exception handling extensions for the Microsoft compilers.
\item {\bf December 1, 2003}: Fixed a Makefile bug in the generation of the Cil library (e.g., \t{cil.cma}) that was causing it to be unusable. Thanks to KEvin Millikin for pointing out this bug. \item {\bf November 26, 2003}: Added support for linkage specifications (extern ``C''). \item {\bf November 26, 2003}: Added the ocamlutil directory to contain some utilities shared with other projects. \item {\bf November 25, 2003}: {\bf Released version 1.2.2} \item {\bf November 24, 2003}: Fixed a bug that allowed a static local to conflict with a global with the same name that is declared later in the file. \item {\bf November 24, 2003}: Removed the \t{-{}-keep} option of the \t{cilly} driver and replaced it with \t{-{}-save-temps}. \item {\bf November 24, 2003}: Added printing of what CIL features are being run. \item {\bf November 24, 2003}: Fixed a bug that resulted in attributes being dropped for integer types. \item {\bf November 11, 2003}: Fixed a bug in the visitor for enumeration definitions. \item {\bf October 24, 2003}: Fixed a problem in the configuration script. It was not recognizing the Ocaml version number for beta versions. \item {\bf October 15, 2003}: Fixed a problem in version 1.2.1 that was preventing compilation on OCaml 3.04. \item {\bf September 17, 2003: Released version 1.2.1.} \item {\bf September 7, 2003}: Redesigned the interface for choosing \texttt{\#line} directive printing styles. Cil.printLn and Cil.printLnComment have been merged into Cil.lineDirectiveStyle. \item {\bf August 8, 2003}: Do not silently pad out functions calls with arguments to match the prototype. \item {\bf August 1, 2003}: A variety of fixes suggested by Steve Chamberlain: initializers for externs, prohibit float literals in enum, initializers for unsized arrays were not working always, an overflow problem in Ocaml, changed the processing of attributes before struct specifiers \item {\bf July 14, 2003}: Add basic support for GCC's "\_\_thread" storage qualifier. If given, it will appear as a "thread" attribute at the top of the type of the declared object. Treatment is very similar to "\_\_declspec(...)" in MSVC \item {\bf July 8, 2003}: Fixed some of the \_\_alignof computations. Fixed bug in the designated initializers for arrays (Array.get error). \item {\bf July 8, 2003}: Fixed infinite loop bug (Stack Overflow) in the visitor for \_\_alignof. \item {\bf July 8, 2003}: Fixed bug in the conversion to CIL. A function or array argument of the GCC \_\_typeof() was being converted to pointer type. Instead, it should be left alone, just like for sizeof. \item {\bf July 7, 2003}: New Escape module provides utility functions for escaping characters and strings in accordance with C lexical rules. \item {\bf July 2, 2003}: Relax CIL's rules for when two enumeration types are considered compatible. Previously CIL considered two enums to be compatible if they were the same enum. Now we follow the C99 standard. \item {\bf June 28, 2003}: In the Formatparse module, Eric Haugh found and fixed a bug in the handling of lvalues of the form ``lv->field.more''. \item {\bf June 28, 2003}: Extended the handling of gcc command lines arguments in the Perl scripts. \item {\bf June 23, 2003}: In Rmtmps module, simplified the API for customizing the root set. Clients may supply a predicate that returns true for each root global. Modifying various ``\texttt{referenced}'' fields directly is no longer supported. \item {\bf June 17, 2003}: Reimplement internal utility routine \t{Cil.escape\_char}. Faster and better. 
\item {\bf June 14, 2003}: Implemented support for \t{\_\_attribute\_\_s} appearing between "struct" and the struct tag name (also for unions and enums), since gcc supports this as documented in section 4.30 of the gcc (2.95.3) manual.
\item {\bf May 30, 2003}: Released the regression tests.
\item {\bf May 28, 2003}: {\bf Released version 1.1.2}
\item {\bf May 26, 2003}: Added the \t{simplify} module that compiles CIL expressions into simpler expressions, similar to those that appear in a 3-address intermediate language.
\item {\bf May 26, 2003}: Various fixes and improvements to the pointer analysis modules.
\item {\bf May 26, 2003}: Added optional consistency checking for transformations.
\item {\bf May 25, 2003}: Added configuration support for big endian machines. Now \cilvalref{little\_endian} can be used to test whether the machine is little endian or not.
\item {\bf May 22, 2003}: Fixed a bug in the handling of inline functions. The CIL merger used to turn these functions into ``static'', which is incorrect.
\item {\bf May 22, 2003}: Expanded the CIL consistency checker to detect undesired sharing relationships between data structures.
\item {\bf May 22, 2003}: Fixed bug in the \t{oneret} CIL module: it was mishandling certain labeled return statements.
\item {\bf May 5, 2003}: {\bf Released version 1.0.11}
\item {\bf May 5, 2003}: OS X (powerpc/darwin) support for CIL. Special thanks to Jeff Foster, Andy Begel and Tim Leek.
\item {\bf April 30, 2003}: Better description of how to use CIL for your analysis.
\item {\bf April 28, 2003}: Fixed a bug with \texttt{-{}-dooneRet} and \texttt{-{}-doheapify}. Thanks, Manos Renieris.
\item {\bf April 16, 2003}: Reworked management of temporary/intermediate output files in Perl driver scripts. Default behavior is now to remove all such files. To keep intermediate files, use one of the following existing flags: \begin{itemize} \item \texttt{-{}-keepmerged} for the single-file merge of all sources \item \texttt{-{}-keep=<\textit{dir}>} for various other CIL and CCured output files \item \texttt{-{}-save-temps} for various gcc intermediate files; MSVC has no equivalent option \end{itemize} As part of this change, some intermediate files have changed their names slightly so that new suffixes are always preceded by a period. For example, CCured output that used to appear in ``\texttt{foocured.c}'' now appears in ``\texttt{foo.cured.c}''.
\item {\bf April 7, 2003}: Changed the representation of the \cilvalref{GVar} global constructor. Now it is possible to update the initializer without reconstructing the global (which in turn would require reconstructing the list of globals that make up a program). We did this because it is often tempting to use \cilvalref{visitCilFileSameGlobals} and the \cilvalref{GVar} was the only global that could not be updated in place.
\item {\bf April 6, 2003}: Reimplemented parts of the cilly.pl script to make it more robust in the presence of complex compiler arguments.
\item {\bf March 10, 2003}: {\bf Released version 1.0.9}
\item {\bf March 10, 2003}: Unified and documented a large number of CIL Library Modules: oneret, simplemem, makecfg, heapify, stackguard, partial. Also documented the main client interface for the pointer analysis.
\item {\bf February 18, 2003}: Fixed a bug in logwrites that was causing it to produce invalid C code on writes to bitfields. Thanks, David Park.
\item {\bf February 15, 2003}: {\bf Released version 1.0.8}
\item {\bf February 15, 2003}: PDF versions of the manual and API are available for those who would like to print them out.
\item {\bf February 14, 2003}: CIL now comes bundled with alias analyses.
\item {\bf February 11, 2003}: Added support for adding/removing options from \t{./configure}.
\item {\bf February 3, 2003}: {\bf Released version 1.0.7}
\item {\bf February 1, 2003}: Some bug fixes in the handling of variable argument functions in new versions of \t{gcc} and \t{glibc}.
\item {\bf January 29, 2003}: Added the logical AND and OR operators. Expanded the translation to CIL to handle more complicated initializers (including those that contain logical operators).
\item {\bf January 28, 2003}: {\bf Released version 1.0.6}
\item {\bf January 28, 2003}: Added support for the new handling of variable-argument functions in new versions of \t{glibc}.
\item {\bf January 19, 2003}: Added support for declarations in interpreted constructors. Relaxed the semantics of the patterns for variables.
\item {\bf January 17, 2003}: Added built-in prototypes for the gcc built-in functions. Changed the \t{pGlobal} method in the printers to print the carriage return as well.
\item {\bf January 9, 2003}: Reworked the lexer and parser's strategy for tracking source file names and line numbers to more closely match typical native compiler behavior. The visible CIL interface is unchanged.
\item {\bf January 9, 2003}: Changed the interface to the alpha convertor. Now you can pass a list where it will record undo information that you can use to revert the changes that it makes to the scope tables.
\item {\bf January 6, 2003}: {\bf Released version 1.0.5}
\item {\bf January 4, 2003}: Changed the interface for the Formatcil module. Now the placeholders in the pattern have names. Also expanded the documentation of the Formatcil module.
\item {\bf January 3, 2003}: Extended the \t{rmtmps} module to also remove unused labels that are generated in the conversion to CIL. This reduces the number of warnings that you get from \t{cgcc} afterwards.
\item {\bf December 17, 2002}: Fixed a few bugs in CIL related to the representation of string literals. The standard says that a string literal is an array. In CIL, a string literal has type pointer to character. This is OK, except as an argument of sizeof. To support this exception, we have added to CIL the expression constructor SizeOfStr. This allowed us to fix bugs with computing \t{sizeof("foo bar")} and \t{sizeof((char*)"foo bar")} (the former is 8 and the latter is 4).
\item {\bf December 8, 2002}: Fixed a few bugs in the lexer and parser relating to hex and octal escapes in string literals. Also fixed the dependencies between the lexer and parser.
\item {\bf December 5, 2002}: Fixed visitor bugs that were causing some attributes not to be visited and some queued instructions to be dropped.
\item {\bf December 3, 2002}: Added a transformation to catch stack overflows. Fixed the heapify transformation.
\item {\bf October 14, 2002}: CIL is now available under the BSD license (see the License section or the file LICENSE). {\bf Released version 1.0.4}
\item {\bf October 9, 2002}: More FreeBSD configuration changes, support for the GCC-isms {\tt \_\_signed} and {\tt \_\_volatile}. Thanks to Axel Simon for pointing out these problems. {\bf Released version 1.0.3}
\item {\bf October 8, 2002}: FreeBSD configuration and porting fixes.
Thanks to Axel Simon for pointing out these problems.
\item {\bf September 10, 2002}: Fixed bug in conversion to CIL. Now we drop all ``const'' qualifiers from the types of locals, even from the fields of local structures or elements of arrays.
\item {\bf September 7, 2002}: Extended the visitor interface to distinguish visiting offsets inside lvalues from offsets inside initializer lists.
\item {\bf September 7, 2002}: {\bf Released version 1.0.1}
\item {\bf September 6, 2002}: Extended the patcher with the \t{ateof} flag.
\item {\bf September 4, 2002}: Fixed bug in the elaboration to CIL. In some cases constant folding of \t{||} and \t{\&\&} was computed incorrectly.
\item {\bf September 3, 2002}: Fixed the merger documentation.
\item {\bf August 29, 2002}: {\bf Released version 1.0.0.}
\item {\bf August 29, 2002}: Started numbering versions with a major number, a minor number and a revision. Released version 1.0.0.
\item {\bf August 25, 2002}: Fixed the implementation of the unique identifiers for global variables and composites. Now those identifiers are globally unique.
\item {\bf August 24, 2002}: Added to the machine-dependent configuration the \t{sizeof(void)}. It is 1 on gcc and 0 on MSVC. Extended the implementation of \t{Cil.bitsSizeOf} to handle this (it was previously returning an error when trying to compute the size of \t{void}).
\item {\bf August 24, 2002}: Changed the representation of structures and unions to distinguish between undefined structures and those that are defined to be empty (allowed on gcc). The sizeof operator is undefined for the former and returns 0 for the latter.
\item {\bf August 22, 2002}: Applied a patch from Richard H. Y. to support FreeBSD installations. Thanks, Richard!
\item {\bf August 12, 2002}: Fixed a bug in the translation of wide-character strings. Now this translation matches that of the underlying compiler. Changed the implementation of the compiler dependencies.
\item {\bf May 25, 2002}: Added interpreted constructors and destructors.
\item {\bf May 17, 2002}: Changed the representation of functions to move the ``inline'' information to the varinfo. This way we can print the ``inline'' even in declarations, which is what gcc does.
\item {\bf May 15, 2002}: Changed the visitor for initializers to make two tail-recursive passes (the second is a \t{List.rev} and only done if one of the initializers changes). This prevents \t{Stack\_Overflow} for large initializers. Also improved the processing of initializers when converting to CIL.
\item {\bf May 15, 2002}: Changed the front-end to allow the use of \t{MSVC} mode even on machines that do not have MSVC. The machine-dependent parameters for GCC will be used in that case.
\item {\bf May 11, 2002}: Changed the representation of formals in function types. Now the function type is purely functional.
\item {\bf May 4, 2002}: Added the function \cilvalref{visitCilFileSameGlobals} and changed \cilvalref{visitCilFile} to be tail recursive. This prevents stack overflow on huge files.
\item {\bf February 28, 2002}: Changed the significance of the \t{CompoundInit} in \ciltyperef{init} to allow for missing initializers at the end of an array initializer. Added the API function \cilvalref{foldLeftCompoundAll}.
\end{itemize}
\end{document}
% LocalWords: CIL intraprocedural datatype CIL's html Dataflow ocamldoc cilly
% LocalWords: Dominators tbergen bitfield
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} 
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{SIAC\_Filtering} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname 
PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \section{Tutorial of SIAC Filtering}\label{tutorial-of-siac-filtering} This tutorial by Xiaozhou Li is licensed under a Creative Commons Attribution 4.0 International License.\\ All code examples are also licensed under the \href{http://opensource.org/licenses/MIT}{MIT license}. \subsection{What is SIAC Filtering}\label{what-is-siac-filtering} A standard definition of SIAC Filtering is a B-spline based convolution processing technique. The name SIAC means Smoothness-Increasing Accuracy-Conserving. The formulation can be written as \[ u_h^{\star}(x, T) = (K_h\star u_h)(x, T) = \int_{-\infty}^{\infty}K_h(x - \xi)u_h(\xi, T) d\xi, \] where \(K_h\) is the so-called SIAC filter. \subsection{A Review of B-spline}\label{a-review-of-b-spline} First, we recall the definition of B-splines given by de Boor \cite{Boor:2001}. \textbf{B-spline} Let \(\mathbf{t}:= (t_j)\) be a \textbf{nondecreasing sequence} of real numbers that create a so-called knot sequence. The \(j\)th B-spline of order \(\ell\) for the knot sequence \(\mathbf{t}\) is denoted by \(B_{j,\ell,\mathbf{t}}\) and is defined, for \(\ell=1\), by the rule \begin{equation} B_{j,1,\mathbf{t}}(x) = \left\{\begin{array}{ll} 1, & t_j \leq x < t_{j+1}; \\ 0, & \text{otherwise}. \end{array} \right. \end{equation} In particular, \(t_j = t_{j+1}\) leads to \(B_{j,1,\mathbf{t}} = 0.\) For \(\ell > 1\), \begin{align*} B_{j,\ell,\mathbf{t}}(x) = \omega_{j,k,\mathbf{t}}B_{j,\ell-1,\mathbf{t}} + (1 - \omega_{j+1,\ell,\mathbf{t}})B_{j+1,\ell-1,\mathbf{t}}, \end{align*} with \[ \omega_{j,\ell,\mathbf{t}}(x) = \frac{x-t_j}{t_{j+\ell-1}-t_{j}}.\] \begin{itemize} \tightlist \item The knot sequence \(\mathbf{t}\) also represents the so-called breaks of the B-spline. \item The B-spline in the region \([t_i, t_{i+1}),\, i = 0,\ldots,\ell-1\) is a polynomial of degree \(\ell-1\), but in the entire support \([t_0,t_{\ell}]\), the B-spline is a piecewise polynomial. 
\item When the knots \((t_j)\) are sampled in a symmetric and equidistant fashion, the B-spline is called a central B-spline.
\end{itemize}

\textbf{Central B-spline} A central B-spline of order \(\ell\) has a knot sequence that is uniformly spaced and symmetrically distributed:
\[\mathbf{t}=-\frac{\ell}{2},-\frac{\ell-2}{2},\cdots,\frac{\ell-2}{2},\frac{\ell}{2}.\]
For convenience, we denote by \(\psi_{\mathbf{t}}^{(\ell)}(x)\) the \(0^{th}\) B-spline of order \(\ell\) for the knot sequence \(\mathbf{t}\),
\[\psi_{\mathbf{t}}^{(\ell)}(x) = B_{0,\ell,\mathbf{t}}(x).\]

\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}1}]:} \PY{c+c1}{\PYZsh{} environment setting, before any codes} \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{import} \PY{n+nn}{numpy}\PY{n+nn}{.}\PY{n+nn}{polynomial}\PY{n+nn}{.}\PY{n+nn}{legendre} \PY{k}{as} \PY{n+nn}{npleg} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline \PY{o}{\PYZpc{}}\PY{k}{config} InlineBackend.figure\PYZus{}format = \PYZsq{}retina\PYZsq{} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \PY{k+kn}{from} \PY{n+nn}{ipywidgets} \PY{k}{import} \PY{n}{interact}\PY{p}{,} \PY{n}{interactive}\PY{p}{,} \PY{n}{fixed}\PY{p}{,} \PY{n}{interact\PYZus{}manual} \PY{k+kn}{import} \PY{n+nn}{ipywidgets} \PY{k}{as} \PY{n+nn}{widgets} \PY{k+kn}{from} \PY{n+nn}{IPython}\PY{n+nn}{.}\PY{n+nn}{display} \PY{k}{import} \PY{n}{clear\PYZus{}output}\PY{p}{,} \PY{n}{display} \end{Verbatim}

\subsubsection{Implementation of B-splines}\label{implementing-of-b-splines}

Here, we adopt de Boor's algorithm to implement the evaluation of a B-spline

\begin{Shaded} \begin{Highlighting}[] \KeywordTok{def}\NormalTok{ bspline(x, order, T)} \end{Highlighting} \end{Shaded}

where
\begin{itemize}
\tightlist
\item \(x\): the evaluation point
\item \(order\): the order of the B-spline (polynomial degree \(= order-1\))
\item \(T\): \(T[0], \ldots, T[order]\), the knot sequence of the B-spline
\end{itemize}

Note:
\begin{itemize}
\tightlist
\item One has to be careful when the evaluation point \(x\) is located at the end nodes of the B-spline; this is the reason for introducing the \(tiny\) variable.
\end{itemize}
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}2}]:} \PY{k}{def} \PY{n+nf}{bspline}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{)}\PY{p}{:} \PY{n}{tiny} \PY{o}{=} \PY{l+m+mf}{1.e\PYZhy{}13} \PY{k}{if} \PY{p}{(}\PY{n}{x} \PY{o}{\PYZlt{}} \PY{n}{T}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{o}{\PYZhy{}}\PY{n}{tiny} \PY{o+ow}{or} \PY{n}{x} \PY{o}{\PYZgt{}} \PY{n}{T}\PY{p}{[}\PY{n}{order}\PY{p}{]}\PY{o}{+}\PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{l+m+mf}{0.} \PY{k}{else}\PY{p}{:} \PY{k}{if} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{x} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{o}{\PYZlt{}} \PY{n}{tiny}\PY{p}{:} \PY{n}{x} \PY{o}{=} \PY{n}{T}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{+} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{tiny} \PY{k}{if} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{x} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{order}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{o}{\PYZlt{}} \PY{n}{tiny}\PY{p}{:} \PY{n}{x} \PY{o}{=} \PY{n}{T}\PY{p}{[}\PY{n}{order}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{tiny} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{order}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{p}{(}\PY{n}{x} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{\PYZhy{}}\PY{n}{tiny} \PY{o+ow}{and} \PY{n}{x} \PY{o}{\PYZlt{}} \PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{\PYZhy{}}\PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{n}{left} \PY{o}{=} \PY{n}{i} \PY{k}{break} \PY{n}{B1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{order}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{B1}\PY{p}{[}\PY{n}{left}\PY{p}{]} \PY{o}{=} \PY{l+m+mf}{1.} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{n}{order}\PY{p}{)}\PY{p}{:} \PY{n}{B2} \PY{o}{=} \PY{n}{B1} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{n}{order}\PY{o}{\PYZhy{}}\PY{n}{i}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{p}{(}\PY{n}{B2}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o}{\PYZlt{}} \PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{n}{termL} \PY{o}{=} \PY{l+m+mf}{0.} \PY{k}{else}\PY{p}{:} \PY{n}{termL} \PY{o}{=} \PY{p}{(}\PY{n}{x} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{n}{j}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{n}{B2}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{k}{if} \PY{p}{(}\PY{n}{B2}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZlt{}} \PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{n}{termR} \PY{o}{=} \PY{l+m+mf}{0.} \PY{k}{else}\PY{p}{:} \PY{n}{termR} \PY{o}{=} \PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{x}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{n}{B2}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{B1}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{n}{termL} \PY{o}{+} \PY{n}{termR} \PY{k}{return} \PY{n}{B1}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}3}]:} \PY{k}{def} \PY{n+nf}{plot\PYZus{}bspline}\PY{p}{(}\PY{n}{order}\PY{p}{,}\PY{n}{T}\PY{p}{)}\PY{p}{:} \PY{n}{samples} \PY{o}{=} \PY{l+m+mi}{21} \PY{n}{tiny} \PY{o}{=} \PY{l+m+mf}{1.e\PYZhy{}13} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} 
\PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{o}{\PYZgt{}} \PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{samples}\PY{p}{)} \PY{n}{y} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{samples}\PY{p}{)} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{samples}\PY{p}{)}\PY{p}{:} \PY{n}{y}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{n}{bspline}\PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{y}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}.r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{order} \PY{o}{=} \PY{l+m+mi}{4} \PY{n}{T1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)} \PY{n}{T2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)} \PY{n}{plot\PYZus{}bspline}\PY{p}{(}\PY{n}{order}\PY{p}{,}\PY{n}{T1}\PY{p}{)} \PY{n}{plot\PYZus{}bspline}\PY{p}{(}\PY{n}{order}\PY{p}{,}\PY{n}{T2}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim}

\begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_7_0.png} \end{center} { \hspace*{\fill} \\}

\subsection{Construction of the Simplest SIAC Filter}\label{construction-of-the-simplest-siac-filter}

In the early literature, such as \cite{Bramble:1977} and \cite{Cockburn:2003}, the classic filter is defined as
\begin{equation} K^{(2k+1,k+1)}(x) = \sum\limits_{\gamma=0}^{2k}c^{(2k+1,k+1)}_{\gamma}\psi^{(k+1)}\left(x - x_\gamma\right), \end{equation}
where \(x_\gamma = \gamma - k\) and \(\psi^{(k+1)}\) is the \((k+1)\)th order \textbf{central B-spline}. The scaled filter used for filtering is given by
\[ K_h^{(2k+1,k+1)}(x) = \frac{1}{h}K^{(2k+1,k+1)}\left(\frac{x}{h}\right). \]
Here, we follow the extensions in \cite{Li:2015} for a more general framework:
\begin{equation} K^{(r+1,\ell)}(x) = \sum\limits_{\gamma=0}^{r}c^{(r+1,\ell)}_{\gamma}\psi^{(\ell)}\left(x - \eta x_\gamma\right), \end{equation}
where \(x_\gamma = \gamma - \frac{r}{2}\) and \(\psi^{(\ell)}\) is the \(\ell\)th order \textbf{central B-spline}. Furthermore, \(r+1\) is the number of B-splines and \(\eta\) is the compression factor.

\subsubsection{The nodes matrix}\label{the-nodes-matrix}

To design a general framework to construct the filter, we introduce the concept of a nodes matrix.

\textbf{nodes matrix:} A nodes matrix, \(T\), is an \((r+1) \times (\ell+1)\) matrix such that the \(\gamma\)-th row, \(T[\gamma,:]\), of the matrix \(T\) is a nodes sequence with \(\ell+1\) elements that are used to create the B-spline \(\psi_{T[\gamma,:]}^{(\ell)}(x)\). The number of rows \(r+1\) is specified based on the number of B-splines used to construct the filter.
As we mentioned before, the \(\ell\)th order central B-spline has the nodes sequence
\[\left[-\frac{\ell}{2},-\frac{\ell-2}{2},\cdots,\frac{\ell-2}{2},\frac{\ell}{2}\right],\]
which leads to
\[T[\gamma,:] = \eta x_\gamma + \left[-\frac{\ell}{2},-\frac{\ell-2}{2},\cdots,\frac{\ell-2}{2},\frac{\ell}{2}\right],\]
or
\[T[j, i] = i - \frac{\ell}{2} + \eta \left(j - \frac{r}{2}\right).\]

\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}4}]:} \PY{k}{def} \PY{n+nf}{generate\PYZus{}nodes\PYZus{}T}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{compress}\PY{p}{)}\PY{p}{:} \PY{n}{T} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{p}{[}\PY{n}{num}\PY{p}{,}\PY{n}{order}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{num}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{order}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{i} \PY{o}{\PYZhy{}} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{order} \PY{o}{+} \PY{n}{compress}\PY{o}{*}\PY{p}{(}\PY{n}{j} \PY{o}{\PYZhy{}} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{p}{(}\PY{n}{num}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{k}{return} \PY{n}{T} \PY{c+c1}{\PYZsh{}\PYZsh{}k = 1} \PY{c+c1}{\PYZsh{}\PYZsh{}print (generate\PYZus{}nodes\PYZus{}T(2*k+1, k+1, 1))} \end{Verbatim}

\subsubsection{Coefficient}\label{coefficient}

After defining the B-splines used to construct the filter, the only thing that remains is to determine the coefficients, \(\left\{c^{(r+1,\ell)}_\gamma\right\}_{\gamma=0}^r\). The coefficients are determined by imposing the property that the filter reproduces polynomials by convolution up to degree \(r\),
\begin{equation} K^{(r+1,\ell)} \star p = p, \quad p = 1, x, ..., x^{r}. \end{equation}
Using the monomials as in the above equation, we can obtain the following linear system for the filter coefficients:
\begin{equation} \sum\limits_{\gamma=0}^{r} c_\gamma^{(r+1,\ell)}\int_{-\infty}^\infty \psi^{(\ell)}(\xi - \eta x_\gamma)(x - \xi)^m d\xi = x^m, \,\, m = 0, 1,\ldots,r. \end{equation}
In order to calculate the integrals exactly, we use Gaussian quadrature with \(\lceil\frac{\ell+m+1}{2}\rceil\) quadrature points. As an example for \(k=1\) (\(r = 2k, \ell=k+1\)), we have
\begin{equation} \label{eq-Matrix} \left[ \begin{array}{ccc} 1 & 1 & 1\\ x-1& x & x+1 \\ x^2+2x+\frac{7}{6} & x^2 + \frac{1}{6} & x^2-2x+\frac{7}{6} \end{array} \right] \left[ \begin{array}{c} c_0 \\ c_1 \\ c_2 \\ \end{array} \right] = \left[ \begin{array}{c} 1 \\ x \\ x^2 \\ \end{array} \right]. \end{equation}
Since this linear system holds for all \(x\), we can simply set \(x=0\) and obtain the coefficients \([c_0, c_1, c_2]^{T} = [ -\frac{1}{12}, \frac{7}{6}, -\frac{1}{12}]^T\).
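As a quick check of these values, one can solve the \(3\times 3\) system above at \(x=0\) directly with NumPy; a minimal sketch, independent of the general routine \texttt{generate\_coeff} implemented in the next cell:

\begin{verbatim}
import numpy as np

# Reproduction conditions for k = 1, evaluated at x = 0
# (rows: monomials 1, x, x^2; columns: the three B-spline coefficients).
A = np.array([[1.,     1.,    1.   ],
              [-1.,    0.,    1.   ],
              [7./6.,  1./6., 7./6.]])
b = np.array([1., 0., 0.])

c = np.linalg.solve(A, b)
print(c)   # approximately [-0.08333333  1.16666667 -0.08333333], i.e. [-1/12, 7/6, -1/12]
\end{verbatim}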
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}5}]:} \PY{k}{def} \PY{n+nf}{generate\PYZus{}coeff}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{)}\PY{p}{:} \PY{n}{Gpn} \PY{o}{=} \PY{n+nb}{int}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{ceil}\PY{p}{(}\PY{l+m+mf}{0.5}\PY{o}{*}\PY{p}{(}\PY{n}{num}\PY{o}{+}\PY{n}{order}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{n}{xg}\PY{p}{,} \PY{n}{wg} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)} \PY{n}{A} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{p}{[}\PY{n}{num}\PY{p}{,}\PY{n}{num}\PY{p}{]}\PY{p}{)} \PY{n}{b} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{p}{[}\PY{n}{num}\PY{p}{]}\PY{p}{)} \PY{n}{b}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{l+m+mf}{1.} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{num}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{num}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{l} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{order}\PY{p}{)}\PY{p}{:} \PY{n}{xm} \PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{l}\PY{p}{]} \PY{o}{+} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{l}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n}{xr} \PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{l}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{l}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{m} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)}\PY{p}{:} \PY{n}{A}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{n}{xr}\PY{o}{*}\PY{n}{wg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{*}\PY{n}{bspline}\PY{p}{(}\PY{n}{xm}\PY{o}{+}\PY{n}{xr}\PY{o}{*}\PY{n}{xg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{p}{,} \PYZbs{} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{p}{:}\PY{p}{]}\PY{p}{)}\PY{o}{*} \PYZbs{} \PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{xr}\PY{o}{*}\PY{n}{xg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{\PYZhy{}}\PY{n}{xm}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{n}{i} \PY{n}{c} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{solve}\PY{p}{(}\PY{n}{A}\PY{p}{,}\PY{n}{b}\PY{p}{)} \PY{k}{return} \PY{n}{c} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}6}]:} \PY{k}{def} \PY{n+nf}{filter}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{)}\PY{p}{:} \PY{n+nb}{sum} \PY{o}{=} \PY{l+m+mf}{0.} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{num}\PY{p}{)}\PY{p}{:} \PY{n+nb}{sum} \PY{o}{+}\PY{o}{=} \PY{n}{c}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{bspline}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{p}{:}\PY{p}{]}\PY{p}{)} \PY{k}{return} \PY{n+nb}{sum} \PY{k}{def} \PY{n+nf}{plot\PYZus{}filter}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{)}\PY{p}{:} \PY{n}{samples} \PY{o}{=} \PY{l+m+mi}{1001} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{n}{T}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{n}{T}\PY{p}{[}\PY{n}{num}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,}\PY{n}{order}\PY{p}{]}\PY{p}{,} \PY{n}{samples}\PY{p}{)} \PY{n}{y} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{samples}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{samples}\PY{p}{)}\PY{p}{:} \PY{n}{y}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} 
\PY{n+nb}{filter}\PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,}\PY{n}{y}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}k}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}7}]:} \PY{k}{def} \PY{n+nf}{original\PYZus{}filter}\PY{p}{(}\PY{n}{degree}\PY{p}{)}\PY{p}{:} \PY{n}{num} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{degree} \PY{o}{+} \PY{l+m+mi}{1} \PY{n}{order} \PY{o}{=} \PY{n}{degree} \PY{o}{+} \PY{l+m+mi}{1} \PY{n}{T} \PY{o}{=} \PY{n}{generate\PYZus{}nodes\PYZus{}T}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{c} \PY{o}{=} \PY{n}{generate\PYZus{}coeff}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{)} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Filter Coefficients: }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{c+c1}{\PYZsh{}plt.figure(figsize=(10,6))} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{num}\PY{p}{)}\PY{p}{:} \PY{n}{plot\PYZus{}bspline}\PY{p}{(}\PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{p}{:}\PY{p}{]}\PY{p}{)} \PY{n}{plot\PYZus{}filter}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}x\PYZdl{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}K(x)\PYZdl{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xticks}\PY{p}{(}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{12}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{yticks}\PY{p}{(}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{12}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Original Filter, \PYZdl{}k = }\PY{l+s+s1}{\PYZsq{}}\PY{o}{+}\PY{n+nb}{str}\PY{p}{(}\PY{n}{degree}\PY{p}{)}\PY{o}{+}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{18}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} to show on nbviewer online, comment out for running on a real server } \PY{n}{original\PYZus{}filter}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Filter Coefficients: [-0.08333333 1.16666667 -0.08333333] \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_14_1.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}8}]:} \PY{n}{w} \PY{o}{=} \PY{n}{interactive}\PY{p}{(}\PY{n}{original\PYZus{}filter}\PY{p}{,} \PY{n}{degree}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntSlider}\PY{p}{(}\PY{n+nb}{min}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,}\PY{n+nb}{max}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{display}\PY{p}{(}\PY{n}{w}\PY{p}{)} \end{Verbatim} \begin{verbatim} interactive(children=(IntSlider(value=1, description='degree', max=5, min=1), Output()), _dom_classes=('widget-interact',)) \end{verbatim} \subsection{Filtering}\label{filtering} \subsubsection{DG Solution}\label{dg-solution} In this tutorial, we do not include the 
DG code. Instead, we will read in the DG solution and then apply the filtering. In the default setting, the DG solution has the form
\begin{equation} u_h(x) = \sum\limits_{j=1}^N\sum\limits_{l = 0}^k u_{l,j}\phi_j^l(x), \end{equation}
where \(\left\{\phi_j^l\right\}_{l=0}^k\) are the standard Legendre polynomials on the element \(I_j = [x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}]\). The coefficients \(\left\{u_{l,j}\right\}\) are provided by the data file \(DGsolution\_k?\_N?.dat\), in column-major order (\(l\)-major). This data is computed for the linear equation
\begin{equation*} u_t + u_x = 0 \end{equation*}
on a uniform mesh of the domain \([0,1]\), with exact solution
\[u(x,T) = \sin(2\pi(x-T))\]
at \(T = 1\).

First of all, we define a function to read the DG data and return the coefficients \(u[l,j]\)

\begin{Shaded} \begin{Highlighting}[] \KeywordTok{def}\NormalTok{ Input_DG(EleNum, degree)} \end{Highlighting} \end{Shaded}

where
\begin{itemize}
\tightlist
\item \(EleNum\): the number of elements \(N\)
\item \(degree\): the polynomial degree \(k\) of the DG basis
\end{itemize}

\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}9}]:} \PY{k}{def} \PY{n+nf}{Input\PYZus{}DG}\PY{p}{(}\PY{n}{EleNum}\PY{p}{,} \PY{n}{degree}\PY{p}{)}\PY{p}{:} \PY{n}{u} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{loadtxt}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DGdata/DGsolution\PYZus{}k}\PY{l+s+s2}{\PYZdq{}}\PY{o}{+}\PY{n+nb}{str}\PY{p}{(}\PY{n}{degree}\PY{p}{)}\PY{o}{+}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZus{}N}\PY{l+s+s2}{\PYZdq{}}\PY{o}{+}\PY{n+nb}{str}\PY{p}{(}\PY{n}{EleNum}\PY{p}{)}\PY{o}{+}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{.dat}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{k}{return} \PY{n}{np}\PY{o}{.}\PY{n}{transpose}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{n}{u}\PY{p}{,}\PY{p}{[}\PY{n}{EleNum}\PY{p}{,}\PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{p}{)} \end{Verbatim}

Before we apply the filtering, let us have a look at the errors of the DG solutions.
To do so, we have to define the Legendre basis function \begin{Shaded} \begin{Highlighting}[] \KeywordTok{def}\NormalTok{ LegendreBasis(x,degree)} \end{Highlighting} \end{Shaded} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}10}]:} \PY{k}{def} \PY{n+nf}{LegendreBasis}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{degree}\PY{p}{)}\PY{p}{:} \PY{n}{coeff} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{coeff}\PY{p}{[}\PY{n}{degree}\PY{p}{]} \PY{o}{=} \PY{l+m+mf}{1.} \PY{k}{return} \PY{n}{npleg}\PY{o}{.}\PY{n}{legval}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{coeff}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}11}]:} \PY{k}{def} \PY{n+nf}{print\PYZus{}OrderTable}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{erri}\PY{p}{,} \PY{n}{err2}\PY{p}{)}\PY{p}{:} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ N Linf norm order L2 norm order}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{erri}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{n}{N} \PY{o}{=} \PY{n}{number\PYZus{}coarse}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{o}{*}\PY{n}{i} \PY{k}{if} \PY{p}{(}\PY{n}{i} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{)}\PY{p}{:} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+si}{\PYZpc{}3d}\PY{l+s+s2}{ }\PY{l+s+si}{\PYZpc{}7.2e}\PY{l+s+s2}{ \PYZhy{}\PYZhy{} }\PY{l+s+si}{\PYZpc{}7.2e}\PY{l+s+s2}{ \PYZhy{}\PYZhy{}}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PYZbs{} \PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+si}{\PYZpc{}3d}\PY{l+s+s2}{ }\PY{l+s+si}{\PYZpc{}7.2e}\PY{l+s+s2}{ }\PY{l+s+si}{\PYZpc{}4.2f}\PY{l+s+s2}{ }\PY{l+s+si}{\PYZpc{}7.2e}\PY{l+s+s2}{ }\PY{l+s+si}{\PYZpc{}4.2f}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PYZbs{} \PY{p}{(}\PY{n}{N}\PY{p}{,} \PYZbs{} \PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{/}\PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{,}\PYZbs{} \PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{/}\PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}12}]:} \PY{k}{def} \PY{n+nf}{DG\PYZus{}order}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{degree}\PY{p}{)}\PY{p}{:} \PY{n}{Gpn} \PY{o}{=} \PY{l+m+mi}{6} \PY{n}{xg}\PY{p}{,} \PY{n}{wg} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)} \PY{n}{err2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)} \PY{n}{erri} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)} \PY{n}{style} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{b.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{g\PYZhy{}.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{r\PYZhy{}\PYZhy{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{c\PYZhy{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} 
\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{:} \PY{n}{N} \PY{o}{=} \PY{n}{number\PYZus{}coarse}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{o}{*}\PY{p}{(}\PY{n}{i}\PY{p}{)} \PY{n}{u} \PY{o}{=} \PY{n}{Input\PYZus{}DG}\PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{degree}\PY{p}{)} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{dx} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{diff}\PY{p}{(}\PY{n}{x}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{} plotting} \PY{n}{plot\PYZus{}x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)} \PY{n}{plot\PYZus{}y} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{m} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)}\PY{p}{:} \PY{n}{uh} \PY{o}{=} \PY{l+m+mf}{0.} \PY{n}{xj} \PY{o}{=} \PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{+}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{+} \PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{xg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2} \PY{k}{for} \PY{n}{l} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{n}{uh} \PY{o}{+}\PY{o}{=} \PY{n}{u}\PY{p}{[}\PY{n}{l}\PY{p}{,}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{LegendreBasis}\PY{p}{(}\PY{n}{xg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{p}{,} \PY{n}{l}\PY{p}{)} \PY{n}{err} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{xj}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{uh}\PY{p}{)} \PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{wg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{*}\PY{n}{err}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n+nb}{max}\PY{p}{(}\PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{err}\PY{p}{)} \PY{n}{plot\PYZus{}x}\PY{p}{[}\PY{n}{j}\PY{o}{*}\PY{n}{Gpn}\PY{o}{+}\PY{n}{m}\PY{p}{]} \PY{o}{=} \PY{n}{xj} \PY{n}{plot\PYZus{}y}\PY{p}{[}\PY{n}{j}\PY{o}{*}\PY{n}{Gpn}\PY{o}{+}\PY{n}{m}\PY{p}{]} \PY{o}{=} \PY{n}{err} \PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{/}\PY{l+m+mf}{1.}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{plot\PYZus{}x}\PY{p}{,} \PY{n}{plot\PYZus{}y}\PY{p}{,} \PY{n}{style}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mf}{2.0}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{N = }\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n+nb}{str}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{)} \PY{n}{print\PYZus{}OrderTable}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{erri}\PY{p}{,} \PY{n}{err2}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{yscale}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{log}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{14}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{draw\PYZus{}frame}\PY{p}{(}\PY{k+kc}{False}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlim}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{c+c1}{\PYZsh{}plt.ylim(1.e\PYZhy{}6,1.e0)} 
\PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{18}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{|error|}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{18}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} to show on nbviewer online, comment out for running on a real server } \PY{n}{DG\PYZus{}order}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] N Linf norm order L2 norm order 20 3.67e-04 -- 1.07e-04 -- 40 4.62e-05 2.99 1.34e-05 3.00 80 5.78e-06 3.00 1.67e-06 3.00 160 7.23e-07 3.00 2.09e-07 3.00 \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_22_1.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}13}]:} \PY{n}{w} \PY{o}{=} \PY{n}{interactive}\PY{p}{(}\PY{n}{DG\PYZus{}order}\PY{p}{,} \PY{n}{number\PYZus{}coarse}\PY{o}{=}\PY{n}{fixed}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{)}\PY{p}{,} \PY{n}{degree}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntSlider}\PY{p}{(}\PY{n+nb}{min}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,}\PY{n+nb}{max}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{,}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{display}\PY{p}{(}\PY{n}{w}\PY{p}{)} \end{Verbatim} \begin{verbatim} interactive(children=(IntSlider(value=1, description='degree', max=4, min=1), Output()), _dom_classes=('widget-interact',)) \end{verbatim} \textbf{Note:} Actually, one can use numpy function \begin{Shaded} \begin{Highlighting}[] \NormalTok{ numpy.polynomial.legendre.legval(x, u[}\DecValTok{0}\NormalTok{:degree}\OperatorTok{+}\DecValTok{1}\NormalTok{,j])} \end{Highlighting} \end{Shaded} to evaluate \(u_h\) directly. In fact, as a post-processing technique, there is no need to know the basis functions information of the DG solution. One only needs to know the evaluation of \(u_h(x)\). Of course, in order to calculate the convolution properly, we still need the mesh information of the DG method. 
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}14}]:} \PY{k}{def} \PY{n+nf}{point\PYZus{}location}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{grid}\PY{p}{)}\PY{p}{:} \PY{n}{tiny} \PY{o}{=} \PY{l+m+mf}{1.e\PYZhy{}10} \PY{c+c1}{\PYZsh{} assume periodic condition} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mod}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{k}{for} \PY{n}{n} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{grid}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{p}{(}\PY{n}{grid}\PY{p}{[}\PY{n}{n}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{tiny} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{x} \PY{o+ow}{and} \PY{n}{x} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{grid}\PY{p}{[}\PY{n}{n}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}}\PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{n} \PY{k}{return} \PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{grid}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{l+m+mi}{2} \PY{k}{def} \PY{n+nf}{u\PYZus{}DG}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{element\PYZus{}index}\PY{p}{,} \PY{n}{u}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{npleg}\PY{o}{.}\PY{n}{legval}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{u}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{element\PYZus{}index}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}15}]:} \PY{c+c1}{\PYZsh{} to show on nbviewer online, comment out for running on a real server } \PY{n}{DG\PYZus{}order}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] N Linf norm order L2 norm order 20 3.67e-04 -- 1.07e-04 -- 40 4.62e-05 2.99 1.34e-05 3.00 80 5.78e-06 3.00 1.67e-06 3.00 160 7.23e-07 3.00 2.09e-07 3.00 \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_26_1.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}16}]:} \PY{n}{w} \PY{o}{=} \PY{n}{interactive}\PY{p}{(}\PY{n}{DG\PYZus{}order}\PY{p}{,} \PY{n}{number\PYZus{}coarse}\PY{o}{=}\PY{n}{fixed}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{)}\PY{p}{,} \PY{n}{degree}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntSlider}\PY{p}{(}\PY{n+nb}{min}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,}\PY{n+nb}{max}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{,}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)} \PY{n}{display}\PY{p}{(}\PY{n}{w}\PY{p}{)} \end{Verbatim} \begin{verbatim} interactive(children=(IntSlider(value=2, description='degree', max=4, min=1), Output()), _dom_classes=('widget-interact',)) \end{verbatim} \subsubsection{Evaluation of the Convolution}\label{evaluation-of-the-convolution} \[ u_h^{\star}(x, T) = (K_H\star u_h)(x, T) = \int_{-\infty}^{\infty}K_H(x - \xi)u_h(\xi, T) d\xi, \] The basic operation used in SIAC filtering is convolution of the DG solution against a B-spline based filter. Here, we explicitly point out the steps to efficient evaluation of the convolution operator. In the one-dimensional case, denote \(\{I_j\}_{j=1}^N\) be the mesh of the DG solution. * To calculate the integration, we first need to specify the support range. The support size of the filter is decided by its nodes matrix \(T\) that \(\text{K(x)} = \left[\min{T}, \max{T}\right]\). 
Therefore, to evaluate the filtered solution at the point \(x \in I_j\), we have
\begin{equation}
\begin{split}
u_h^\star(x) & = \frac{1}{H}\int_{-\infty}^\infty K^{(r,\ell)}\left(\frac{x - \xi}{H}\right)u_h(\xi) d\xi \\
& = \frac{1}{H}\int_{x - H\cdot\max\{T\}}^{x- H\cdot\min\{T\}}K^{(r,\ell)}\left(\frac{x-\xi}{H}\right)u_h(\xi) d\xi
\end{split}
\end{equation}
\begin{itemize}
\tightlist
\item The above integral is evaluated by Gauss quadrature with at least \(k+1\) quadrature points. However, both the DG solution and the filter are piecewise polynomials. Therefore, we have to divide the integration interval \(\left[x - H\cdot\max\{T\}, x- H\cdot\min\{T\}\right]\) into subintervals such that both the DG solution and the filter are polynomials on each subinterval.
\begin{itemize}
\tightlist
\item First, find the DG elements that are involved in the integration,
\[A(x) = \left\{i:\quad I_i \cap \left[x - H\cdot\max\{T\}, x- H\cdot\min\{T\}\right] \neq \varnothing \right\},\]
and write the integral as
\[u_h^\star(x) = \frac{1}{H}\sum\limits_{i \in A(x)} \int_{I_{i}}K^{(r,\ell)}\left(\frac{x-\xi}{H}\right)u_h(\xi) d\xi. \]
\item Then, divide each element \(I_i\) into subintervals, \(I_i = \bigcup\limits_{\alpha=1}^{n_{i}}I_{i}^\alpha\), according to the breaks of the filter, so that the filter is a polynomial on each subinterval \(I_{i}^\alpha\). This leads to
\begin{equation}
\begin{split}
& \int_{I_{i}}K^{(r,\ell)}\left(\frac{x-\xi}{H}\right)u_h(\xi) d\xi \\
= &\, \sum\limits_{\alpha=1}^{n_{i}}\int_{I_{i}^\alpha}K^{(r,\ell)}\left(\frac{x-\xi}{H}\right)u_h(\xi) d\xi.
\end{split}
\end{equation}
\item Finally, apply Gauss quadrature to compute the integral on each subinterval \(I_{i}^\alpha\).
\end{itemize}
\end{itemize}
\textbf{Note:}
\begin{itemize}
\tightlist
\item For uniform meshes, we usually choose the uniform element size \(h\) as the filter scaling, \(H = h\). Therefore, we only need to divide each element \(I_{i}\) into two subintervals.
\item For nonuniform meshes, the scaling and the number of subintervals depend on the mesh. To speed up the filtering process, it is sometimes possible to use inexact integration; however, the first step (locating the involved elements) is still necessary.
\item In multi-dimensions, the filter is a tensor product of the one-dimensional filters. The implementation of the multi-dimensional SIAC filter over rectangular meshes is the same. For triangular meshes, the principles are the same, and one can find the details in another \href{}{notebook}.
\end{itemize}
\subsubsection{Step by Step Python Implementation}\label{step-by-step-python-implementation}
In the previous section, we discussed the algorithm. Now, let us implement the filtering step by step.
\begin{itemize}
\item \textbf{Filter Nodes}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{def}\NormalTok{ generate_filter_nodes(T):}
\OperatorTok{---}
\ControlFlowTok{return}\NormalTok{ filter_nodes}
\end{Highlighting}
\end{Shaded}
The array \(filter\_nodes\) stores all the nodes of the filter, which is the union of the nodes of all the B-splines used.
\begin{itemize}
\tightlist
\item Be aware that the nodes are floating-point numbers, so it is not safe to use np.unique directly.
\end{itemize} \end{itemize} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}17}]:} \PY{k}{def} \PY{n+nf}{generate\PYZus{}filter\PYZus{}nodes}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{p}{:} \PY{n}{tiny} \PY{o}{=} \PY{l+m+mf}{1.e\PYZhy{}13} \PY{n}{filter\PYZus{}nodes} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{min}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{T}\PY{p}{,}\PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{T}\PY{p}{,}\PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{m} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{filter\PYZus{}nodes}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{filter\PYZus{}nodes}\PY{p}{[}\PY{n}{m}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{o}{\PYZlt{}} \PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{k}{break} \PY{k}{if} \PY{p}{(}\PY{n}{filter\PYZus{}nodes}\PY{p}{[}\PY{n}{m}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{i}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{n}{tiny}\PY{p}{)}\PY{p}{:} \PY{n}{filter\PYZus{}nodes} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{insert}\PY{p}{(}\PY{n}{filter\PYZus{}nodes}\PY{p}{,} \PY{n}{m}\PY{p}{,} \PY{n}{T}\PY{p}{[}\PY{n}{j}\PY{p}{,}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{k}{break} \PY{k}{return} \PY{n}{filter\PYZus{}nodes} \end{Verbatim} \begin{itemize} \item \textbf{Involved DG Elements} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{def}\NormalTok{ generate_index_Ax(x, grid, T, H):} \OperatorTok{---} \ControlFlowTok{return}\NormalTok{ Ax} \end{Highlighting} \end{Shaded} This function will return the index of the elements which are involved for filtering point \(x\), \[A(x) = \left\{i:\quad I_i \cap \left[x - H\cdot\max\{T\}, x- H\cdot\min\{T\}\right] \neq \varnothing \right\}.\] \begin{itemize} \tightlist \item If \(x\) is near the boundary, the above support interval may not totally belong to the DG domain. In this situation, the index returned will processed by periodic assumption. 
\end{itemize} \end{itemize} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}18}]:} \PY{k}{def} \PY{n+nf}{generate\PYZus{}index\PYZus{}Ax}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{grid}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{H}\PY{p}{)}\PY{p}{:} \PY{n}{left\PYZus{}index} \PY{o}{=} \PY{n}{point\PYZus{}location}\PY{p}{(}\PY{n}{x} \PY{o}{\PYZhy{}} \PY{n}{H}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{p}{,} \PY{n}{grid}\PY{p}{)} \PY{n}{right\PYZus{}index} \PY{o}{=} \PY{n}{point\PYZus{}location}\PY{p}{(}\PY{n}{x} \PY{o}{\PYZhy{}} \PY{n}{H}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{min}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{p}{,} \PY{n}{grid}\PY{p}{)} \PY{k}{if} \PY{p}{(}\PY{n}{left\PYZus{}index} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{right\PYZus{}index}\PY{p}{)}\PY{p}{:} \PY{n}{Ax} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n}{left\PYZus{}index}\PY{p}{,} \PY{n}{right\PYZus{}index}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{c+c1}{\PYZsh{}near boundary} \PY{n}{Ax} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{concatenate}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n}{left\PYZus{}index}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{grid}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n}{right\PYZus{}index}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{]}\PY{p}{)} \PY{k}{return} \PY{n}{Ax} \end{Verbatim} \begin{itemize} \item \textbf{Subintervals for Quadrature} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{def}\NormalTok{ generate_subintervals_local(i, grid, filter_nodes):} \OperatorTok{---} \ControlFlowTok{return}\NormalTok{ interfaces} \end{Highlighting} \end{Shaded} With the given DG element index \(i\), this function will divide element \(I_i\) into subintervals according to the \(filter_nodes\). It returns the interfaces of all the subintervals which are also the subintervals for using Gauss quadrature to compute the integration. 
\end{itemize} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}19}]:} \PY{k}{def} \PY{n+nf}{generate\PYZus{}subintervals\PYZus{}local}\PY{p}{(}\PY{n}{i}\PY{p}{,} \PY{n}{grid}\PY{p}{,} \PY{n}{filter\PYZus{}nodes}\PY{p}{)}\PY{p}{:} \PY{n}{interfaces} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{filter\PYZus{}nodes}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{p}{(}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{\PYZlt{}} \PY{n}{filter\PYZus{}nodes}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o+ow}{and} \PY{n}{filter\PYZus{}nodes}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o}{\PYZlt{}} \PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{p}{:} \PY{n}{interfaces} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{insert}\PY{p}{(}\PY{n}{interfaces}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{filter\PYZus{}nodes}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{)} \PY{k}{return} \PY{n}{interfaces} \end{Verbatim} \begin{itemize} \item \textbf{Computing the Convolution} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{def}\NormalTok{ Gauss_convolution_filter_DG(xj, T, c, filter_nodes, H, grid, u, xg, wg):} \OperatorTok{---} \ControlFlowTok{return}\NormalTok{ ustar} \end{Highlighting} \end{Shaded} Now, we have all the ingredients to calculate the convolution of the filter and the DG solution, let us put them together. \begin{itemize} \tightlist \item One important reminder is that the support interval of the convolution operator has been processed periodically, so the evaluation of the filter value need to be processed in the same way too. \item The filter coefficients \(c\) and nodes \(filter\_nodes\) can be computed from the node matrix \(T\) directly. However, they are fixed in this tutorial, so one can calculate them before and store the values. \item Also, the gauss quadrature points \(xg\) and weight \(wg\) can be either provided by outside of the function, or defined inside the function. Whatever, to calculate the integration exactly, the points number should \(\geq degree+1\). 
\end{itemize} \end{itemize} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}20}]:} \PY{k}{def} \PY{n+nf}{Gauss\PYZus{}convolution\PYZus{}filter\PYZus{}DG}\PY{p}{(}\PY{n}{xj}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes}\PY{p}{,} \PY{n}{H}\PY{p}{,} \PY{n}{grid}\PY{p}{,} \PY{n}{u}\PY{p}{,} \PY{n}{xg}\PY{p}{,} \PY{n}{wg}\PY{p}{)}\PY{p}{:} \PY{n}{Ax} \PY{o}{=} \PY{n}{generate\PYZus{}index\PYZus{}Ax}\PY{p}{(}\PY{n}{xj}\PY{p}{,} \PY{n}{grid}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{H}\PY{p}{)} \PY{n}{filter\PYZus{}nodes\PYZus{}local} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{mod}\PY{p}{(}\PY{n}{xj} \PY{o}{\PYZhy{}} \PY{n}{H}\PY{o}{*}\PY{n}{filter\PYZus{}nodes}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{num} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{T}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{)} \PY{n}{order} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{T}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{n}{N} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{grid}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{n}{ustar} \PY{o}{=} \PY{l+m+mf}{0.} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{Ax}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{n}{i} \PY{o}{=} \PY{n}{Ax}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{n}{h} \PY{o}{=} \PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{n}{interfaces} \PY{o}{=} \PY{n}{generate\PYZus{}subintervals\PYZus{}local}\PY{p}{(}\PY{n}{i}\PY{p}{,} \PY{n}{grid}\PY{p}{,} \PY{n}{filter\PYZus{}nodes\PYZus{}local}\PY{p}{)} \PY{k}{for} \PY{n}{alpha} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{interfaces}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{n}{mid} \PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{p}{(}\PY{n}{interfaces}\PY{p}{[}\PY{n}{alpha}\PY{p}{]} \PY{o}{+} \PY{n}{interfaces}\PY{p}{[}\PY{n}{alpha}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n}{scale} \PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{p}{(}\PY{n}{interfaces}\PY{p}{[}\PY{n}{alpha}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{interfaces}\PY{p}{[}\PY{n}{alpha}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{mm} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{xg}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{n}{quad\PYZus{}x} \PY{o}{=} \PY{n}{mid} \PY{o}{+} \PY{n}{scale}\PY{o}{*}\PY{n}{xg}\PY{p}{[}\PY{n}{mm}\PY{p}{]} \PY{n}{filter\PYZus{}x} \PY{o}{=} \PY{p}{(}\PY{n}{xj} \PY{o}{\PYZhy{}} \PY{n}{quad\PYZus{}x}\PY{p}{)}\PY{o}{/}\PY{n}{H} \PY{k}{if} \PY{p}{(}\PY{n}{filter\PYZus{}x} \PY{o}{\PYZlt{}} \PY{n}{np}\PY{o}{.}\PY{n}{min}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{n}{filter\PYZus{}x} \PY{o}{+}\PY{o}{=} \PY{n}{N} \PY{k}{elif} \PY{p}{(}\PY{n}{filter\PYZus{}x} \PY{o}{\PYZgt{}} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{T}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{n}{filter\PYZus{}x} \PY{o}{+}\PY{o}{=} \PY{o}{\PYZhy{}}\PY{n}{N} \PY{n}{ustar} \PY{o}{+}\PY{o}{=} \PY{n}{scale}\PY{o}{*}\PY{n}{wg}\PY{p}{[}\PY{n}{mm}\PY{p}{]}\PY{o}{*}\PY{n+nb}{filter}\PY{p}{(}\PY{n}{filter\PYZus{}x}\PY{p}{,} \PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{)}\PY{o}{*}\PYZbs{} \PY{n}{u\PYZus{}DG}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{quad\PYZus{}x} \PY{o}{\PYZhy{}} 
\PY{p}{(}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{+}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{p}{)}\PY{o}{/}\PY{n}{h}\PY{p}{,} \PY{n}{i}\PY{p}{,} \PY{n}{u}\PY{p}{)} \PY{k}{return} \PY{l+m+mi}{1}\PY{o}{/}\PY{n}{H}\PY{o}{*}\PY{n}{ustar} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}21}]:} \PY{k}{def} \PY{n+nf}{generate\PYZus{}filter}\PY{p}{(}\PY{n}{degree}\PY{p}{)}\PY{p}{:} \PY{n}{num} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1} \PY{n}{order} \PY{o}{=} \PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1} \PY{n}{T} \PY{o}{=} \PY{n}{generate\PYZus{}nodes\PYZus{}T}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{c} \PY{o}{=} \PY{n}{generate\PYZus{}coeff}\PY{p}{(}\PY{n}{num}\PY{p}{,} \PY{n}{order}\PY{p}{,} \PY{n}{T}\PY{p}{)} \PY{n}{filter\PYZus{}nodes} \PY{o}{=} \PY{n}{generate\PYZus{}filter\PYZus{}nodes}\PY{p}{(}\PY{n}{T}\PY{p}{)} \PY{k}{return} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes} \end{Verbatim} \subsubsection{Filtered Solution}\label{filtered-solution} Now, we have implemented the filtering, let us see the performance. \textbf{Note:} Due to the precision issue, in this tutorial, the filtered results for \(N = 160\) with \(\mathbb{P}^3\) polynomials and \(N = 80, 160\) with \($\mathbb{P}^4\) polynomials are contaminated. Since NumPy does not provide a data type with more precision than C long doubles, we will not addressed this issue in this tutorial. However, we believe it is already enough for a demonstration purpose. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}22}]:} \PY{k}{def} \PY{n+nf}{Filter\PYZus{}order}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{degree}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} gauss points for plotting, computing norm error } \PY{n}{Gpn} \PY{o}{=} \PY{l+m+mi}{6} \PY{n}{xg}\PY{p}{,} \PY{n}{wg} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)} \PY{c+c1}{\PYZsh{} gauss points for computing convolution, the number should \PYZgt{}= degree+1} \PY{n}{Gpn2} \PY{o}{=} \PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1} \PY{n}{xg2}\PY{p}{,} \PY{n}{wg2} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn2}\PY{p}{)} \PY{n}{levels}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{;} \PY{n}{err2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{;} \PY{n}{erri} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{;} \PY{c+c1}{\PYZsh{} plotting style} \PY{n}{style} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{b.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{g\PYZhy{}.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{r\PYZhy{}\PYZhy{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{c\PYZhy{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{c+c1}{\PYZsh{} generating the filter} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes} \PY{o}{=} \PY{n}{generate\PYZus{}filter}\PY{p}{(}\PY{n}{degree}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{levels}\PY{p}{)}\PY{p}{:} \PY{n}{N} \PY{o}{=} \PY{n}{number\PYZus{}coarse}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{o}{*}\PY{p}{(}\PY{n}{i}\PY{p}{)} \PY{n}{u} \PY{o}{=} \PY{n}{Input\PYZus{}DG}\PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{degree}\PY{p}{)} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{dx} \PY{o}{=} 
\PY{n}{np}\PY{o}{.}\PY{n}{diff}\PY{p}{(}\PY{n}{x}\PY{p}{)} \PY{c+c1}{\PYZsh{} plotting} \PY{n}{plot\PYZus{}x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)}\PY{p}{;} \PY{n}{plot\PYZus{}y} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)}\PY{p}{;} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{:} \PY{n}{H} \PY{o}{=} \PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{k}{for} \PY{n}{m} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)}\PY{p}{:} \PY{n}{xj} \PY{o}{=} \PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{+}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{+} \PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{xg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2} \PY{n}{ustar} \PY{o}{=} \PY{n}{Gauss\PYZus{}convolution\PYZus{}filter\PYZus{}DG}\PY{p}{(}\PY{n}{xj}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes}\PY{p}{,} \PY{n}{H}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{u}\PY{p}{,} \PY{n}{xg2}\PY{p}{,} \PY{n}{wg2}\PY{p}{)} \PY{n}{err} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{xj}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{ustar}\PY{p}{)} \PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{wg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{*}\PY{n}{err}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n+nb}{max}\PY{p}{(}\PY{n}{erri}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{err}\PY{p}{)} \PY{c+c1}{\PYZsh{} plotting} \PY{n}{plot\PYZus{}x}\PY{p}{[}\PY{n}{j}\PY{o}{*}\PY{n}{Gpn}\PY{o}{+}\PY{n}{m}\PY{p}{]} \PY{o}{=} \PY{n}{xj}\PY{p}{;} \PY{n}{plot\PYZus{}y}\PY{p}{[}\PY{n}{j}\PY{o}{*}\PY{n}{Gpn}\PY{o}{+}\PY{n}{m}\PY{p}{]} \PY{o}{=} \PY{n}{err} \PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{err2}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{/}\PY{l+m+mf}{1.}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{plot\PYZus{}x}\PY{p}{,} \PY{n}{plot\PYZus{}y}\PY{p}{,} \PY{n}{style}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mf}{2.0}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{N = }\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n+nb}{str}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{)} \PY{n}{print\PYZus{}OrderTable}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{erri}\PY{p}{,} \PY{n}{err2}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{yscale}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{log}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{n}{loc}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,}\PY{n}{frameon}\PY{o}{=}\PY{k+kc}{False}\PY{p}{,} \PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{14}\PY{p}{)}\PY{p}{;} \PY{n}{plt}\PY{o}{.}\PY{n}{xlim}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{;} \PY{c+c1}{\PYZsh{}plt.ylim(1.e\PYZhy{}6,1.e0)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{18}\PY{p}{)}\PY{p}{;} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{|error|}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{18}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} to show on nbviewer online, comment out for running on a real server } 
\PY{n}{Filter\PYZus{}order}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] N Linf norm order L2 norm order 20 5.81e-06 -- 4.10e-06 -- 40 1.33e-07 5.45 9.42e-08 5.44 80 3.39e-09 5.30 2.40e-09 5.30 160 9.38e-11 5.18 6.63e-11 5.18 \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_41_1.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}23}]:} \PY{n}{w} \PY{o}{=} \PY{n}{interactive}\PY{p}{(}\PY{n}{Filter\PYZus{}order}\PY{p}{,} \PY{n}{number\PYZus{}coarse}\PY{o}{=}\PY{n}{fixed}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{)}\PY{p}{,} \PY{n}{degree}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntSlider}\PY{p}{(}\PY{n+nb}{min}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,}\PY{n+nb}{max}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{,}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)} \PY{n}{display}\PY{p}{(}\PY{n}{w}\PY{p}{)} \end{Verbatim} \begin{verbatim} interactive(children=(IntSlider(value=2, description='degree', max=4, min=1), Output()), _dom_classes=('widget-interact',)) \end{verbatim} \subsubsection{DG Solution vs. Filtered Solution}\label{dg-solution-vs.-filtered-solution} During previous sections, one can already see the advantages of the filtered solution over the DG solution. Here, to give a easier view, we put the results together for comparison. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}24}]:} \PY{k}{def} \PY{n+nf}{DGvsFilter\PYZus{}order}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{degree}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} gauss points for plotting, computing norm error } \PY{n}{Gpn} \PY{o}{=} \PY{l+m+mi}{6} \PY{n}{xg}\PY{p}{,} \PY{n}{wg} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)} \PY{c+c1}{\PYZsh{} gauss points for computing convolution, the number should \PYZgt{}= degree+1} \PY{n}{Gpn2} \PY{o}{=} \PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1} \PY{n}{xg2}\PY{p}{,} \PY{n}{wg2} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn2}\PY{p}{)} \PY{n}{levels}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{;} \PY{n}{err2\PYZus{}dg} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{;} \PY{n}{erri\PYZus{}dg} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{;} \PY{n}{err2\PYZus{}filter} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{;} \PY{n}{erri\PYZus{}filter} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)} \PY{c+c1}{\PYZsh{} plotting style} \PY{n}{style} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{b.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{g\PYZhy{}.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{r\PYZhy{}\PYZhy{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{c\PYZhy{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{fig} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{)} \PY{n}{fig}\PY{o}{.}\PY{n}{tight\PYZus{}layout}\PY{p}{(}\PY{p}{)} \PY{n}{ax1} \PY{o}{=} \PY{n}{fig}\PY{o}{.}\PY{n}{add\PYZus{}subplot}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{ax2} \PY{o}{=} \PY{n}{fig}\PY{o}{.}\PY{n}{add\PYZus{}subplot}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)} \PY{c+c1}{\PYZsh{} generating the filter} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes} 
\PY{o}{=} \PY{n}{generate\PYZus{}filter}\PY{p}{(}\PY{n}{degree}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{levels}\PY{p}{)}\PY{p}{:} \PY{n}{N} \PY{o}{=} \PY{n}{number\PYZus{}coarse}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{o}{*}\PY{p}{(}\PY{n}{i}\PY{p}{)} \PY{n}{u} \PY{o}{=} \PY{n}{Input\PYZus{}DG}\PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{degree}\PY{p}{)} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{dx} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{diff}\PY{p}{(}\PY{n}{x}\PY{p}{)} \PY{c+c1}{\PYZsh{} plotting} \PY{n}{plot\PYZus{}x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)}\PY{p}{;} \PY{n}{plot\PYZus{}dg} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)}\PY{p}{;} \PY{n}{plot\PYZus{}filtered} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{:} \PY{n}{H} \PY{o}{=} \PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{k}{for} \PY{n}{m} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)}\PY{p}{:} \PY{n}{xj} \PY{o}{=} \PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{+}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{+} \PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{xg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{/}\PY{l+m+mi}{2} \PY{n}{uh} \PY{o}{=} \PY{n}{u\PYZus{}DG}\PY{p}{(}\PY{n}{xg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{p}{,} \PY{n}{j}\PY{p}{,} \PY{n}{u}\PY{p}{)} \PY{n}{ustar} \PY{o}{=} \PY{n}{Gauss\PYZus{}convolution\PYZus{}filter\PYZus{}DG}\PY{p}{(}\PY{n}{xj}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes}\PY{p}{,} \PY{n}{H}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{u}\PY{p}{,} \PY{n}{xg2}\PY{p}{,} \PY{n}{wg2}\PY{p}{)} \PY{n}{err\PYZus{}dg} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{xj}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{uh}\PY{p}{)} \PY{n}{err\PYZus{}filter} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{xj}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{ustar}\PY{p}{)} \PY{n}{err2\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{wg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{*}\PY{n}{err\PYZus{}dg}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{n}{erri\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n+nb}{max}\PY{p}{(}\PY{n}{erri\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{err\PYZus{}dg}\PY{p}{)} \PY{n}{err2\PYZus{}filter}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{*}\PY{n}{wg}\PY{p}{[}\PY{n}{m}\PY{p}{]}\PY{o}{*}\PY{n}{err\PYZus{}filter}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{n}{erri\PYZus{}filter}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n+nb}{max}\PY{p}{(}\PY{n}{erri\PYZus{}filter}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{err\PYZus{}filter}\PY{p}{)} \PY{c+c1}{\PYZsh{} plotting} \PY{n}{plot\PYZus{}x}\PY{p}{[}\PY{n}{j}\PY{o}{*}\PY{n}{Gpn}\PY{o}{+}\PY{n}{m}\PY{p}{]} \PY{o}{=} \PY{n}{xj}\PY{p}{;} \PY{n}{plot\PYZus{}dg}\PY{p}{[}\PY{n}{j}\PY{o}{*}\PY{n}{Gpn}\PY{o}{+}\PY{n}{m}\PY{p}{]} \PY{o}{=} \PY{n}{err\PYZus{}dg}\PY{p}{;} 
\PY{n}{plot\PYZus{}filtered}\PY{p}{[}\PY{n}{j}\PY{o}{*}\PY{n}{Gpn}\PY{o}{+}\PY{n}{m}\PY{p}{]} \PY{o}{=} \PY{n}{err\PYZus{}filter}\PY{p}{;} \PY{n}{err2\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{err2\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{/}\PY{l+m+mf}{1.}\PY{p}{)} \PY{n}{erri\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{erri\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{/}\PY{l+m+mf}{1.}\PY{p}{)} \PY{k}{if} \PY{p}{(}\PY{n}{i} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{)}\PY{p}{:} \PY{n}{y\PYZus{}max} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{plot\PYZus{}dg}\PY{p}{)} \PY{k}{if} \PY{p}{(}\PY{n}{i} \PY{o}{==} \PY{n}{levels}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{n}{y\PYZus{}min} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{min}\PY{p}{(}\PY{n}{plot\PYZus{}filtered}\PY{p}{)} \PY{n}{ax1}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{plot\PYZus{}x}\PY{p}{,} \PY{n}{plot\PYZus{}dg}\PY{p}{,} \PY{n}{style}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mf}{2.0}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{N = }\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n+nb}{str}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{)} \PY{n}{ax2}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{plot\PYZus{}x}\PY{p}{,} \PY{n}{plot\PYZus{}filtered}\PY{p}{,} \PY{n}{style}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mf}{2.0}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{N = }\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n+nb}{str}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DG error: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{print\PYZus{}OrderTable}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{erri\PYZus{}dg}\PY{p}{,} \PY{n}{err2\PYZus{}dg}\PY{p}{)} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Filtered error: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{print\PYZus{}OrderTable}\PY{p}{(}\PY{n}{number\PYZus{}coarse}\PY{p}{,} \PY{n}{erri\PYZus{}filter}\PY{p}{,} \PY{n}{err2\PYZus{}filter}\PY{p}{)} \PY{n}{ax1}\PY{o}{.}\PY{n}{set\PYZus{}yscale}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{log}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax2}\PY{o}{.}\PY{n}{set\PYZus{}yscale}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{log}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax1}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{n}{frameon}\PY{o}{=}\PY{k+kc}{False}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{12}\PY{p}{)} \PY{n}{ax2}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{n}{frameon}\PY{o}{=}\PY{k+kc}{False}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{12}\PY{p}{)} \PY{n}{ax1}\PY{o}{.}\PY{n}{set\PYZus{}xlim}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{;} \PY{n}{ax1}\PY{o}{.}\PY{n}{set\PYZus{}ylim}\PY{p}{(}\PY{n}{y\PYZus{}min}\PY{p}{,}\PY{n}{y\PYZus{}max}\PY{p}{)} \PY{n}{ax2}\PY{o}{.}\PY{n}{set\PYZus{}xlim}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{;} \PY{n}{ax2}\PY{o}{.}\PY{n}{set\PYZus{}ylim}\PY{p}{(}\PY{n}{y\PYZus{}min}\PY{p}{,}\PY{n}{y\PYZus{}max}\PY{p}{)} \PY{n}{ax1}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)}\PY{p}{;} \PY{n}{ax1}\PY{o}{.}\PY{n}{set\PYZus{}ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{|error|}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} 
\PY{n}{ax2}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)}\PY{p}{;} \PY{n}{ax2}\PY{o}{.}\PY{n}{set\PYZus{}ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{|error|}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} \PY{n}{ax1}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DG errors, degree = }\PY{l+s+si}{\PYZpc{}1d}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{n}{degree}\PY{p}{,} \PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} \PY{n}{ax2}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Filtered errors, degree = }\PY{l+s+si}{\PYZpc{}1d}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{n}{degree}\PY{p}{,} \PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} to show on nbviewer online, comment out for running on a real server } \PY{n}{DGvsFilter\PYZus{}order}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] DG error: N Linf norm order L2 norm order 20 1.92e-02 -- 1.07e-04 -- 40 6.80e-03 1.50 1.34e-05 3.00 80 2.40e-03 1.50 1.67e-06 3.00 160 8.50e-04 1.50 2.09e-07 3.00 Filtered error: N Linf norm order L2 norm order 20 5.81e-06 -- 1.68e-11 -- 40 1.33e-07 5.45 8.87e-15 10.89 80 3.39e-09 5.30 5.74e-18 10.59 160 9.38e-11 5.18 4.39e-21 10.35 \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_44_1.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}25}]:} \PY{n}{w} \PY{o}{=} \PY{n}{interactive}\PY{p}{(}\PY{n}{DGvsFilter\PYZus{}order}\PY{p}{,} \PY{n}{number\PYZus{}coarse}\PY{o}{=}\PY{n}{fixed}\PY{p}{(}\PY{l+m+mi}{20}\PY{p}{)}\PY{p}{,} \PY{n}{degree}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntSlider}\PY{p}{(}\PY{n+nb}{min}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,}\PY{n+nb}{max}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{,}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)} \PY{n}{display}\PY{p}{(}\PY{n}{w}\PY{p}{)} \end{Verbatim} \begin{verbatim} interactive(children=(IntSlider(value=2, description='degree', max=4, min=1), Output()), _dom_classes=('widget-interact',)) \end{verbatim} \subsubsection{Efficiency}\label{efficiency} This tutorial is written for demonstration the idea, so the codes are not optimized for computing speed. However, we note that if one has a look at the main function \begin{Shaded} \begin{Highlighting}[] \NormalTok{ Gauss_convolution_filter_DG(xj, T, c, filter_nodes, H, grid, u, xg, wg)} \end{Highlighting} \end{Shaded} Clearly, for computing the filtered solution at different points, it does not need information from the filtered solution. In other word, if one has an ideal parallel computing resource, the computational time of evaluating massive nodes will equal to evaluating a node, which is negligible. \subsection{Features of SIAC Filtering}\label{features-of-siac-filtering} \subsection{A Code to Play}\label{a-code-to-play} In the end, we provide a code you can play with your own codes to see how the filtering can improve your results. It does not matter you are using the DG codes, FEM codes, IGA codes, FVM codes, etc., the code should still work. However, the performance may be varying according to the methods (the best performance comes from the DG methods). We will discuss the difference in another \href{}{notebook}, but it is interesting to test your code to see what happens. 
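As a rough illustration of how a solution produced by another method (FEM, FVM, IGA, \ldots) could be brought into the format used here, the sketch below builds the \((degree+1)\times N\) array of local Legendre coefficients that \texttt{u\_DG} expects, via an element-wise \(L^2\) projection onto the reference element \([-1,1]\) used throughout this tutorial. It is only a sketch under our own assumptions: the helper name \texttt{project\_to\_legendre} is ours, and it assumes your solution can be evaluated pointwise as a vectorized function \(f(x)\).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{import numpy as np}
\NormalTok{import numpy.polynomial.legendre as npleg}

\NormalTok{def project_to_legendre(f, grid, degree):}
\NormalTok{    # Element-wise L2 projection of a pointwise evaluable solution f(x)}
\NormalTok{    # onto the local Legendre basis; returns a (degree+1, N) array.}
\NormalTok{    N = np.size(grid) - 1}
\NormalTok{    xg, wg = npleg.leggauss(degree + 2)   # a few extra quadrature points for safety}
\NormalTok{    u = np.zeros((degree + 1, N))}
\NormalTok{    for j in range(N):}
\NormalTok{        mid  = 0.5*(grid[j] + grid[j+1])}
\NormalTok{        half = 0.5*(grid[j+1] - grid[j])}
\NormalTok{        fx = f(mid + half*xg)             # sample f at the mapped Gauss points}
\NormalTok{        for l in range(degree + 1):}
\NormalTok{            Pl = npleg.Legendre.basis(l)(xg)}
\NormalTok{            u[l, j] = 0.5*(2*l + 1)*np.sum(wg*fx*Pl)}
\NormalTok{    return u}
\end{Highlighting}
\end{Shaded}
For example, \texttt{project\_to\_legendre(lambda x: np.sin(2*np.pi*x), np.linspace(0, 1, 41), 2)} returns such an array for a uniform 40-element mesh; you would then use it in place of the array returned by \texttt{Input\_DG}, which, as noted below, may require a small change to the code or to your data structure.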
For the following codes, you need provide the data of your numerical solutions, the grid, the number of elements, degree of the basis functions and a nodes sequence that you want to improve. \textbf{Note:} you may have to change the either the following code or your data structure a little bit. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}26}]:} \PY{k}{def} \PY{n+nf}{Filtering}\PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{degree}\PY{p}{,} \PY{n}{nodes}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} gauss points for computing convolution, the number should \PYZgt{}= degree+1 } \PY{n}{Gpn} \PY{o}{=} \PY{n}{degree}\PY{o}{+}\PY{l+m+mi}{1} \PY{n}{xg}\PY{p}{,} \PY{n}{wg} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)} \PY{n}{err\PYZus{}dg} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{nodes}\PY{p}{)}\PY{p}{)}\PY{p}{;} \PY{n}{err\PYZus{}filtered} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{nodes}\PY{p}{)}\PY{p}{)}\PY{p}{;} \PY{c+c1}{\PYZsh{} generating the filter} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes} \PY{o}{=} \PY{n}{generate\PYZus{}filter}\PY{p}{(}\PY{n}{degree}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{size}\PY{p}{(}\PY{n}{nodes}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{n}{u} \PY{o}{=} \PY{n}{Input\PYZus{}DG}\PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{degree}\PY{p}{)} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{dx} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{diff}\PY{p}{(}\PY{n}{x}\PY{p}{)} \PY{n}{xj} \PY{o}{=} \PY{n}{nodes}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{n}{j} \PY{o}{=} \PY{n}{point\PYZus{}location}\PY{p}{(}\PY{n}{xj}\PY{p}{,} \PY{n}{x}\PY{p}{)} \PY{c+c1}{\PYZsh{} original numerical solution} \PY{n}{uh} \PY{o}{=} \PY{n}{u\PYZus{}DG}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{xj} \PY{o}{\PYZhy{}} \PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{+}\PY{n}{x}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{)}\PY{p}{)}\PY{o}{/}\PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{,} \PY{n}{j}\PY{p}{,} \PY{n}{u}\PY{p}{)} \PY{n}{err\PYZus{}dg}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{xj}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{uh}\PY{p}{)} \PY{c+c1}{\PYZsh{} filtered solution} \PY{n}{H} \PY{o}{=} \PY{n}{dx}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{n}{ustar} \PY{o}{=} \PY{n}{Gauss\PYZus{}convolution\PYZus{}filter\PYZus{}DG}\PY{p}{(}\PY{n}{xj}\PY{p}{,} \PY{n}{T}\PY{p}{,} \PY{n}{c}\PY{p}{,} \PY{n}{filter\PYZus{}nodes}\PY{p}{,} \PY{n}{H}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{u}\PY{p}{,} \PY{n}{xg}\PY{p}{,} \PY{n}{wg}\PY{p}{)} \PY{n}{err\PYZus{}filtered}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sin}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi}\PY{o}{*}\PY{n}{xj}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{ustar}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{nodes}\PY{p}{,} \PY{n}{err\PYZus{}dg}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{k\PYZhy{}.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mf}{2.0}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DG error}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{nodes}\PY{p}{,} 
\PY{n}{err\PYZus{}filtered}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{r\PYZhy{}.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mf}{2.0}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Filtered error}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{The maximum error at the given nodes: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DG error: }\PY{l+s+si}{\PYZpc{}7.2e}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{err\PYZus{}dg}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print} \PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Filtered error: }\PY{l+s+si}{\PYZpc{}7.2e}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{err\PYZus{}filtered}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{yscale}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{log}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlim}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{n}{frameon}\PY{o}{=}\PY{k+kc}{False}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{12}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{|error|}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\textbf{A test example:} The following code generates the nodes sequence from the Gauss-Legendre points on each element. \textbf{Note:} the default data that come with this tutorial only support \(N = 20, 40, 80, 160\) and \(degree = 1, 2, 3, 4\).
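Any nodes sequence in \([0, 1)\) can be passed to \texttt{Filtering} directly. For instance, a minimal sketch (still relying on the default data, so \(N\) and \(degree\) must be among the values listed above; the choice of 200 points is arbitrary) that filters at equally spaced sample points instead of the Gauss points used in the test example below:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{nodes = np.linspace(0, 1, 200, endpoint=False)  # 200 equally spaced points in [0, 1)}
\NormalTok{Filtering(40, 2, nodes)                         # N = 40 elements, degree 2 DG data}
\end{Highlighting}
\end{Shaded}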
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}27}]:} \PY{k}{def} \PY{n+nf}{Filtering\PYZus{}Gauss}\PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{degree}\PY{p}{,} \PY{n}{Gpn}\PY{p}{)}\PY{p}{:} \PY{n}{xg}\PY{p}{,}\PY{n}{\PYZus{}} \PY{o}{=} \PY{n}{npleg}\PY{o}{.}\PY{n}{leggauss}\PY{p}{(}\PY{n}{Gpn}\PY{p}{)} \PY{n}{nodes} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{N}\PY{o}{*}\PY{n}{Gpn}\PY{p}{)} \PY{n}{grid} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{N}\PY{p}{)}\PY{p}{:} \PY{n}{nodes}\PY{p}{[}\PY{n}{Gpn}\PY{o}{*}\PY{n}{i}\PY{p}{:}\PY{n}{Gpn}\PY{o}{*}\PY{p}{(}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{]} \PY{o}{=} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{p}{(}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{+}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{o}{+} \PY{l+m+mf}{0.5}\PY{o}{*}\PY{n}{xg}\PY{o}{*}\PY{p}{(}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{\PYZhy{}}\PY{n}{grid}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{Filtering}\PY{p}{(}\PY{n}{N}\PY{p}{,} \PY{n}{degree}\PY{p}{,} \PY{n}{nodes}\PY{p}{)} \PY{c+c1}{\PYZsh{} to show on nbviewer online, comment out for running on a real server } \PY{n}{Filtering\PYZus{}Gauss}\PY{p}{(}\PY{l+m+mi}{40}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] The maximum error at the given nodes: DG error: 4.62e-05 Filtered error: 1.33e-07 \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_51_1.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}28}]:} \PY{n}{w} \PY{o}{=} \PY{n}{interactive}\PY{p}{(}\PY{n}{Filtering\PYZus{}Gauss}\PY{p}{,} \PYZbs{} \PY{n}{N}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntText}\PY{p}{(}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{40}\PY{p}{,} \PY{n}{description}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Number of Elements:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{diaabled}\PY{o}{=}\PY{k+kc}{False}\PY{p}{)}\PY{p}{,} \PYZbs{} \PY{n}{degree}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntText}\PY{p}{(}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,}\PY{n}{description}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Degree:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{disabled}\PY{o}{=}\PY{k+kc}{False}\PY{p}{)}\PY{p}{,} \PYZbs{} \PY{n}{Gpn}\PY{o}{=}\PY{n}{widgets}\PY{o}{.}\PY{n}{IntText}\PY{p}{(}\PY{n}{value}\PY{o}{=}\PY{l+m+mi}{6}\PY{p}{,} \PY{n}{description}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Gauss Points:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{disabled}\PY{o}{=}\PY{k+kc}{False}\PY{p}{)}\PY{p}{)} \PY{n}{display}\PY{p}{(}\PY{n}{w}\PY{p}{)} \end{Verbatim} \begin{verbatim} interactive(children=(IntText(value=40, description='Number of Elements:'), IntText(value=2, description='Degree:'), IntText(value=6, description='Gauss Points:'), Output()), _dom_classes=('widget-interact',)) \end{verbatim} \section{References}\label{references} (de Boor, 2001) Carl de Boor, ``\emph{A practical guide to splines}'', 2001. (Bramble and Schatz, 1977) Bramble J. H. and Schatz A. H., ``\emph{Higher order local accuracy by averaging in the finite element method}'', Math. Comp., vol. 31, number 137, pp. 94-\/-111, 1977. 
(Cockburn, Luskin et al., 2003) Cockburn Bernardo, Luskin Mitchell, Shu Chi-Wang and S\"{u}li Endre, ``\emph{Enhanced accuracy by post-processing for finite element methods for hyperbolic equations}'', Math. Comp., vol. 72, number 242, pp. 577-\/-606, 2003. \href{http://dx.doi.org/10.1090/S0025-5718-02-01464-3}{online} (Li, 2015) Xiaozhou Li, ``\emph{Smoothness-Increasing and Accuracy-Conserving (SIAC) Filters for Discontinuous Galerkin Methods.}'', 2015. % Add a bibliography block to the postdoc \end{document}
{ "alphanum_fraction": 0.5230645256, "avg_line_length": 69.4593856655, "ext": "tex", "hexsha": "cc993b384fba77bdeafac251bc0c1841608a7c25", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2021-09-24T08:09:27.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-18T10:20:56.000Z", "max_forks_repo_head_hexsha": "68d5a384dd939b3e8079da4470d6401d11b63a4c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "xiaozhouli/Jupyter", "max_forks_repo_path": "Tutorial_of_Filtering/notebook.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "68d5a384dd939b3e8079da4470d6401d11b63a4c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "xiaozhouli/Jupyter", "max_issues_repo_path": "Tutorial_of_Filtering/notebook.tex", "max_line_length": 510, "max_stars_count": 6, "max_stars_repo_head_hexsha": "68d5a384dd939b3e8079da4470d6401d11b63a4c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "xiaozhouli/Jupyter", "max_stars_repo_path": "Tutorial_of_Filtering/notebook.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-14T09:50:30.000Z", "max_stars_repo_stars_event_min_datetime": "2020-02-27T13:09:06.000Z", "num_tokens": 44615, "size": 101758 }
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} 
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \renewcommand{\hat}{\widehat} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{Lecture 5: Statistical Inference} \author{Zhentao Shi} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname 
PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle Notation: \(\mathbf{X}\) denotes a random variable or random vector. \(\mathbf{x}\) is its realization. \section{Hypothesis Testing}\label{hypothesis-testing} \begin{itemize} \item A \emph{hypothesis} is a statement about the parameter space \(\Theta\). \item The \emph{null hypothesis} \(\Theta_{0}\) is a subset of \(\Theta\) of interest, ideally suggested by scientific theory. \item The \emph{alternative hypothesis} \(\Theta_{1}=\Theta\backslash\Theta_{0}\) is the complement of \(\Theta_{0}\). \item \emph{Hypothesis testing} is a decision whether to accept the null hypothesis or to reject it according to the observed evidence. \item If \(\Theta_0\) is a singleton, we call it a \emph{simple hypothesis}; otherwise we call it a \emph{composite hypothesis}. \item A \emph{test function} is a mapping \[\phi:\mathcal{X}^{n}\mapsto\left\{ 0,1\right\},\] where \(\mathcal{X}\) is the sample space. We accept the null hypothesis if \(\phi\left(\mathbf{x}\right)=0\), or reject it if \(\phi\left(\mathbf{x}\right)=1\). \item The \emph{acceptance region} is defined as \(A_{\phi}=\left\{ \mathbf{x}\in\mathcal{X}^{n}:\phi\left(\mathbf{x}\right)=0\right\} ,\) and the \emph{rejection region} is \(R_{\phi}=\left\{ \mathbf{x}\in\mathcal{X}^{n}:\phi\left(\mathbf{x}\right)=1\right\} .\) \item The \emph{power function} of the test \(\phi\) is \[\beta_{\phi}\left(\theta\right)=P_{\theta}\left(\phi\left(\mathbf{X}\right)=1\right)=E_{\theta}\left(\phi\left(\mathbf{X}\right)\right).\] The power function measures, at a given point \(\theta\), the probability that the test function rejects the null. \item The \emph{power} of \(\phi\) at \(\theta\) for some \(\theta\in\Theta_{1}\) is defined as the value of \(\beta_{\phi}\left(\theta\right)\). 
The \emph{size} of the test \(\phi\) is defined as
\(\alpha=\sup_{\theta\in\Theta_{0}}\beta_{\phi}\left(\theta\right).\) Notice
that the definition of power depends on a \(\theta\) in the alternative,
whereas that of size is independent of \(\theta\) because it takes the
supremum over the null set \(\Theta_0\).
\item
  The \emph{level} of the test \(\phi\) is a value
  \(\alpha\in\left(0,1\right)\) such that
  \(\alpha\geq\sup_{\theta\in\Theta_{0}}\beta_{\phi}\left(\theta\right)\),
  which is often used when it is difficult to attain the exact supremum. A
  test of size \(\alpha\) is also a test of level \(\alpha\) (or of any
  larger level), while a test of level \(\alpha\) must have size smaller
  than or equal to \(\alpha\).
\end{itemize}

\begin{verbatim}
| decision      | reject $H_{1}$ | reject $H_{0}$ |
|---------------|----------------|----------------|
| $H_{0}$ true  | correct        | Type I error   |
| $H_{0}$ false | Type II error  | correct        |
\end{verbatim}

\begin{itemize}
\tightlist
\item
  size = \emph{P}(reject \(H_{0}\) when \(H_{0}\) true)
\item
  power = \emph{P}(reject \(H_{0}\) when \(H_{0}\) false)
\item
  The \emph{probability of committing Type I error} is
  \(\beta_{\phi}\left(\theta\right)\) for some \(\theta\in\Theta_{0}\).
\item
  The \emph{probability of committing Type II error} is
  \(1-\beta_{\phi}\left(\theta\right)\) for \(\theta\in\Theta_{1}\).
\end{itemize}

The philosophy of hypothesis testing has long been debated. At present the
prevailing framework in statistics textbooks is the frequentist perspective.
A frequentist views the parameter as a fixed constant and takes a
conservative attitude toward the Type I error: only if overwhelming evidence
is demonstrated should a researcher reject the null. Under this philosophy
of protecting the null hypothesis, a desirable test should have a small
level. Conventionally we take \(\alpha=0.01\), 0.05 or 0.1. There can be
many tests of the correct size.

\bigskip

\textbf{Example} A trivial test function,
\(\phi(\mathbf{X})=1\left\{ 0\leq U\leq\alpha\right\}\), where \(U\) is a
random variable from a uniform distribution on \(\left[0,1\right]\), has the
correct size \(\alpha\) but trivial power: it rejects with probability
\(\alpha\) regardless of the data. Another trivial test function,
\(\phi\left(\mathbf{X}\right)=1\), attains the largest possible power, but
its size is 1, which makes it useless.

\bigskip

Usually, we design a test by proposing a test statistic
\(T_{n}:\mathcal{X}^{n}\mapsto\mathbb{R}^{+}\) and a critical value
\(c_{1-\alpha}\). Given \(T_n\) and \(c_{1-\alpha}\), we write the test
function as
\[\phi\left(\mathbf{X}\right)=1\left\{ T_{n}\left(\mathbf{X}\right)>c_{1-\alpha}\right\}.\]
To ensure such a \(\phi\left(\mathbf{x}\right)\) has the correct size, we
work out the distribution of \(T_{n}\) under the null hypothesis (called the
\emph{null distribution}), and choose the critical value \(c_{1-\alpha}\)
according to the null distribution and the desired size or level \(\alpha\).
The concept of \emph{level} is useful if we do not have sufficient
information to derive the exact size of a test.

\bigskip
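As a small numerical illustration of this recipe (a sketch that assumes a
Python environment with \texttt{numpy} and \texttt{scipy}; the sample size
and the number of replications are arbitrary choices), consider testing
\(H_{0}\): \(\theta\leq0\) against \(H_{1}\): \(\theta>0\) for the mean
\(\theta\) of a \(N\left(\theta,1\right)\) population using
\(T_{n}=\sqrt{n}\bar{X}\). The critical value is read off the null
distribution, and the size can be checked by Monte Carlo:

\begin{verbatim}
# Illustrative sketch: pick c_{1-alpha} from the null distribution of a
# one-sided z-test and verify its size by simulation. The sample size and
# the number of replications are arbitrary.
import numpy as np
from scipy.stats import norm

alpha, n, reps = 0.05, 6, 100_000
c = norm.ppf(1 - alpha)            # critical value from the null distribution
rng = np.random.default_rng(0)

# Simulate at theta = 0, the boundary of the null, where the size is attained.
xbar = rng.normal(loc=0.0, scale=1.0, size=(reps, n)).mean(axis=1)
T = np.sqrt(n) * xbar              # test statistic; N(0,1) under the null
print(c, (T > c).mean())           # rejection frequency is close to alpha
\end{verbatim}

\bigskip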
\textbf{Example} Suppose \(\left(X_{1i},X_{2i}\right)_{i=1}^{n}\) are
randomly drawn from some unknown joint distribution, but we know that each
marginal distribution is \(X_{ji}\sim N\left(\theta_{j},1\right)\) for
\(j=1,2\). In order to test the joint hypothesis
\(\theta_{1}=\theta_{2}=0\), we can construct a test function
\[\phi\left(\mathbf{X}_{1},\mathbf{X}_{2}\right)=1\left\{ \left\{ \sqrt{n}\left|\overline{X}_{1}\right|\geq c_{1-\alpha/4}\right\} \cup\left\{ \sqrt{n}\left|\overline{X}_{2}\right|\geq c_{1-\alpha/4}\right\} \right\} ,\]
where \(c_{1-\alpha/4}\) is the \(\left(1-\alpha/4\right)\)-th quantile of
the standard normal distribution. The level of this test is
\[\begin{aligned}
P_{\theta_{1}=\theta_{2}=0}\left(\phi\left(\mathbf{X}_{1},\mathbf{X}_{2}\right)\right) & \leq P_{\theta_{1}=0}\left(\sqrt{n}\left|\overline{X}_{1}\right|\geq c_{1-\alpha/4}\right)+P_{\theta_{2}=0}\left(\sqrt{n}\left|\overline{X}_{2}\right|\geq c_{1-\alpha/4}\right)\\
 & =\alpha/2+\alpha/2=\alpha,\end{aligned}\]
where the inequality follows from the \emph{Bonferroni inequality}
\(P\left(A\cup B\right)\leq P\left(A\right)+P\left(B\right)\). Therefore,
the level of \(\phi\left(\mathbf{X}_{1},\mathbf{X}_{2}\right)\) is
\(\alpha\), but the exact size is unknown without knowledge of the joint
distribution. (Even if we know the correlation of \(X_{1i}\) and
\(X_{2i}\), putting two marginally normal distributions together does not
make a jointly normal vector in general.)

\bigskip

Denote the class of test functions of level \(\alpha\) as
\(\Psi_{\alpha}=\left\{ \phi:\sup_{\theta\in\Theta_{0}}\beta_{\phi}\left(\theta\right)\leq\alpha\right\}\).
A \emph{uniformly most powerful test} \(\phi^{*}\in\Psi_{\alpha}\) is a test
function such that, for every \(\phi\in\Psi_{\alpha},\)
\[\beta_{\phi^{*}}\left(\theta\right)\geq\beta_{\phi}\left(\theta\right)\]
uniformly over \(\theta\in\Theta_{1}\).

\textbf{Example} Suppose a random sample of size 6 is generated from
\[\left(X_{1},\ldots,X_{6}\right)\sim\text{i.i.d. }N\left(\theta,1\right),\]
where \(\theta\) is unknown. We want to infer the population mean of the
normal distribution. The null hypothesis is \(H_{0}\): \(\theta\leq0\) and
the alternative is \(H_{1}\): \(\theta>0\). All tests in
\[\Psi=\left\{ 1\left\{ \bar{X}\geq c/\sqrt{6}\right\} :c\geq1.64\right\}\]
have the correct level. Since \(\bar{X}\sim N\left(\theta,1/6\right)\), the
power function for the tests in \(\Psi\) is
\[\beta_{\phi}\left(\theta\right)=P\left(\bar{X}\geq\frac{c}{\sqrt{6}}\right)=P\left(\sqrt{6}\left(\bar{X}-\theta\right)\geq c-\sqrt{6}\theta\right)=1-\Phi\left(c-\sqrt{6}\theta\right),\]
where \(\Phi\) is the cdf of the standard normal. It is clear that
\(\beta_{\phi}\left(\theta\right)\) is monotonically decreasing in \(c\).
Thus the test function
\[\phi\left(\mathbf{X}\right)=1\left\{ \bar{X}\geq 1.64/\sqrt{6}\right\}\]
is the most powerful test in \(\Psi\), as \(c=1.64\) is the lower bound
that \(\Psi\) allows.

Another commonly used indicator in hypothesis testing is the \(p\)-value:
\[\sup_{\theta\in\Theta_{0}}P_{\theta}\left(T_{n}\left(\mathbf{x}\right)\leq T_{n}\left(\mathbf{X}\right)\right).\]
In the above expression, \(T_{n}\left(\mathbf{x}\right)\) is the realized
value of the test statistic \(T_{n}\), while
\(T_{n}\left(\mathbf{X}\right)\) is the random variable generated by
\(\mathbf{X}\) under the null \(\theta\in\Theta_{0}\).
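As a concrete illustration (a sketch assuming a Python environment with
\texttt{numpy} and \texttt{scipy}; the realized sample below is
hypothetical), the \(p\)-value of the one-sided test in the example above is
\(1-\Phi\left(\sqrt{6}\bar{x}\right)\), and rejecting whenever the
\(p\)-value falls below \(\alpha=0.05\) gives (up to the rounding of 1.64)
the same decision as the test \(1\left\{ \bar{X}\geq1.64/\sqrt{6}\right\}\):

\begin{verbatim}
# Illustrative sketch: the p-value of the one-sided test above for one
# hypothetical realized sample.
import numpy as np
from scipy.stats import norm

x = np.array([0.4, 1.1, -0.3, 0.8, 1.5, 0.2])   # hypothetical realization
T_realized = np.sqrt(len(x)) * x.mean()          # sqrt(6) * xbar

# The supremum over the null {theta <= 0} is attained at theta = 0, where
# T ~ N(0,1), so the p-value is P(T >= T_realized) = 1 - Phi(T_realized).
p_value = norm.sf(T_realized)

reject_by_p = p_value < 0.05
reject_by_test = x.mean() >= 1.64 / np.sqrt(6)
print(p_value, reject_by_p, reject_by_test)      # the two decisions coincide
\end{verbatim}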
The interpretation of the \(p\)-value is tricky. The \(p\)-value is the
probability of observing a test statistic \(T_n\left(\mathbf{X}\right)\) at
least as large as the realized \(T_n\left(\mathbf{x}\right)\) if the null
hypothesis were true. It is \emph{not} the probability that the null
hypothesis is true. Under the frequentist perspective, the null hypothesis
is either true or false, with certainty. The randomness of a test comes
only from sampling, not from the hypothesis. The \(p\)-value measures
whether the data are consistent with the null hypothesis, or whether the
evidence from the data is compatible with the null hypothesis. The
\(p\)-value is closely related to the corresponding test: when the
\(p\)-value is smaller than the specified test size \(\alpha\), the test
rejects the null hypothesis.

\section{Confidence Interval}\label{confidence-interval}

An \emph{interval estimate} is a function
\(C:\mathcal{X}^{n}\mapsto\left\{ \Theta':\Theta'\subseteq\Theta\right\}\)
that maps a point in the sample space to a subset of the parameter space.
The \emph{coverage probability} of an \emph{interval estimator}
\(C\left(\mathbf{X}\right)\) is defined as
\(P_{\theta}\left(\theta\in C\left(\mathbf{X}\right)\right)\). The coverage
probability is the frequency with which the interval estimator captures the
true parameter that generates the sample (from the frequentist perspective,
the parameter is fixed while the region is random). It is \emph{not} the
probability that \(\theta\) lies inside a given region (from the Bayesian
perspective, the parameter is random while the region is fixed conditional
on \(\mathbf{X}\)).

\textbf{Example} Suppose a random sample of size 6 is generated from
\[\left(X_{1},\ldots,X_{6}\right)\sim\text{i.i.d. }N\left(\theta,1\right).\]
Find the coverage probability of the random interval
\[\left[\bar{X}-1.96/\sqrt{6},\bar{X}+1.96/\sqrt{6}\right].\]

Hypothesis testing and confidence intervals are closely related. Sometimes
it is difficult to construct a confidence interval directly, but easier to
test a hypothesis. One way to construct a confidence interval is by
\emph{inverting a corresponding test}. Suppose that, for each
\(\theta\in\Theta\), \(\phi_{\theta}\) is a size-\(\alpha\) test of the null
hypothesis that the true parameter equals \(\theta\). If
\(C\left(\mathbf{X}\right)\) is constructed as
\[C\left(\mathbf{x}\right)=\left\{ \theta\in\Theta:\phi_{\theta}\left(\mathbf{x}\right)=0\right\},\]
then its coverage probability is
\[P_{\theta}\left(\theta\in C\left(\mathbf{X}\right)\right)=1-P_{\theta}\left(\phi_{\theta}\left(\mathbf{X}\right)=1\right)=1-\alpha.\]

\section{Bayesian Credible Set}

The Bayesian framework offers a coherent and natural language for
statistical decisions. However, the major criticism against Bayesian
statistics is the arbitrariness of the choice of the prior. In the Bayesian
framework, both the data $\mathbf{X}_n$ and the parameter $\theta$ are
random variables. Before observing the data, the researcher holds a
\emph{prior distribution} $\pi$ about $\theta$. After observing the data,
she updates the prior distribution to a \emph{posterior distribution}
$p(\theta | \mathbf{X}_n)$. \emph{Bayes' Theorem} connects the prior and
the posterior as
\[ p( \theta| \mathbf{X}_n ) \propto f( \mathbf{X}_n | \theta ) \pi(\theta), \]
where $f( \mathbf{X}_n | \theta )$ is the likelihood function.

Here is a classical example to illustrate the Bayesian approach to
statistical inference. Suppose we have an iid sample $(X_1,\ldots,X_n)$
drawn from a normal distribution with unknown mean $\theta$ and known
variance $\sigma^2$. For a researcher with the prior distribution
$\theta \sim N(\theta_0, \sigma_0^2)$, routine calculation shows that her
posterior is also a normal distribution,
\[ \theta | \mathbf{x} \sim N\left( \tilde{\theta}, \tilde{\sigma}^2 \right), \]
where
$ \tilde{\theta} = \frac{\sigma^2}{n \sigma_0^2 + \sigma^2} \theta_0 + \frac{n\sigma_0^2}{n\sigma_0^2 + \sigma^2} \bar{x} $
and $\tilde{\sigma}^2 = \frac{\sigma_0^2 \sigma^2}{n\sigma_0^2 + \sigma^2}$.
Thus the Bayesian credible set is
\[ \left( \tilde{\theta} - z_{1-\alpha/2 } \cdot \tilde{\sigma}, \tilde{\theta} + z_{1-\alpha/2 }\cdot \tilde{\sigma} \right). \]
This posterior distribution depends on $\theta_0$ and $\sigma_0^2$ from the
prior. When the sample size is sufficiently large, the posterior can be
approximated by $N( \bar{x}, \sigma^2 / n )$, as the prior information is
overwhelmed by the information accumulated from the data. In contrast, a
frequentist uses the estimator
$\hat{\theta} = \bar{x} \sim N(\theta, \sigma^2 / n)$. Her confidence
interval is
$$ \left( \bar{x} - z_{1-\alpha/2 } \cdot \sigma/\sqrt{n}, \bar{x} + z_{1-\alpha/2 } \cdot \sigma/\sqrt{n} \right). $$

\section{Application in OLS}\label{application-in-ols}

\subsection{Wald Test}\label{wald-test}

Suppose the OLS estimator \(\widehat{\beta}\) is asymptotically normal,
i.e.
\[\sqrt{n}\left(\widehat{\beta}-\beta\right)\stackrel{d}{\to}N\left(0,\Omega\right),\]
where \(\Omega\) is a \(K\times K\) positive definite covariance matrix. If
\(R\) is a \(q\times K\) constant matrix, then
\(R\sqrt{n}\left(\widehat{\beta}-\beta\right)\stackrel{d}{\to}N\left(0,R\Omega R'\right)\).
Moreover, if \(\mbox{rank}\left(R\right)=q\), then
\[n\left(\widehat{\beta}-\beta\right)'R'\left(R\Omega R'\right)^{-1}R\left(\widehat{\beta}-\beta\right)\stackrel{d}{\to}\chi_{q}^{2}.\]
Now we intend to test the null hypothesis \(R\beta=r\). Under the null, the
Wald statistic
\[W_{n}=n\left(R\widehat{\beta}-r\right)'\left(R\widehat{\Omega}R'\right)^{-1}\left(R\widehat{\beta}-r\right)\stackrel{d}{\to}\chi_{q}^{2},\]
where \(\widehat{\Omega}\) is a consistent estimator of \(\Omega\).

\textbf{Example} (Single test) Consider the linear regression
\[\begin{aligned}
y_{i} & = x_{i}'\beta+e_{i}=\sum_{k=1}^{5}\beta_{k}x_{ik}+e_{i},\nonumber \\
E\left[e_{i}x_{i}\right] & = \mathbf{0}_{5},\label{eq:example}\end{aligned}
\]
where \(y_{i}\) is the wage and
\[x_{i}=\left(\mbox{edu},\mbox{age},\mbox{experience},\mbox{experience}^{2},1\right)'.\]
To test whether \emph{education} affects \emph{wage}, we specify the null
hypothesis \(\beta_{1}=0\). Let \(R=\left(1,0,0,0,0\right)\). Under the
null,
\[\sqrt{n}\widehat{\beta}_{1}=\sqrt{n}\left(\widehat{\beta}_{1}-\beta_{1}\right)=\sqrt{n}R\left(\widehat{\beta}-\beta\right)\stackrel{d}{\to}N\left(0,R\Omega R'\right)=N\left(0,\Omega_{11}\right),\label{eq:R11}\]
where \(\Omega_{11}\) is the \(\left(1,1\right)\) (scalar) element of
\(\Omega\). Therefore,
\[\sqrt{n}\frac{\widehat{\beta}_{1}}{\widehat{\Omega}_{11}^{1/2}}=\sqrt{\frac{\Omega_{11}}{\widehat{\Omega}_{11}}}\sqrt{n}\frac{\widehat{\beta}_{1}}{\Omega_{11}^{1/2}}.\]
If \(\widehat{\Omega}\stackrel{p}{\to}\Omega\), then
\(\left(\Omega_{11}/\widehat{\Omega}_{11}\right)^{1/2}\stackrel{p}{\to}1\)
by the continuous mapping theorem. As
\(\sqrt{n}\widehat{\beta}_{1}/\Omega_{11}^{1/2}\stackrel{d}{\to}N\left(0,1\right)\),
we conclude by Slutsky's theorem that
\(\sqrt{n}\widehat{\beta}_{1}/\widehat{\Omega}_{11}^{1/2}\stackrel{d}{\to}N\left(0,1\right).\)

The above example is a test about a single coefficient, and the test
statistic is essentially a \emph{t}-statistic. The following example gives
a test about a joint hypothesis.

\textbf{Example} (Joint test) We want to simultaneously test
\(\beta_{1}=1\) and \(\beta_{3}+\beta_{4}=2\) in the above example. The
null hypothesis can be expressed in the general form \(R\beta=r\), where
the restriction matrix \(R\) is
\[R=\begin{pmatrix}1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 0
\end{pmatrix}\]
and \(r=\left(1,2\right)'\). Once we figure out \(R\), it is routine to
construct the test.
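For instance, the statistic \(W_{n}\) for this joint hypothesis might be
assembled as in the following sketch (which assumes a Python environment
with \texttt{numpy} and \texttt{scipy}; the values of \(\widehat{\beta}\)
and \(\widehat{\Omega}\) below are hypothetical placeholders rather than
estimates obtained from data):

\begin{verbatim}
# Illustrative sketch of the Wald statistic W_n for R beta = r.
# beta_hat and Omega_hat are hypothetical placeholders, not real estimates.
import numpy as np
from scipy.stats import chi2

n = 500
beta_hat = np.array([1.1, 0.3, 1.2, 0.7, 2.0])     # hypothetical estimates
Omega_hat = np.diag([2.0, 1.5, 1.0, 0.8, 3.0])     # hypothetical avar matrix

R = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0, 0.0]])
r = np.array([1.0, 2.0])

diff = R @ beta_hat - r
V = R @ Omega_hat @ R.T                            # q x q, here q = 2
W = n * diff @ np.linalg.solve(V, diff)            # Wald statistic

p_value = chi2.sf(W, R.shape[0])                   # compare with chi2(q)
print(W, p_value)
\end{verbatim}

In an application, \(\widehat{\beta}\) and \(\widehat{\Omega}\) would
instead come from the OLS fit and a consistent (for example,
heteroskedasticity-robust) estimator of the asymptotic variance.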
These two examples involve linear restrictions. In order to test a
nonlinear hypothesis, we need the so-called \emph{delta method}.

\textbf{Delta method} If
\(\sqrt{n}\left(\widehat{\theta}-\theta_{0}\right)\stackrel{d}{\to}N\left(0,\Omega_{K\times K}\right)\),
and \(f:\mathbb{R}^{K}\mapsto\mathbb{R}^{q}\) is a continuously
differentiable function for some \(q\leq K\), then
\[\sqrt{n}\left(f\left(\widehat{\theta}\right)-f\left(\theta_{0}\right)\right)\stackrel{d}{\to}N\left(0,\frac{\partial f}{\partial\theta}\left(\theta_{0}\right)\Omega\frac{\partial f}{\partial\theta}\left(\theta_{0}\right)'\right).\]
This result can be easily shown by a mean-value expansion
\[ f(\hat{\theta} ) - f(\theta_0) = \frac{ \partial f(\tilde{\theta}) }{\partial \theta} (\hat{\theta} - \theta_0), \]
where \(\tilde{\theta}\) lies on the line segment connecting
\(\hat{\theta}\) and \(\theta_0\). Multiplying both sides by \(\sqrt{n}\)
and noticing that \(\tilde{\theta} \stackrel{p}{\to} \theta_0\), by the
continuous mapping theorem and Slutsky's theorem we have
\(\sqrt{n} (f(\hat{\theta} ) - f(\theta_0) ) \stackrel{d}{\to} \frac{\partial f}{\partial\theta}\left(\theta_{0}\right) N(0,\Omega). \)

In the linear regression example, the optimal experience level can be found
by setting the first-order condition with respect to experience to zero,
\(\beta_{3}+2\beta_{4}\mbox{experience}^{*}=0\). We test the hypothesis
that the optimal experience level is 20 years; in other words,
\[\mbox{experience}^{*}=-\frac{\beta_{3}}{2\beta_{4}}=20.\]
This is a nonlinear hypothesis. If \(q\leq K\), where \(q\) is the number
of restrictions, we have
\[n\left(f\left(\widehat{\theta}\right)-f\left(\theta_{0}\right)\right)'\left(\frac{\partial f}{\partial\theta}\left(\theta_{0}\right)\Omega\frac{\partial f}{\partial\theta}\left(\theta_{0}\right)'\right)^{-1}\left(f\left(\widehat{\theta}\right)-f\left(\theta_{0}\right)\right)\stackrel{d}{\to}\chi_{q}^{2},\]
where in this example \(\theta=\beta\) and
\(f\left(\beta\right)=-\beta_{3}/\left(2\beta_{4}\right)\). The gradient is
\[\frac{\partial f}{\partial\beta}\left(\beta\right)=\left(0,0,-\frac{1}{2\beta_{4}},\frac{\beta_{3}}{2\beta_{4}^{2}},0\right).\]
Since \(\widehat{\beta}\stackrel{p}{\to}\beta_{0}\), by the continuous
mapping theorem, if \(\beta_{0,4}\neq0\), we have
\(\frac{\partial}{\partial\beta}f\left(\widehat{\beta}\right)\stackrel{p}{\to}\frac{\partial}{\partial\beta}f\left(\beta_{0}\right)\).
Therefore, the (nonlinear) Wald statistic is
\[W_{n}=n\left(f\left(\widehat{\beta}\right)-20\right)'\left(\frac{\partial f}{\partial\beta}\left(\widehat{\beta}\right)\widehat{\Omega}\frac{\partial f}{\partial\beta}\left(\widehat{\beta}\right)'\right)^{-1}\left(f\left(\widehat{\beta}\right)-20\right)\stackrel{d}{\to}\chi_{1}^{2}.\]
This is a valid test with correct asymptotic size. However, we can
equivalently state the null hypothesis as \(\beta_{3}+40\beta_{4}=0\) and
construct a Wald statistic accordingly. In general, a linear hypothesis is
preferred to a nonlinear one, due to the approximation error in the delta
method under the null and, more importantly, the invalidity of the Taylor
expansion under the alternative. It also highlights the problem that the
Wald test is \emph{not invariant} to re-parametrization.

\subsection{Lagrange Multiplier Test*}\label{lagrangian-multiplier-test}

Restricted least squares solves
\[\min_{\beta}\left(y-X\beta\right)'\left(y-X\beta\right)\mbox{ s.t. }R\beta=r.\]
We turn it into an unrestricted problem via the Lagrangian
\[L\left(\beta,\lambda\right)=\frac{1}{2n}\left(y-X\beta\right)'\left(y-X\beta\right)+\lambda'\left(R\beta-r\right).\]
The first-order conditions are
\begin{align*}
\frac{\partial}{\partial\beta}L & = -\frac{1}{n}X'\left(y-X\tilde{\beta}\right)+R'\tilde{\lambda}=-\frac{1}{n}X'e+\frac{1}{n}X'X\left(\tilde{\beta}-\beta^{*}\right)+R'\tilde{\lambda}=0,\\
\frac{\partial}{\partial\lambda}L & = R\tilde{\beta}-r=R\left(\tilde{\beta}-\beta^{*}\right)=0.
\end{align*}
Combining these two equations into a linear system,
\[
\begin{pmatrix} \widehat{Q} & R'\\
R & 0
\end{pmatrix}\begin{pmatrix}\tilde{\beta}-\beta^{*}\\
\tilde{\lambda}
\end{pmatrix}=\begin{pmatrix}\frac{1}{n}X'e\\
0
\end{pmatrix},\]
where \(\widehat{Q} = X'X/n\). Thus we can explicitly express the estimator
as
\[\begin{aligned}
\begin{pmatrix}\tilde{\beta}-\beta^{*}\\
\tilde{\lambda}
\end{pmatrix} & =\begin{pmatrix}\widehat{Q} & R'\\
R & 0
\end{pmatrix}^{-1}
\begin{pmatrix}\frac{1}{n}X'e\\
0
\end{pmatrix}\\
 & = \begin{pmatrix}\widehat{Q}^{-1}-\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1} & \widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}\\
\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1} & -\left(R\widehat{Q}^{-1}R'\right)^{-1}
\end{pmatrix}
\begin{pmatrix}\frac{1}{n}X'e\\
0
\end{pmatrix}.\end{aligned}\]
We conclude that
\[ \sqrt{n}\tilde{\lambda}=\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\frac{1}{\sqrt{n}}X'e \stackrel{d}{\to} N\left(0,\left(RQ^{-1}R'\right)^{-1}RQ^{-1}\Omega Q^{-1}R'\left(RQ^{-1}R'\right)^{-1}\right).\]
Let
\(W=\left(RQ^{-1}R'\right)^{-1}RQ^{-1}\Omega Q^{-1}R'\left(RQ^{-1}R'\right)^{-1}\);
then
\[n\tilde{\lambda}'W^{-1}\tilde{\lambda}\stackrel{d}{\to} \chi_{q}^{2}.\]
Under homoskedasticity, \(\Omega=\sigma^{2}Q\), so
\(W=\sigma^{2}\left(RQ^{-1}R'\right)^{-1}RQ^{-1}QQ^{-1}R'\left(RQ^{-1}R'\right)^{-1}=\sigma^{2}\left(RQ^{-1}R'\right)^{-1}.\)
Replacing \(W\) with its estimate \(\hat{W}\),
\[\begin{aligned}
\frac{n\tilde{\lambda}'R\hat{Q}^{-1}R'\tilde{\lambda}}{\hat{\sigma}^{2}} & =\frac{1}{n\hat{\sigma}^{2}}\left(y-X\tilde{\beta}\right)'X \hat{Q}^{-1} R' (R \hat{Q}^{-1} R')^{-1} R \hat{Q}^{-1} X'\left(y-X\tilde{\beta}\right)\\
 & =\frac{1}{n\hat{\sigma}^{2}}\left(y-X\tilde{\beta}\right)'P_{X \hat{Q}^{-1} R'}\left(y-X\tilde{\beta}\right).\end{aligned}\]

\subsection{Likelihood-Ratio Test*}\label{likelihood-ratio-test}

For the likelihood-ratio test, the starting point can be a criterion
function
\(L\left(\beta\right)=\frac{1}{2n}\left(y-X\beta\right)'\left(y-X\beta\right)\);
it does not have to be the likelihood function.
A second-order expansion of the criterion function around the unrestricted
estimator \(\widehat{\beta}\), with \(\dot{\beta}\) lying between
\(\tilde{\beta}\) and \(\widehat{\beta}\), gives
\[\begin{aligned}
L\left(\tilde{\beta}\right)-L\left(\widehat{\beta}\right) & =\frac{\partial L}{\partial\beta}\left(\widehat{\beta}\right)'\left(\tilde{\beta}-\widehat{\beta}\right)+\frac{1}{2}\left(\tilde{\beta}-\widehat{\beta}\right)'\frac{\partial^{2} L}{\partial\beta\partial\beta'}\left(\dot{\beta}\right)\left(\tilde{\beta}-\widehat{\beta}\right)\\
 & =0+\frac{1}{2}\left(\tilde{\beta}-\widehat{\beta}\right)'\widehat{Q}\left(\tilde{\beta}-\widehat{\beta}\right).\end{aligned}\]
From the derivation of the LM test, we have
\[\begin{aligned}
\sqrt{n}\left(\tilde{\beta}-\beta^{*}\right) & = \left(\widehat{Q}^{-1}-\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\right)\frac{1}{\sqrt{n}}X'e\\
 & = \sqrt{n}\left(X'X\right)^{-1}X'e-\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\frac{1}{\sqrt{n}}X'e\\
 & = \sqrt{n}\left(\widehat{\beta}-\beta^{*}\right)-\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\frac{1}{\sqrt{n}}X'e
\end{aligned}\]
Therefore
\[\sqrt{n}\left(\tilde{\beta}-\widehat{\beta}\right)=-\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\frac{1}{\sqrt{n}}X'e\]
and
\[\begin{aligned}
 & n\left(\tilde{\beta}-\widehat{\beta}\right)'\widehat{Q}\left(\tilde{\beta}-\widehat{\beta}\right)\\
 & = \frac{1}{\sqrt{n}}e'X\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\widehat{Q}\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\frac{1}{\sqrt{n}}X'e\\
 & = \frac{1}{\sqrt{n}}e'X\widehat{Q}^{-1}R'\left(R\widehat{Q}^{-1}R'\right)^{-1}R\widehat{Q}^{-1}\frac{1}{\sqrt{n}}X'e
\end{aligned}\]
In general, this is asymptotically a quadratic form in a normally
distributed vector. If the errors are homoskedastic, then
\[\left(R\widehat{Q}^{-1}R'\right)^{-1/2}R\widehat{Q}^{-1}\frac{1}{\sqrt{n}}X'e\]
has asymptotic variance
\[\sigma^{2}\left(RQ^{-1}R'\right)^{-1/2}RQ^{-1}QQ^{-1}R'\left(RQ^{-1}R'\right)^{-1/2}=\sigma^{2}I_{q}.\]

We can view the optimization of the log-likelihood as a two-step
optimization, with the inner step \(\sigma=\sigma\left(\beta\right)\). By
the envelope theorem, when we take the derivative with respect to
\(\beta\), we can ignore the indirect effect of
\(\partial\sigma\left(\beta\right)/\partial\beta\).

% Add a bibliography block to the postdoc

\end{document}
{ "alphanum_fraction": 0.6731967716, "avg_line_length": 52.9797023004, "ext": "tex", "hexsha": "68b9d8c261143d182104b133a151c7dc41a9dd75", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-06-27T03:39:58.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-27T03:39:58.000Z", "max_forks_repo_head_hexsha": "8518c8ad7162f11fab7db3550f0ad87fece3811a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fatjudy/Econ5121A", "max_forks_repo_path": "lec_notes_ipynb/lecture5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8518c8ad7162f11fab7db3550f0ad87fece3811a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fatjudy/Econ5121A", "max_issues_repo_path": "lec_notes_ipynb/lecture5.tex", "max_line_length": 309, "max_stars_count": null, "max_stars_repo_head_hexsha": "8518c8ad7162f11fab7db3550f0ad87fece3811a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fatjudy/Econ5121A", "max_stars_repo_path": "lec_notes_ipynb/lecture5.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14653, "size": 39152 }
\section{Adjusting the queuing model}\label{sec:scenarios}

This section comprises several `what-if' scenarios --- a classic component
of healthcare operational research --- under the novel parameterisation of
the queue established in Section~\ref{sec:model}. The outcomes of interest
in this work are server (resource) utilisation and system times, as these
metrics capture the driving forces of cost and flow as well as the overall
state of the system, its staff and its patients. Specifically, the
objective of these experiments is to address the following questions:

\begin{itemize}
    \item How would the system be affected by a change in overall patient
        arrivals?
    \item How is the system affected by a change in resource availability
        (i.e.\ a change in \(c\))?
    \item How is the system affected by patients moving between clusters?
\end{itemize}

Owing to the nature of the observed data, the queuing model
parameterisation and its assumptions, the effects on the chosen metrics in
each scenario are given in relative terms with respect to the base case,
namely the results generated from the best parameter set recorded in
Table~\ref{tab:comparison}. In particular, the data from each scenario are
scaled by the corresponding median value in the base case, meaning that a
metric value of 1 is `normal'. As mentioned in Section~\ref{sec:intro}, the
source code used throughout this work is available and has been archived
online~\cite{Wilde2020github}. In addition, the datasets generated from the
simulations in this section have been archived along with those generated
from the parameter sweep~\cite{Wilde2020results}.

\subsection{Changes to overall patient arrivals}\label{subsec:arrivals}

Changes in overall patient arrivals to a queue reflect real-world scenarios
where some stimulus is improving (or worsening) the condition of the
patient population. Examples of stimuli could include an aging population
or independent life events that lead to a change in deprivation, such as an
accident or job loss. Within this model, overall patient arrivals are
altered using a scaling factor denoted by \(\sigma\in\mathbb{R}\). This
scaling factor is applied to the model by multiplying each cluster's
arrival rate by \(\sigma\). That is, for cluster \(i\), its new arrival
rate, \(\hat\lambda_i\), is given by:

\begin{equation}\label{eq:lambda}
    \hat\lambda_{i} = \sigma\lambda_i
\end{equation}

\begin{figure}
    \centering
    \begin{subfigure}{.5\imgwidth}
        \includegraphics[width=\linewidth]{lambda_time}
        \caption{}\label{fig:lambda_time}
    \end{subfigure}\hfill%
    \begin{subfigure}{.5\imgwidth}
        \includegraphics[width=\linewidth]{lambda_util}
        \caption{}\label{fig:lambda_util}
    \end{subfigure}
    \caption{%
        Plots of \(\sigma\) against relative
        (\subref{fig:lambda_time})~system time and
        (\subref{fig:lambda_util})~server utilisation.
    }\label{fig:lambda}
\end{figure}

Figure~\ref{fig:lambda} shows the effects of changing patient arrivals on
(\subref{fig:lambda_time})~relative system times and
(\subref{fig:lambda_util})~relative server utilisation over values of
\(\sigma\) from \input{tex/lambda_scaling_min} to
\input{tex/lambda_scaling_max} at a precision of
\input{tex/lambda_scaling_step}. Specifically, each plot in the figure (and
the subsequent figures in this section) shows the median and interquartile
range (IQR) of each relative attribute. These metrics provide an insight
into the experience of the average user (or server) in the system, and into
the stability or variation across the body of users (servers).
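To make the adjustment in Equation~\ref{eq:lambda} concrete, a minimal
sketch is given below; the cluster arrival rates shown are hypothetical
placeholders rather than the fitted values, and the full implementation
used for these experiments is part of the archived source
code~\cite{Wilde2020github}.

\begin{verbatim}
# Minimal sketch of the arrival scaling: multiply each cluster's arrival
# rate by sigma. The rates below are hypothetical placeholders only.
def scale_arrival_rates(rates, sigma):
    """Return the adjusted rates sigma * lambda_i for each cluster."""
    return [sigma * rate for rate in rates]

base_rates = [0.6, 1.4, 2.8, 0.4]              # hypothetical lambda_i values
scaled = scale_arrival_rates(base_rates, 1.2)  # a 20% increase in arrivals
\end{verbatim}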
What is evident from these plots is that the system behaves as one might
expect: as arrivals increase, the strain on the system increases. However,
the model also appears to have some slack relative to the base case.
Looking at Figure~\ref{fig:lambda_time}, for instance, the relative system
times (i.e.\ the relative lengths of stay for patients) remain unchanged up
to \(\sigma \approx 1.2\), or an approximate 20\% increase in arrivals of
COPD patients. Beyond that, relative system times rise to an untenable
point where the median time becomes orders of magnitude above the norm.
However, Figure~\ref{fig:lambda_util} shows that the situation for the
system's resources reaches its worst case near the start of that spike in
relative system times (at \(\sigma \approx 1.4\)). That is, the median
server utilisation reaches its maximum (corresponding to constant
utilisation) at this point, and the variation in server utilisation
disappears entirely.

\subsection{Changes to resource availability}\label{subsec:resources}

As is discussed in Section~\ref{sec:model}, the resource availability of
the system is captured by the number of parallel servers in the system,
\(c\). Therefore, to modify the overall resource availability, only the
number of servers need be changed. This kind of sensitivity analysis is
usually done to determine the opportunity cost of adding service capacity
to a system, e.g.\ would adding \(n\) servers sufficiently increase
efficiency without exceeding a budget? To reiterate the beginning of this
section, all suitable parameters are given in relative terms, including the
number of servers here. This makes changes in resource availability easier
to see, and avoids any concern as to what a particular number of servers
exactly reflects in the real world.

\begin{figure}
    \centering
    \begin{subfigure}{.5\imgwidth}
        \includegraphics[width=\linewidth]{servers_time}
        \caption{}\label{fig:servers_time}
    \end{subfigure}\hfill%
    \begin{subfigure}{.5\imgwidth}
        \includegraphics[width=\linewidth]{servers_util}
        \caption{}\label{fig:servers_util}
    \end{subfigure}
    \caption{%
        Plots of the relative number of servers against relative
        (\subref{fig:servers_time})~system time and
        (\subref{fig:servers_util})~server utilisation.
    }\label{fig:servers}
\end{figure}

Figure~\ref{fig:servers} shows how the relative resource availability
affects relative system times and server utilisation. In this scenario, the
relative number of servers took values from
\input{tex/num_servers_change_min} to \input{tex/num_servers_change_max} at
steps of \input{tex/num_servers_change_step} --- this is equivalent to a
step size of 1 in the actual number of servers. Overall, these figures
reinforce the claim from the previous scenario that there is some room to
manoeuvre within which the system runs `as normal', but that pressing on
those boundaries results in massive changes to both resource requirements
and system times. In Figure~\ref{fig:servers_time} this amounts to a
maximum of 20\% slack in resources before relative system times are
affected; further reductions quickly result in a potentially tenfold
increase in the median system time, and an increase of up to 50 times once
resource availability falls by 50\%. Moreover, the variation in the body of
the relative times (i.e.\ the IQR) decreases as resource availability
decreases.
The reality of this is that patients arriving at a hospital are forced to
consume larger amounts of resources (simply by being in a hospital)
regardless of their condition, putting added strain on the system.
Meanwhile, it appears that there is no tangible change in relative system
times given an increase in the number of servers. This indicates that the
model carries sufficient resources to cater to the population under normal
circumstances, and that adding service capacity will not necessarily
improve system times.

Again, Figure~\ref{fig:servers_util} shows that there is a substantial
change in the variation in the relative utilisation of the servers. In this
case, the variation dissipates as resource levels fall and increases as
they increase. While the relationship between real hospital resources and
the number of servers is not exact, having variation in server utilisation
would suggest that parts of the system may be configured or partitioned
away in the case of some significant public health event (such as a global
pandemic) without overloading the system.

\subsection{Moving arrivals between clusters}\label{subsec:moving}

Of the scenarios presented here, this is perhaps the most relevant to
actionable public health research. The clusters identified in this work
could be characterised by their clinical complexities and resource
requirements, as done in Section~\ref{subsec:overview}. Therefore, being
able to model the movement of some proportion of patient spells from one
cluster to another will reveal how those complexities and requirements
affect the system itself. The reality is then that if some public health
policy, informed by a model such as this, could be implemented to enact
that movement, then real change would be seen in the real system.

In order to model the effects of spells moving between two clusters, the
assumption is that services remain the same (and so does each cluster's
\(p_i\)) but that their arrival rates are altered according to some
transfer proportion. Consider two clusters indexed at \(i, j\), and their
respective arrival rates, \(\lambda_i, \lambda_j\), and let
\(\delta \in [0, 1]\) denote the proportion of arrivals to be moved from
cluster \(i\) to cluster \(j\). Then the new arrival rates for each
cluster, denoted by \(\hat\lambda_i, \hat\lambda_j\) respectively, are:

\begin{equation}\label{eq:moving}
    \hat\lambda_i = \left(1 - \delta\right) \lambda_i
    \quad \text{and} \quad
    \hat\lambda_j = \delta\lambda_i + \lambda_j
\end{equation}

By moving patient arrivals between clusters in this way, the overall level
of arrivals is left the same, since the sum of the arrival rates is
preserved. Hence, the (relative) effect on server utilisation and system
time can be measured independently of any change in overall demand.

Figures~\ref{fig:moving_time}~and~\ref{fig:moving_util} show the effect of
moving patient arrivals between clusters on relative system time and
relative server utilisation, respectively. In each figure, the median and
IQR for the corresponding attribute are shown, as in the previous
scenarios. Each scenario was simulated using values of \(\delta\) from
\input{tex/moving_clusters_min} to \input{tex/moving_clusters_max} at steps
of \input{tex/moving_clusters_step}.

Considering Figure~\ref{fig:moving_time}, it is clear that there are some
cases where reducing particular types of spells (by making them like
another type of spell) has no effect on overall system times: namely,
moving the high resource requirement spells that make up Cluster 0 and
Cluster 3 to any other cluster.
These clusters make up only 10\% of all arrivals, and this figure shows
that, in terms of system times, the model is able to handle them without
concern under normal conditions. The concern comes when either of the other
clusters moves to Cluster 0 or Cluster 3. Even as few as one in five of the
low complexity, low resource need arrivals in Cluster 2 moving to either
cluster results in large jumps in the median system time for all arrivals,
and soon after, as in the previous scenario, any variation in the system
times disappears, indicating an overwhelmed system.

With relative server utilisation, the story is much the same. The normal
levels of high complexity, high resource arrivals from Cluster 3 are
absorbed by the system, and moving these arrivals to another cluster has no
effect on resource consumption levels. Likewise, either of the low resource
need clusters moving even slightly toward high resource requirements
completely overruns the system's resources. However, the relative
utilisation levels of the system resources can be reduced by moving
arrivals from Cluster 0 to either Cluster 1 or Cluster 2, i.e.\ by reducing
the overall resource requirements of such spells.

In essence, this entire analysis offers two messages: that there are
several ways in which the system can deteriorate and even be overwhelmed
but, more importantly, that any meaningful impact on the system must come
from a stimulus outside of the system that results in healthier patients
arriving at the hospital. This is non-trivial; the first two scenarios in
this analysis show that there are no quick solutions to reduce the effect
of COPD patients on hospital capacity or length of stay. The only effective
intervention is found through inter-cluster transfers.

\begin{figure}
    \centering
    \includegraphics[width=\imgwidth]{moving_time}
    \caption{%
        Plots of proportions of each cluster moving to another against
        relative system time.
    }\label{fig:moving_time}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=\imgwidth]{moving_util}
    \caption{%
        Plots of proportions of each cluster moving to another against
        relative server utilisation.
    }\label{fig:moving_util}
\end{figure}
{ "alphanum_fraction": 0.7837649787, "avg_line_length": 53.2304526749, "ext": "tex", "hexsha": "e9531716d2fe82eec014ac780c4a5c14474c4355", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-03-23T20:29:08.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-23T20:29:08.000Z", "max_forks_repo_head_hexsha": "387a14f886d3c562228bb4be45abdd7ed996eda1", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "drvinceknight/copd-paper", "max_forks_repo_path": "sections/scenarios.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "387a14f886d3c562228bb4be45abdd7ed996eda1", "max_issues_repo_issues_event_max_datetime": "2022-03-24T12:15:13.000Z", "max_issues_repo_issues_event_min_datetime": "2020-06-28T13:59:15.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "drvinceknight/copd-paper", "max_issues_repo_path": "sections/scenarios.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "387a14f886d3c562228bb4be45abdd7ed996eda1", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "drvinceknight/copd-paper", "max_stars_repo_path": "sections/scenarios.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2954, "size": 12935 }