%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% HBN-POD2 Scientific Data Submission %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[fleqn,10pt,inline]{wlscirep} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{lineno} \usepackage{graphicx} \usepackage{siunitx} \usepackage[export]{adjustbox} \usepackage{subcaption} \usepackage{longtable} \usepackage{multirow} \usepackage{multicol} \usepackage{xargs} % Use more than one optional parameter in a new commands \usepackage[colorinlistoftodos,prependcaption]{todonotes} \PassOptionsToPackage{hyphens}{url} \PassOptionsToPackage{inline}{enumitem} \usepackage{hyperref} % \usepackage[nobiblatex]{xurl} % \interfootnotelinepenalty=\@m \graphicspath{{figures/}} \sisetup{% locale=US, group-minimum-digits=5, group-separator={,}, detect-all, } \linenumbers \title{An open, analysis-ready, and quality controlled resource for pediatric brain white-matter research} \author[1,$\dagger$,*]{Adam Richie-Halford} \author[2,3,$\dagger$,*]{Matthew Cieslak} \author[4]{Lei Ai} \author[1,5,6]{Sendy Caffarra} \author[2,3]{Sydney Covitz} \author[4,7]{Alexandre R. Franco} \author[5,8,9]{Iliana I. Karipidis} \author[10]{John Kruper} \author[4,7]{Michael Milham} \author[8]{B\'arbara Avelar-Pereira} \author[5]{Ethan Roy} \author[2,3]{Valerie J. Sydnor} \author[1,5]{Jason Yeatman} \author[12]{The Fibr Community Science Consortium} \author[2,3,$\ddagger$]{Theodore D. Satterthwaite} \author[10,11,$\ddagger$]{Ariel Rokem} \affil[1]{Stanford University, Division of Developmental and Behavioral Pediatrics, Stanford, California, 94305, USA} \affil[2]{University of Pennsylvania, Department of Psychiatry, Philadelphia, Pennsylvania, 19104, USA} \affil[3]{University of Pennsylvania, Lifespan Informatics and Neuroimaging Center, Philadelphia, Pennsylvania, 19104, USA} \affil[4]{Child Mind Institute, Center for the Developing Brain, New York City, New York, 10022, USA} \affil[5]{Stanford University, Graduate School of Education, Stanford, California, 94305, USA} \affil[6]{University of Modena and Reggio Emilia, Department of Biomedical, Metabolic and Neural Sciences, 41125 Modena, Italy} \affil[7]{Nathan Kline Institute for Psychiatric Research, Center for Biomedical Imaging and Neuromodulation, Orangeburg, New York, 10962, USA} \affil[8]{Stanford University, Department of Psychiatry and Behavioral Sciences, School of Medicine, Stanford, California, 94305, USA} \affil[9]{University of Zurich, Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, Zurich, 8032, Switzerland} \affil[10]{University of Washington, Department of Psychology, Seattle, Washington, 98195, USA} \affil[11]{University of Washington, eScience Institute, Seattle, Washington, 98195, USA} \affil[12]{The Fibr Community Science Consortium} \affil[*]{corresponding authors: Adam Richie-Halford ([email protected]), Matthew Cieslak ([email protected])} \affil[$\dagger$]{These authors contributed equally to this work} \affil[$\ddagger$]{These authors also contributed equally to this work} \begin{abstract} We created a set of resources to enable research based on openly-available diffusion MRI (dMRI) data from the Healthy Brain Network (HBN) study. First, we curated the HBN dMRI data (N=\num{2747}) into the Brain Imaging Data Structure and preprocessed it according to best-practices, including denoising and correcting for motion effects, susceptibility-related distortions, and eddy currents. 
Preprocessed, analysis-ready data was made openly available. Data quality plays a key role in the analysis of dMRI. To optimize QC and scale it to this large dataset, we trained a neural network through the combination of a small data subset scored by experts and a larger set scored by community scientists. The network performs QC highly concordant with that of experts on a held-out set (ROC-AUC $=0.947$). A further analysis of the neural network demonstrates that it relies on image features with relevance to QC. Altogether, this work both delivers resources to advance transdiagnostic research in brain connectivity and pediatric mental health, and establishes a novel paradigm for automated QC of large datasets.
\end{abstract}
\begin{document}
\flushbottom
\maketitle
\thispagestyle{empty}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Background \& Summary}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Childhood and adolescence are characterized by rapid dynamic change to human brain structure and function \cite{Lebel2018-oy}. This period of development is also a time during which the symptoms of many mental health disorders emerge \cite{Paus2008-gi}. Understanding how individual differences in brain development relate to the onset and progression of psychopathology inevitably requires large datasets \cite{Paus2010-qk, Fair2021-eg}. The Healthy Brain Network (HBN) is a landmark pediatric mental health study that is designed to eventually include MRI images along with detailed clinical and cognitive phenotyping from over \num{5000} New York City area children and adolescents \cite{alexander2017-yc}. The HBN dataset takes a trans-diagnostic approach and provides a broad range of phenotypic and brain imaging data for each individual. One of the brain imaging measurements acquired is diffusion MRI (dMRI), which is the dominant technology for inferring the physical properties of white matter \cite{wandell2016-qt}. The dMRI data is openly available in its raw form through the Functional Connectomes Project and the International Neuroimaging Data-Sharing Initiative (FCP-INDI), spurring collaboration on open and reproducible science \cite{Mennes2013-dl}. However, this raw, publicly available data requires extensive processing and quality assurance before it can be fruitfully analyzed. The most immediate contribution of the present work is a large openly-available analysis-ready dMRI data resource derived from the HBN dataset.
In the past decade, projects such as the Human Connectome Project (HCP) \cite{van-essen2013-oi}, UK Biobank \cite{miller2016-mq}, ABCD \cite{jernigan2018-my}, and CamCAN \cite{taylor2017-or,shafto2014-ld}, as well as FCP-INDI, have ushered in a culture of data sharing in open big-data human neuroscience. The adoption and reuse of these datasets reduces or eliminates the data collection burden on downstream researchers. Some projects, such as the HCP \cite{glasser2013-lo}, also provide preprocessed derivatives, further reducing researchers' burden and extending the benefits of data-sharing from data collection to preprocessing and secondary analysis. Following the example of the HCP, the present study provides analysis-ready dMRI derivatives from HBN. This avoids duplication of preprocessing effort and heterogeneity across preprocessing pipelines, while also ensuring a high standard of data quality for HBN researchers.
The analysis of a large, multi-site dMRI dataset must take into account the inevitable variability in scanning parameters across scanning sessions. Critical preprocessing steps, such as susceptibility distortion correction \cite{jones2010-ps}, require additional MRI acquisitions besides dMRI and accurate metadata accompanying each image. A session missing an acquisition or important metadata can either be processed to the extent its available data allows or excluded entirely. In addition, the quality of preprocessed data is heavily affected by differences in acquisition parameters \cite{yeh2019-kb} and by differences in preprocessing steps. Here we address these problems by meticulously curating the HBN data according to the Brain Imaging Data Structure (BIDS) \cite{gorgolewski2016-lh} and processing the data using the \emph{QSIPrep} \cite{cieslak2021-iq} BIDS App \cite{Gorgolewski2017-mb}. \emph{QSIPrep} automatically builds and executes benchmarked workflows that adhere to best practices in the field given the available BIDS data. The results include automated data quality metrics, visual reports, and a description of the processing steps automatically chosen to process each session. This preprocessing requires a costly compute infrastructure and is both time-consuming and error-prone. Requiring researchers to process dMRI data on their own introduces both a practical barrier to access and an extra source of heterogeneity into the data, devaluing its scientific utility. We provide the preprocessed data as a transparent and open resource, thereby reducing barriers to data access and allowing researchers to spend more of their time answering questions in brain development and psychopathology rather than recapitulating preprocessing.
In addition to requiring extensive preprocessing, dMRI data must be thoroughly checked for quality. dMRI measurements are susceptible to a variety of artifacts that affect the quality of the signals and the ability to make accurate inferences from them. In small studies with few participants, it is common to thoroughly examine the data from every participant as part of a quality control (QC) process. However, expert examination is time-consuming and is prohibitive in large datasets such as HBN. This difficulty could be ameliorated through the automation of QC. Given their success in other visual recognition tasks, machine learning and computer vision methods, such as convolutional deep artificial neural networks or ``deep learning'' \cite{lecun2015deep}, are promising avenues for automation of QC. However, one of the challenges of these new methods is that they require a large training dataset to attain accurate performance. In previous work, we demonstrated that deep learning can accurately emulate expert QC of T1-weighted (T1w) anatomical brain images \cite{keshavan2019-er}. To obtain a large enough training dataset of T1w images in our prior study, we deployed a community science tool\footnote{%
While the term ``citizen science'' evokes a sense of civic duty in scientific engagement, it can also imply a barrier for community members who want to contribute to science but may not be citizens of a particular country. In this manuscript we use the more modern term ``community science.''}
that collected quality control scores of parts of the dataset from volunteers through a web application. The scores were then calibrated using a gold standard expert-scored subset of these images.
A deep learning neural network was trained on the calibrated and aggregated score, resulting in very high concordance with expert ratings on a separate test dataset. We termed this approach ``hybrid QC'', because it combined information from experts with information from community scientists to create a scalable machine learning algorithm that can be applied to future data collection. However, the hybrid QC proof-of-concept left lingering questions about its applicability to other datasets because it was trained on a single-site, single-modality dataset. Here, we expand the hybrid-QC approach to a large multi-site dMRI dataset. Moreover, one of the common critiques of deep learning is that it can learn irrelevant features of the data and does not provide information that is transparent enough to interpret \cite{lipton2017doctor, salahuddin2022transparency, Zech2018-ki}. To confirm that the hybrid-QC deep learning algorithm uses meaningful features of the diffusion-weighted images (DWI) to perform accurate QC, we used machine learning interpretation methods that pry open the ``black box'' of the neural network, thereby highlighting the features that lead to a specific QC score \cite{sundararajan2017axiomatic, murdoch2019definitions}. Taken together, the combination of curated BIDS data, preprocessed images, and quality control scores generated by the deep learning algorithm provides researchers with a rich and accessible data resource. Making MRI derivatives accessible not only reduces the burden of processing large datasets for research groups with limited resources \cite{laird2021large}, but also aids research performed by clinicians who are interested in brain-behavior relationships but may be lacking the technical training to process large-scale dMRI data. We anticipate that these HBN Preprocessed Open Diffusion Derivatives (HBN-POD2) will accelerate translational research on both normal and abnormal brain development. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Methods} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The aims of this data resource were fourfold \begin{enumerate*}[% label=(\roman*),% before=\unskip{: },% itemjoin={{, }},% itemjoin*={{, and }}] \item curate the HBN MRI data into a fully BIDS-compliant MRI dataset \item perform state-of-the-art diffusion MRI (dMRI) preprocessing using \emph{QSIPrep} \item assign QC scores to each participant \item provide unrestricted public release to the outputs from each of these steps. \end{enumerate*} We started with MRI data from \num{2747} HBN participants available through FCP-INDI, curating these data for compliance with the Brain Imaging Data Structure (BIDS) specification \cite{gorgolewski2016-lh}. We preprocessed the structural MRI (sMRI) and diffusion MRI (dMRI) data using \emph{QSIPrep}. Participants that could not be curated to comply with the BIDS standard or that did not have dMRI data were excluded, resulting in \num{2134} participants with preprocessed, BIDS-compliant dMRI data (Figure~\ref{fig:hbn-sankey}). \begin{figure}[tbp] \centering \includegraphics[width=0.75\linewidth]{hbn-pod2-sankey.png} \caption{% {\bf HBN-POD2 data provenance}: Imaging data for \num{2747} participants, aged 5-21 years and collected at four sites in the New York City area, was made available through the Functional Connectomes Project and the International Neuroimaging Data-Sharing Initiative (FCP-INDI). 
%
These data were curated for compliance with the BIDS specification \cite{gorgolewski2016-lh} and availability of imaging metadata in JSON format. \num{2615} participants met this specification.
%
Imaging data was preprocessed using \emph{QSIPrep} \cite{cieslak2021-iq} to group, distortion correct, motion correct, denoise, coregister and resample MRI scans. Of the BIDS-curated participants, \num{2134} passed this step, with the majority of failures coming from participants with missing dMRI scans.
%
Expert raters assigned QC scores to \num{200} of these participants, creating a ``gold standard'' QC subset (Figure \ref{fig:expert-qc}). Community raters then assigned binary QC ratings to a superset of the gold standard containing \num{1653} participants. An image classification algorithm was trained on a combination of automated quality metrics from \emph{QSIPrep} and community scientist reviews to ``extend'' the expert ratings to the community science subset (Figure \ref{fig:fibr-qc}). Finally, a deep learning QC model was trained on the community science subset to assign QC scores to the entire dataset and to future releases from HBN (Figure \ref{fig:dl-qc}).
%
The HBN-POD2 dataset, including QC ratings, is openly available through FCP-INDI.
}
\label{fig:hbn-sankey}
\end{figure}
\subsection*{Inputs}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Inputs for this study consisted of MRI data from the Healthy Brain Network pediatric mental health study \cite{alexander2017-yc}, containing dMRI data from \num{2747} participants aged 5-21 years. These data were measured using a \qty{1.5}{\tesla} Siemens mobile scanner on Staten Island (SI, $N=300$) and three fixed \qty{3}{\tesla} Siemens MRI scanners at sites in the New York area: Rutgers University Brain Imaging Center (RU, $N=873$), the CitiGroup Cornell Brain Imaging Center (CBIC, $N=887$), and the City University of New York Advanced Science Research Center (CUNY, $N=74$), where numbers in parentheses represent participant counts in HBN-POD2. Informed consent was obtained from each participant aged 18 or older. For participants younger than 18, written consent was obtained from their legal guardians and written assent was obtained from the participant. Voxel resolution was \qty{1.8}{\mm} $\times$ \qty{1.8}{\mm} $\times$ \qty{1.8}{\mm} with \num{64} non-collinear directions measured for each of $b=1000$ \unit{\second \per \mm^{2}} and $b=2000$ \unit{\second \per \mm^{2}}. Figure~\ref{fig:metric-dist} depicts the age distribution of study participants by sex for each of these scan sites as well as pairwise distributions for the automated quality metrics that are described in the next sections.
\subsection*{BIDS curation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We curated the imaging metadata for \num{2615} of the \num{2747} currently available HBN participants. Using dcm2bids and custom scripts, we conformed the data to the Brain Imaging Data Structure (BIDS) \cite{gorgolewski2016-lh} specification. The BIDS-curated dataset is available on FCP-INDI and can be accessed via AWS S3 at \url{s3://fcp-indi/data/Projects/HBN/BIDS_curated/}. After conforming the data to BIDS, we used the ``Curation of BIDS'' (CuBIDS) package \cite{sydney-covitz2022-cubids} to identify unique combinations, or ``variants,'' of imaging parameters in the curated dMRI and fieldmap acquisitions. CuBIDS is a Python-based software package that provides a sanity-preserving workflow to help users reproducibly parse, validate, curate, and understand heterogeneous BIDS imaging datasets.
CuBIDS includes a robust implementation of the BIDS Validator that scales to large samples and incorporates DataLad \cite{halchenko2021datalad}, a distributed data management system, to ensure reproducibility and provenance tracking throughout the curation process. CuBIDS tools also employ agglomerative clustering to identify variants of imaging parameters. Each session was grouped according to metadata parameters that affect the dMRI signal (PhaseEncodingDirection, EchoTime, VoxelSize, FlipAngle, PhasePartialFourier, NumberOfVolumes, Fieldmap availability). We identified a total of 20 unique DWI acquisitions across HBN-POD2, where about 5\% of acquisitions were different from the most common DWI acquisition at their site.
\subsection*{Preprocessing}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We performed dMRI preprocessing on \num{2615} participants, using \emph{QSIPrep} \cite{cieslak2021-iq} 0.12.1, which is based on \emph{Nipype} 1.5.1 \cite{nipype1,nipype2}, RRID:SCR\_002502. \emph{QSIPrep} is a robust and scalable pipeline to group, distortion correct, motion correct, denoise, coregister and resample MRI scans. In total, \num{417} participants failed this preprocessing step, largely due to missing dMRI files. In keeping with the BIDS specification, the preprocessed dataset is available as a derivative dataset within the BIDS-curated dataset and can be accessed on AWS S3 at \url{s3://fcp-indi/data/Projects/HBN/BIDS_curated/derivatives/qsiprep/}. \emph{QSIPrep} fosters reproducibility by automatically generating thorough methods boilerplate for later use in scientific publications, which we use for the remainder of this subsection to document each preprocessing step.
\begin{itemize}
\item {\it Anatomical data preprocessing} The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) using \texttt{N4BiasFieldCorrection} \cite{n4} (ANTs 2.3.1), and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped using \texttt{antsBrainExtraction.sh} (ANTs 2.3.1), using OASIS as target template. Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c (RRID:SCR\_008796)\cite{mni} was performed through nonlinear registration with \texttt{antsRegistration} (ANTs~2.3.1, RRID:SCR\_004757)\cite{ants}, using brain-extracted versions of both T1w volume and template. Brain tissue segmentation of cerebrospinal fluid (CSF), white-matter (WM) and gray-matter (GM) was performed on the brain-extracted T1w using \texttt{FAST} (FSL 6.0.3:b862cdd5, RRID:SCR\_002823)\cite{fsl-fast}.
\item {\it Diffusion data preprocessing} Any images with a $b$-value less than \qty{100}{\second \per \mm^{2}} were treated as $b=0$ images. MP-PCA denoising as implemented in MRtrix3's \texttt{dwidenoise}~\cite{dwidenoise1} was applied with a 5-voxel window. After MP-PCA, B1 field inhomogeneity was corrected using \texttt{dwibiascorrect} from MRtrix3 with the N4 algorithm \cite{n4}. After B1 bias correction, the mean intensity of the DWI series was adjusted so that the mean intensity of the $b=0$ images matched across each separate DWI scanning sequence. FSL (version 6.0.3:b862cdd5)'s eddy was used for head motion correction and Eddy current correction \cite{anderssoneddy}. Eddy was configured with a \(q\)-space smoothing factor of 10, a total of 5 iterations, and \num{1000} voxels used to estimate hyperparameters. A linear first level model and a linear second level model were used to characterize Eddy current-related spatial distortion.
\(q\)-space coordinates were forcefully assigned to shells. Field offset was attempted to be separated from participant movement. Shells were aligned post-eddy. Eddy's outlier replacement was run \cite{eddyrepol}. Data were grouped by slice, only including values from slices determined to contain at least \num{250} intracerebral voxels. Groups deviating by more than four standard deviations from the prediction had their data replaced with imputed values. Data was collected with reversed phase-encode blips, resulting in pairs of images with distortions going in opposite directions. Here, $b=0$ reference images with reversed phase encoding directions were used along with an equal number of $b=0$ images extracted from the DWI scans. From these pairs the susceptibility-induced off-resonance field was estimated using a method similar to that described in \cite{topup}. The fieldmaps were ultimately incorporated into the Eddy current and head motion correction interpolation. Final interpolation was performed using the \texttt{jac} method. Several confounding time-series were calculated based on the \emph{preprocessed DWI}: framewise displacement (FD) using the implementation in \emph{Nipype} following the definitions by \cite{power-fd-dvars}. The DWI time-series were resampled to ACPC, and their corresponding gradient directions were rotated accordingly to generate a \emph{preprocessed DWI run in ACPC space}. \end{itemize} \noindent Many internal operations of \emph{QSIPrep} use \emph{Nilearn} 0.6.2 \cite{nilearn}, RRID:SCR\_001362 and \emph{DIPY} \cite{dipy}. For more details of the pipeline, see \href{https://qsiprep.readthedocs.io/en/latest/workflows.html}{the section corresponding to workflows in \emph{QSIPrep}'s documentation}. \subsection*{Cloud-based distributed preprocessing} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The containerization of \emph{QSIPrep} provided a consistent preprocessing pipeline for each participant but the number of participants made serial processing of each participant prohibitive on a single machine. We used \emph{cloudknot}, a previously developed cloud-computing library \cite{cloudknot} to parallelize the preprocessing over individual participants on spot instances in the Amazon Web Services Batch service. \emph{Cloudknot} takes as input a user-defined Python function and creates the necessary AWS infrastructure to map that function onto a range of inputs, in this case, the participant IDs. Using \emph{cloudknot} and AWS Batch Spot Instances, the preprocessing cost less than \textdollar1.00 per participant. \subsection*{Quality Control} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% To QC all available HBN dMRI data, we adopted a hybrid QC approach that combines expert rating, community science, and deep learning, drawing on the success of a previous application in assessing the quality of HBN's structural T1w MRI data \cite{keshavan2019-er}. This method \begin{enumerate*}[% label=(\roman*),% before={{ }},% itemjoin={{; }},% itemjoin*={{ and }}] \item starts with dMRI expert raters labelling a small subset of participants, the ``gold standard'' dataset \item amplifies these labels using a community science web application to extend expert ratings to a much larger subset of the data, the community science subset \item trains a deep learning model on the community science subset to predict expert decisions on the entire dataset. 
\end{enumerate*}
\subsubsection*{Expert quality control}
The expert QC ``gold standard'' subset was created by randomly selecting 200 participants from the preprocessed dataset, sampled such that the proportional site distribution in the gold standard subset matched that of the preprocessed dataset. We then developed \emph{dmriprep-viewer}, a dMRI data viewer and QC rating web application, to display \emph{QSIPrep} outputs and collect expert ratings \cite{richie-halford2022-viewer}. The viewer ingests \emph{QSIPrep} outputs and generates a browser-based interface for expert QC. It provides a study overview displaying the distributions of \emph{QSIPrep}'s automated data quality metrics (described at \url{https://qsiprep.readthedocs.io/en/latest/preprocessing.html#quality-control-data}). Each datum on the study overview page is interactively linked to a participant-level QC page that provides an interactive version of \emph{QSIPrep}'s visual reports (described at \url{https://qsiprep.readthedocs.io/en/latest/preprocessing.html#visual-reports}). The viewer allows users to assign a rating of $-2$ (definitely fail), $-1$ (probably fail), $0$ (not sure), $1$ (probably pass), or $2$ (definitely pass) to a participant. To standardize rater expectations before rating, expert raters watched a tutorial video (available on YouTube at \url{https://youtu.be/SQ0v-O-e5b8} and in the OSF project). Six of the co-authors, who are all dMRI experts, rated the gold standard subset using extensive visual examination of each participant's dMRI data, including the preprocessed diffusion-weighted imaging (DWI) time series, a plot of motion parameters throughout the DWI scan, and full 3D volumes depicting
\begin{enumerate*}[%
label=(\roman*),%
before={{ }},%
itemjoin={{, }},%
itemjoin*={{ and }}]
\item the brain mask and $b=0$ to T1w registration
\item a directionally encoded color fractional anisotropy (DEC-FA) image laid over the $b=0$ volume.
\end{enumerate*}
See Appendix~\ref{app:web-apps} for an example of the \emph{dmriprep-viewer} interface. The distribution of scores given by the experts demonstrates that the gold standard dataset included a range of data quality (Figure~\ref{fig:expert-qc:scatter:hist}). Mean expert ratings correlated with the three \emph{QSIPrep} automated QC metrics that were most informative for the XGB model described in the next section: neighboring DWI correlation \cite{yeh2019-kb} (Figure~\ref{fig:expert-qc:scatter:ndc}), maximum relative translation (Figure~\ref{fig:expert-qc:scatter:translation}), and number of outlier slices (Figure~\ref{fig:expert-qc:scatter:outliers}). The neighboring DWI correlation characterizes the spatial correlation between pairs of DWI volumes that sample neighboring points in $q$-space. Since lower values indicate reduced data quality, it is reassuring that the neighboring DWI correlation correlated directly with expert ratings (Pearson CC: $0.797$). Conversely, high relative translation and a high number of motion outlier slices reflect poor data quality, and these metrics were inversely related to mean expert rating (Pearson CC: $-0.692$ and Pearson CC: $-0.695$, respectively). In addition to agreeing qualitatively with \emph{QSIPrep}'s automated QC metrics on average, the expert raters also tended to agree with each other (Figure~\ref{fig:expert-qc:irr}).
We assessed inter-rater reliability (IRR) using the pairwise Cohen's $\kappa$ \cite{di-eugenio2004-bb}, computed using the \emph{scikit-learn} \cite{scikit-learn} \texttt{cohen\_kappa\_score} function with quadratic weights. The pairwise $\kappa$ exceeded 0.52 in all cases, with a mean value of 0.648. In addition to the pairwise Cohen's $\kappa$, we also computed the intra-class correlation (ICC) \cite{hallgren2012-ze} as a measure of IRR, using the \emph{pingouin} statistical package \cite{vallat2018pingouin} \texttt{intraclass\_corr} function. ICC3k is the appropriate variant of the ICC to use when a fixed set of $k$ raters each code an identical set of participants, as is the case here. ICC3k for inter-rater reliability among the experts was 0.930 (95\% CI: [0.91, 0.94]), which is qualitatively considered an ``excellent'' level of IRR \cite{Cicchetti1994-fz}. The high IRR provides confidence that the average of the expert ratings for each image in the gold standard is an appropriate target to use for training a machine learning model that predicts the expert scores. \begin{figure}[tbp] {\phantomsubcaption\label{fig:expert-qc:scatter:hist}} {\phantomsubcaption\label{fig:expert-qc:scatter:ndc}} {\phantomsubcaption\label{fig:expert-qc:scatter:translation}} {\phantomsubcaption\label{fig:expert-qc:scatter:outliers}} {\phantomsubcaption\label{fig:expert-qc:irr}} \begin{subfigure}{.6\linewidth} \centering \includegraphics[width=\linewidth]{community-qc/expert-qsiprep-pairplot.pdf} \end{subfigure} \begin{subfigure}{.4\linewidth} \centering \includegraphics[width=\linewidth]{community-qc/expert-raters-cohens-kappa.pdf} \end{subfigure} \caption{% {\bf Expert QC results}: Six dMRI experts rated a subset of \num{200} participants. Experts agreed with \emph{QSIPrep}'s automated QC metrics. Here we show the distribution of mean expert QC ratings \textbf{(a)} and associations between the mean expert QC rating and the \emph{QSIPrep} metrics \textbf{(b)} neighboring diffusion-weighted imaging (DWI) correlation \cite{yeh2019-kb}, \textbf{(c)} maximum relative translation, and \textbf{(d)} number of outlier slices. As expected, neighboring DWI correlation is directly correlated with expert rating while the other two metrics are inversely correlated with expert rating. % \textbf{(e)} Experts agreed with each other. Here we show the pairwise Cohen's $\kappa$ measure of inter-rater reliability (see text for ICC calculations). The XGB model has an inter-rater reliability (quantified here as Cohen's $\kappa$) that is indistinguishable from the other raters } \label{fig:expert-qc} \end{figure} \subsubsection*{Community scientist quality control} Although the expert raters achieved high IRR and yielded intuitive associations with \emph{QSIPrep}'s automated QC metrics, generating expert QC labels for the entire HBN-POD2 dataset would be prohibitively time consuming. To assess the image quality of the remaining participants, we deployed \emph{Fibr} (\url{https://fibr.dev}), a community science web application in which users assigned binary pass/fail labels assessing the quality of horizontal slice DEC-FA images overlaid on the $b=0$ image (see Appendix~\ref{app:web-apps} for an example). Specifically, after a brief tutorial, \emph{Fibr} users saw individual slices or an animated sequence of ten slices taken from the entire DEC-FA volume that the expert raters saw. 
The \emph{Fibr} users, therefore, saw only a subset of the imaging data that the dMRI experts had access to for a given participant, but they saw data from many more participants. In total, \num{374} community scientists provided \num{587778} ratings for a mean of $>50$ ratings per slice (or $>200$ ratings per participant) from \num{1653} participants. Of the community scientists, \num{145} raters provided $>3,000$ ratings each and are included in the \emph{Fibr} Community Science Consortium as co-authors on this paper \cite{Ward-Fear2020-zq} (see Appendix~\ref{app:fibr-consortium} for a list of consortium co-authors).
There are three issues to account for when comparing \emph{Fibr} and expert QC ratings. First, the unadjusted \emph{Fibr} ratings were overly optimistic; i.e., on average, community scientists were not as conservative as the expert raters (Figure~\ref{fig:fibr-qc:scatter:fibr}). Second, different community scientists provided data of differing accuracy; that is, they were less consistent across different views of the same image, and/or were less consistent with expert ratings for the same data. This means that data from some \emph{Fibr} raters was more informative than others. Third, important information about data quality was provided in the \emph{QSIPrep} data quality metrics, which were not available to \emph{Fibr} raters. To account for rater variability and take advantage of the information provided by \emph{QSIPrep}, we trained gradient boosted decision trees \cite{chen2016-eb} to predict expert scores, scaled to the range $[0, 1]$ and binarized with a $0.5$ threshold, based on a combination of community science ratings and 31 automated \emph{QSIPrep} data quality metrics. See Appendix~\ref{app:feature-importance} for a list of these automated metrics and a measure of their global feature importance in the gradient boosting models. One can think of the gradient boosting model as assigning more weight to \emph{Fibr} raters who reliably agree with the expert raters, thereby resolving the aforementioned issues with community rater accuracy. We refer to this gradient boosting model as XGB.
\begin{figure}[tbp]
{\phantomsubcaption\label{fig:fibr-qc:scatter:fibr}}
{\phantomsubcaption\label{fig:fibr-qc:scatter:xgb}}
{\phantomsubcaption\label{fig:fibr-qc:roc}}
\begin{subfigure}{.63\linewidth}
\centering
\includegraphics[width=\linewidth]{community-qc/fibr-rating-scatter-plot.pdf}
\end{subfigure}
\begin{subfigure}{.37\linewidth}
\centering
\includegraphics[width=\linewidth]{community-qc/xgb-roc-curve.pdf}
\end{subfigure}
\caption{%
{\bf Community science predictions of the expert ratings}: Scatter plots showing the relationship between mean expert rating and both mean \emph{Fibr} rating \textbf{(a)} and XGB prediction \textbf{(b)}. \emph{Fibr} raters overestimated the quality of images compared to expert raters, but the XGB prediction compensated for this by incorporating automated QC metrics and giving more weight to more reliable \emph{Fibr} raters.
%
\textbf{(c)} ROC curves for the XGB, XGB-q, and XGB-f models. Translucent bands represent one standard deviation from the mean of the cross-validation splits.
}
\label{fig:fibr-qc}
\end{figure}
All gradient boosting models were implemented as binary classifiers using the XGBoost library \cite{xgboost}. The targets for these classifiers were the mean expert ratings in the gold standard dataset, rescaled to the range $[0, 1]$ and binarized with a threshold of $0.5$.
Using repeated stratified K-fold cross-validation, with three splits and two repeats, we evaluated the models' performance in predicting the gold standard ratings. In each fold, the best model hyperparameters were chosen using the scikit-optimize \cite{scikit-optimize} \texttt{BayesSearchCV} class. Since each split resulted in a different XGB model and we required a single QC score to train the deep learning model, we combined the models from each cross-validation split using a voting classifier, computing a weighted average of the predicted probability of passing from each model, weighted by its out-of-sample ROC-AUC. This was implemented using scikit-learn's \texttt{VotingClassifier} class.
To clarify the contributions of the automated QC metrics and the community science raters, we trained two additional gradient boosting models
\begin{enumerate*}[%
label=(\roman*),%
before=\unskip{: },%
itemjoin={{, }},%
itemjoin*={{ and }}]
\item one trained only on the automated \emph{QSIPrep} data quality metrics, which we call XGB-q
\item one trained on only the \emph{Fibr} ratings, which we call XGB-f.
\end{enumerate*}
XGB-f may be viewed as a data-driven weighting of community scientists' ratings, while XGB-q may be viewed as a generalization of data quality metric exclusion criteria. XGB, combining information from both \emph{Fibr} ratings and \emph{QSIPrep} data quality metrics, attained a cross-validated area under the receiver operating characteristic curve (ROC-AUC) of $0.96 \pm 0.01$ on the ``gold standard,'' where the $\pm$ indicates the standard deviation of scores from repeated $k$-fold cross-validation (Figure~\ref{fig:fibr-qc:scatter:xgb}). In contrast, XGB-q attained an ROC-AUC of $0.91 \pm 0.03$ and XGB-f achieved an ROC-AUC of $0.84 \pm 0.04$. The enhanced performance of XGB-q over XGB-f shows that community scientists alone are not as accurate as automated data quality metrics are at predicting expert ratings. And yet, the increased performance of XGB over XGB-q demonstrates that there is additional image quality information to be gained by incorporating community scientist input.
As a way of evaluating the quality of the XGB predictions, consider the fact that the average Cohen's $\kappa$ between XGB and the expert raters was 0.74, which is higher than the average Cohen's $\kappa$ between any of the other raters and their human peers (Figure~\ref{fig:expert-qc}). This is not surprising, given that the XGB model was fit to optimize this match, but further demonstrates the goodness of fit of this model. Nevertheless, this provides confidence in using the XGB scores in the next step of analysis, where we treat the XGB model as an additional coder and extend XGB ratings to participants without \emph{Fibr} ratings. In this case, when a subset of participants is coded by multiple raters and the reliability of their ratings is meant to generalize to other participants rated by only one coder, the single-measure ICC3, as opposed to ICC3k, should be used. When adding XGB to the existing expert raters as a seventh expert, we achieved ICC3 $= 0.709$ (95\% CI: [0.66, 0.75]). The high ICC3 value after inclusion of the XGB model justifies using the XGB scores as the target for training an image-based deep learning network.
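For illustration, a minimal sketch of this cross-validation and ensembling procedure is shown below, using scikit-learn, scikit-optimize, and XGBoost. The feature matrix \texttt{X} and labels \texttt{y} are hypothetical placeholders standing in for the \emph{Fibr} ratings, the \emph{QSIPrep} data quality metrics, and the binarized mean expert ratings, and the hyperparameter search space is illustrative rather than the exact configuration used to produce the released scores.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold
from skopt import BayesSearchCV
from xgboost import XGBClassifier

# Hypothetical placeholders: X would hold Fibr ratings and/or QSIPrep data
# quality metrics; y would hold the binarized mean expert ratings.
rng = np.random.default_rng(0)
X = rng.random((200, 40))
y = rng.integers(0, 2, 200)

search_space = {"max_depth": (2, 8), "learning_rate": (1e-3, 0.3, "log-uniform")}
cv = RepeatedStratifiedKFold(n_splits=3, n_repeats=2, random_state=0)

fold_models, fold_weights = [], []
for fold, (train, test) in enumerate(cv.split(X, y)):
    # Tune hyperparameters within the training portion of each split.
    search = BayesSearchCV(XGBClassifier(), search_space, n_iter=16, cv=3,
                           random_state=0)
    search.fit(X[train], y[train])
    # The out-of-sample ROC-AUC of the tuned model becomes its voting weight.
    auc = roc_auc_score(y[test], search.predict_proba(X[test])[:, 1])
    fold_models.append((f"xgb_fold_{fold}", search.best_estimator_))
    fold_weights.append(auc)

# Soft-voting ensemble: a weighted average of each fold model's predicted
# probability of passing QC, weighted by its out-of-sample ROC-AUC.
xgb_ensemble = VotingClassifier(fold_models, voting="soft", weights=fold_weights)
xgb_ensemble.fit(X, y)
xgb_qc_score = xgb_ensemble.predict_proba(X)[:, 1]
\end{verbatim}
Note that \texttt{VotingClassifier} refits the tuned estimators on the data passed to \texttt{fit}; weighting each fold's model by its out-of-sample ROC-AUC lets better-performing folds contribute more to the final pass probability.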
\subsubsection*{Automated quality control labelling through deep learning}
While the XGB ``rater'' does a good job of extending QC ratings to the entire community science subset, this approach requires \emph{Fibr} scores; without community science \emph{Fibr} scores, only the less accurate XGB-q prediction can be employed. Consequently, a new, fully automated QC approach is needed that can be readily applied to future data releases from HBN. We therefore trained deep convolutional neural networks to predict binarized XGB ratings directly from \emph{QSIPrep} outputs. We modified an existing 3D convolutional neural network (CNN) architecture \cite{zunair2020-bs}---previously applied to the ImageCLEF Tuberculosis Severity Assessment 2019 benchmark \cite{dicente2019clef}---to accept multichannel input generated from the preprocessed dMRI: the $b=0$ reference diffusion image, each of the three cardinal axis components of the DEC-FA image, and, optionally, automated QC metrics from \emph{QSIPrep}. We trained these networks on XGB scores and validated them against the gold standard expert-scored dataset. We refer to the convolutional neural network model trained only on imaging data as CNN-i and the model that incorporates automated QC metrics as CNN-i+q. These model architectures and training procedures are described in more detail in Appendix \ref{app:deep-learning-architectures}. The two models performed nearly identically and achieved an ROC-AUC of $0.947 \pm 0.004$ (Figure~\ref{fig:dl-qc:roc}). The near-identical performance suggests that \emph{QSIPrep}'s automated data quality metrics provided information that was redundant with information available in the imaging data. Both CNN-i and CNN-i+q outperformed XGB-q, which was trained only on automated QC metrics, but both modestly underperformed relative to the full XGB model, which uses \emph{Fibr} scores in addition to the \emph{QSIPrep} data quality metrics.
\begin{figure}[!t]
{\phantomsubcaption\label{fig:dl-qc:roc}}
{\phantomsubcaption\label{fig:dl-qc:joint}}
{\phantomsubcaption\label{fig:dl-qc:hist:sex}}
{\phantomsubcaption\label{fig:dl-qc:hist:site}}
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=\linewidth]{deep-learning-qc/dl_roc_auc_curve.pdf}
\end{subfigure}
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=\linewidth]{bundle-profiles/qc-age-jointplot.pdf}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{bundle-profiles/qc-hist.pdf}
\end{subfigure}
\caption{%
{\bf Deep learning QC scores}: \textbf{(a)} ROC curves for two deep learning models trained on imaging data: one trained with additional automated data quality metrics from \emph{QSIPrep} (blue) and one trained without (orange). The models performed roughly identically, reflecting that the data quality metrics are derived from the imaging data and are therefore redundant. Both outperformed the XGB-q predictions, indicating the added value of the diffusion-weighted images. However, both models underperformed the XGB predictions, which also incorporate information from \emph{Fibr} ratings for each scan. The error bands represent one standard deviation from the mean of the cross-validation splits.
%
\textbf{(b)} Joint distributions showing a direct association between age and QC score (Pearson CC: $0.31$). This likely reflects the well-known negative association between age and head motion in pediatric neuroimaging.
The dots encode the mean QC score for each year of age with error bands representing a bootstrapped 95\% confidence interval. The line depicts a linear regression relating age and QC score with translucent bands encoding a bootstrapped 95\% confidence interval.
%
Histograms showing the relationship between participants' QC scores and their sex \textbf{(c)} and scan site \textbf{(d)}. QC distributions are independent of sex and scanning site.
}
\label{fig:dl-qc}
\end{figure}
The openly available HBN-POD2 data released with this paper provides four QC ratings: the mean expert QC ratings, the XGB-q and XGB predicted scores, as well as the CNN-i predicted score. However, we treat the CNN-i score as the definitive QC score because it is available for all participants, can be easily calculated for new participants in future HBN releases, and is more accurate than XGB-q in predicting expert ratings in the ``gold standard'' report set. When we refer to a participant's QC score without specifying a generating model, the CNN-i score is assumed. Figure~\ref{fig:dl-qc} depicts the distribution of these QC scores by age (Figure~\ref{fig:dl-qc:joint}), sex (Figure~\ref{fig:dl-qc:hist:sex}), and scanning site (Figure~\ref{fig:dl-qc:hist:site}). QC distributions are similar for each scan site and for male and female participants\footnote{%
Responses for the sex variable in HBN phenotypic data are limited to ``male'' and ``female.''}.
\subsubsection*{Tractometry}
To further validate the importance of quality control, we used tract profiling \cite{yeatman2012-rc,jones2005pasta,colby2012along,odonnell2009tract, kruper2021evaluating}, which is a subset of tractometry \cite{jones2005pasta,bells2011tractometry}. In particular, tract profiling uses the results of dMRI tractography to quantify properties of the white matter along major pathways. We used the Python Automated Fiber Quantification toolbox (pyAFQ) as previously described \cite{kruper2021evaluating}. Briefly, probabilistic tractography was performed using constrained spherical deconvolution fiber orientation distribution functions \cite{tournier2008csd}, as implemented in DIPY \cite{dipy}. Twenty-four major tracts, which are enumerated in Figure~\ref{fig:qc-profiles:md}, were identified using multiple criteria: inclusion ROIs and exclusion ROIs \cite{Wakana2007-nw}, combined with a probabilistic atlas \cite{Hua2008-di}. Each streamline was resampled to 100 nodes, and the robust mean at each location was calculated by estimating the 3D covariance of the location of each node and excluding streamlines that are more than 5 standard deviations from the mean location in any node. Finally, a bundle profile of tissue properties in each bundle was created by interpolating the value of MRI maps of these tissue properties to the location of the nodes of the resampled streamlines designated to each bundle. In each of 100 nodes, the values were summed across streamlines, weighting the contribution of each streamline by the inverse of the Mahalanobis distance of the node from the average of that node across streamlines. Bundle profiles of mean diffusivity (MD) and fractional anisotropy (FA) from the diffusional kurtosis imaging (DKI) model \cite{jensen2005-ta}, implemented in DIPY \cite{Henriques2021-lk}, were used in technical validation of the data and evaluation of the impacts of QC.
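To make the profile weighting concrete, the following schematic NumPy sketch computes a weighted tract profile for a single bundle and tissue property; the per-streamline node values and node coordinates are hypothetical placeholders, and pyAFQ performs this computation internally, so the sketch is intended only to clarify the weighting scheme described above.
\begin{verbatim}
import numpy as np

def weighted_bundle_profile(node_values, node_coords):
    """Schematic weighted tract profile for one bundle.

    node_values: (n_streamlines, n_nodes) tissue-property samples (e.g., FA).
    node_coords: (n_streamlines, n_nodes, 3) node coordinates after resampling.
    """
    n_streamlines, n_nodes = node_values.shape
    profile = np.zeros(n_nodes)
    for node in range(n_nodes):
        coords = node_coords[:, node, :]            # (n_streamlines, 3)
        delta = coords - coords.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(coords.T))  # 3x3 covariance of node locations
        # Mahalanobis distance of each streamline's node from the mean location.
        d2 = np.einsum("ij,jk,ik->i", delta, cov_inv, delta)
        d = np.sqrt(np.maximum(d2, 0.0))
        weights = 1.0 / np.maximum(d, 1e-6)         # inverse-distance weights
        weights /= weights.sum()
        profile[node] = np.sum(weights * node_values[:, node])
    return profile

# Hypothetical example: 500 streamlines resampled to 100 nodes each.
rng = np.random.default_rng(0)
fa_profile = weighted_bundle_profile(
    node_values=rng.random((500, 100)),
    node_coords=rng.normal(size=(500, 100, 3)),
)
\end{verbatim}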
We used the previously mentioned \emph{cloudknot} cloud-computing library \cite{cloudknot} to parallelize the pyAFQ tractometry pipeline over individual participants on spot instances in the Amazon Web Services Batch service.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Data Records}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{Curated imaging data}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Curated BIDS data and their corresponding \emph{QSIPrep} outputs are provided in the FCP-INDI Amazon Web Services (AWS) S3 bucket as indicated in Table~\ref{tab:data-records}. This public resource can be accessed by anyone using standard S3 access tools. The processed diffusion derivatives are \href{https://qsiprep.readthedocs.io/en/latest/preprocessing.html#outputs-of-qsiprep}{standard \emph{QSIPrep} outputs}, which contain preprocessed imaging data along with the corresponding QC metrics:
\begin{itemize}
\item \emph{Anatomical Data} Preprocessed images, segmentations and transforms for spatial normalization are located in the \texttt{anat/} directory of each session. The gray matter, white matter and cerebrospinal fluid (\texttt{GM}, \texttt{WM}, \texttt{CSF}) probabilistic segmentations are provided in NIfTI format with the \texttt{\_probtissue} suffix. The deterministic segmentation is in \texttt{\_dseg.nii.gz}. All images are in alignment with the AC-PC-aligned \texttt{sub-X\_desc-preproc\_T1w.nii.gz} image unless they have \texttt{space-MNI152NLin2009cAsym} in their file name, in which case they are aligned to the MNI Nonlinear T1-weighted asymmetric brain template (version 2009c)\cite{Fonov2009-ze}. The spatial transform between the AC-PC T1w image and MNI space is in the ITK/ANTs format file named \texttt{sub-X\_from-MNI152NLin2009cAsym\_to-T1w\_mode-image\_xfm.h5}. The brain mask from \texttt{ANTsBrainExtraction.sh} is included in the file with the \texttt{\_desc-brain\_mask.nii.gz} suffix.
\item \emph{Diffusion Data} The preprocessed dMRI scan and accompanying metadata are in the \texttt{dwi} directory of each session. The fully preprocessed dMRI data follows the naming pattern \texttt{sub-X\_space-T1w\_desc-preproc\_dwi.nii.gz}. These images all have an isotropic voxel size of \qty{1.7}{\mm} and are aligned in world coordinates with the anatomical image located at \texttt{anat/sub-X\_desc-preproc\_T1w.nii.gz}. Gradient information is provided in \texttt{bval/bvec} format, compatible with DIPY and DSI Studio, and in the \texttt{.b} format, compatible with MRtrix3. Volume-wise QC metrics including head motion parameters are included in the \texttt{confounds.tsv} file. Automatically computed quality measures for the entire image series are provided in the \texttt{ImageQC.csv} file, which includes the neighboring DWI correlation, the number of bad slices, and head motion summary statistics. Figure~\ref{fig:metric-dist} depicts pairwise distributions for the three of these automated data quality metrics that were most informative in the QC models described later (see Appendix~\ref{app:feature-importance} for further details). The \texttt{desc-brain\_mask} file is a dMRI-based brain mask that should only be used when the T1w-based brain mask is inappropriate (i.e. when no susceptibility distortion correction has been applied).
\end{itemize} \begin{figure}[tbp] {\phantomsubcaption\label{fig:metric-dist:age}} {\phantomsubcaption\label{fig:metric-dist:ndc-slices}} {\phantomsubcaption\label{fig:metric-dist:ndc-translation}} {\phantomsubcaption\label{fig:metric-dist:slices-translation}} \centering \includegraphics[width=\linewidth]{bundle-profiles/qsiprep-metric-distributions.pdf} \caption{% {\bf Demographic and \emph{QSIPrep} quality metric distributions}: \textbf{(a)} HBN age distributions by sex for each scanning site. Dashed lines indicate age quartiles. % The remaining plots show associations between \textbf{(b)} neighboring diffusion-weighted imaging (DWI) correlation \cite{yeh2019-kb} and the number of outlier slices, \textbf{(c)} neighboring DWI correlation and maximum relative translation, and \textbf{(d)} the number of outlier slices and maximum relative translation. % The number of outlier slices is positively associated with the maximum relative translation, while neighboring DWI correlation is negatively associated with the other two metrics. % These plots are colored by age, and reveal that older participants generally have higher quality data. } \label{fig:metric-dist} \end{figure} \begin{longtable}{p{4.85cm}p{2.6cm}p{9cm}} \caption{HBN-POD2 data records} \label{tab:data-records} \\ \toprule Data Resource & Repository & Location \\ \midrule \endfirsthead \toprule Data Resource & Repository & Location \\ \midrule \endhead \midrule \multicolumn{2}{r}{{Continued on next page}} \\ \midrule \endfoot \hline \hline \multicolumn{3}{l}{{$\dagger$ FCP-INDI: all paths are relative to the root \url{s3://fcp-indi/data/Projects/HBN/BIDS_curated/}}} \\ \multicolumn{3}{l}{{* participants.tsv: located on FCP-INDI at relative path \texttt{derivatives/qsiprep/participants.tsv}}} \\ \multicolumn{3}{l}{{$\ddagger$ OSF: DOI \href{https://doi.org/10.17605/OSF.IO/8CY32}{10.17605/OSF.IO/8CY32}, all paths are relative to the root \texttt{HBN-POD2 QC/OSF Storage}}} \\ \bottomrule \endlastfoot BIDS Curated Imaging & FCP-INDI\textsuperscript{$\dagger$} & \texttt{/} \\ \emph{QSIPrep} preprocessed DWI & FCP-INDI\textsuperscript{$\dagger$} & \texttt{/derivatives/qsiprep/} \\ CuBIDS variant assignment & \texttt{participants}* & \texttt{site\_variant} column \\ Raw expert ratings & OSF\textsuperscript{$\ddagger$} & \texttt{/expert-qc/} \\ Expert QC scores & \texttt{participants}* & \texttt{expert\_qc\_score} column \\ Raw community ratings & OSF\textsuperscript{$\ddagger$} & \texttt{/community-qc/} \\ Community QC scores & \texttt{participants}* & \texttt{xgb\_qc\_score} column \\ QSIQC QC scores & \texttt{participants}* & \texttt{xgb\_qsiprep\_qc\_score} column \\ QSIQC quality rating model & GitHub & DOI: \href{https://doi.org/10.5281/zenodo.5949269}{10.5281/zenodo.5949269} \\ Deep learning input images & FCP-INDI\textsuperscript{$\dagger$} & \texttt{/derivatives/qsiprep/derivatives/dlqc/} \\ Deep learning model checkpoints & OSF\textsuperscript{$\ddagger$} & \texttt{/deep-learning-qc/saved-models} \\ Deep learning QC scores & \texttt{participants}* & \texttt{dl\_qc\_score} column \\ Deep learning attribution maps & OSF\textsuperscript{$\ddagger$} & \texttt{/deep-learning-qc/integrated-gradients} \\ pyAFQ tractography \& tractometry & FCP-INDI\textsuperscript{$\dagger$} & \texttt{/derivatives/afq/} \\ pyAFQ tract profiles & FCP-INDI\textsuperscript{$\dagger$} & \texttt{/derivatives/afq/combined\_tract\_profiles.csv} \\ \end{longtable} \subsection*{CuBIDS Variants} The specific variant of each scanning session is provided as a column 
in the HBN-POD2 \texttt{participants.tsv} file, and a summary of variants with participant counts is provided in Appendix~\ref{app:variants}. Users may test their BIDS-Apps on a subset of participants that represents the full range of acquisition parameters that are present.
\subsection*{Quality control data}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We provide four separate QC scores in the \texttt{participants.tsv} file described in Table~\ref{tab:data-records}. The mean expert ratings are available in the ``expert\_qc\_score'' column. These ratings are scaled to the range \numrange{0}{1}, so that a mean rating from \numrange{0}{0.2} corresponds to an expert rating of ``definitely fail'', a mean rating from \numrange{0.2}{0.4} corresponds to ``probably fail'', from \numrange{0.4}{0.6} corresponds to ``not sure'', from \numrange{0.6}{0.8} corresponds to ``probably pass'', and \numrange{0.8}{1.0} corresponds to ``definitely pass.'' The XGB model's positive class probabilities are available in the ``xgb\_qc\_score'' column, while the XGB-q model's positive class probabilities are available in the ``xgb\_qsiprep\_qc\_score'' column. Finally, the CNN-i+q model's positive class probabilities are available in the ``dl\_qc\_score'' column.
\subsection*{Tractography and tractometry}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The outputs of the pyAFQ tractometry pipeline, including tractography and tract profiles, are provided in a BIDS derivative directory on FCP-INDI as specified in Table~\ref{tab:data-records}. In particular, the FA and MD tract profiles for each participant are available on S3 at \url{s3://fcp-indi/data/Projects/HBN/BIDS_curated/derivatives/afq/combined_tract_profiles.csv}. For each subject, intermediate data derivatives of the pyAFQ pipeline are also provided.
\begin{itemize}
\item A brain mask and mean $b=0$ image are saved with ``\texttt{\_brain\_mask.nii.gz}'' and ``\texttt{\_b0.nii.gz}'' file-name suffixes. A set of diffusion modeling derivatives is saved for each of three different diffusion models: DTI, DKI, and CSD. Diffusion model parameters are saved with the ``\texttt{\_diffmodel.nii.gz}'' suffix. Derived model scalars are saved with suffixes that indicate the model and the scalar. For example, the FA derived from the DTI model is saved with the ``\texttt{\_DTI\_FA.nii.gz}'' suffix.
\item Masks used to initialize tractography are saved with the ``\texttt{seed\_mask.nii.gz}'' suffix, while those used to determine the stopping criterion for tractography are stored with the ``\texttt{stop\_mask.nii.gz}'' suffix.
\item Files that define a non-linear transformation between the individual subject anatomy and the MNI template for the purpose of waypoint ROI placement are stored with ``\texttt{mapping\_from-DWI\_to\_MNI\_xfm.nii.gz}'' (non-linear component) and ``\texttt{prealing\_from-DWI\_to\_MNI\_xfm.npy}'' (affine component) suffixes. The waypoint ROIs, transformed to the subject anatomy through this non-linear transformation, are also stored in the ``ROIs'' sub-directory.
\item Tractography derivatives are stored with the ``\texttt{\_tractography.trk}'' suffix. The whole-brain tractography, which serves as the input data for bundle segmentation, is stored with the ``\texttt{\_CSD\_desc-prob\_tractography.trk}'' suffix. Streamlines that were selected for inclusion in one of the major bundles are stored in separate files in the ``bundles'' sub-directory and saved in a consolidated file with the ``\texttt{CSD\_desc-prob-afq\_tractography.trk}'' suffix.
The streamlines selected for inclusion and also additionally cleaned through a process of outlier removal are stored with the ``\texttt{CSD\_desc-prob-afq-clean\_tractography.trk}'' suffix and also in a ``clean\_bundles'' sub-directory.
\item An interactive visualization of bundles relative to the individual anatomy is stored with the ``\texttt{\_viz.html}'' suffix, and summaries of streamline counts in each bundle are stored with the ``\texttt{\_sl\_count.csv}'' suffix. Additional visualizations are provided in the ``tract\_profile\_plots'' and ``viz\_bundles'' sub-directories.
\item Individual tract profiles are stored with the ``\texttt{afq\_profiles.csv}'' suffix. This information is redundant with that provided in aggregate format in the ``\texttt{combined\_tract\_profiles.csv}'' file.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Technical Validation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection*{Attribution masks for the deep learning classifier}
We generated post-hoc attribution maps that highlight regions of the input volume that are relevant to the QC scores generated by the deep learning model. The integrated gradients method \cite{sundararajan2017axiomatic} is a gradient-based attribution method \cite{ancona2019gradient} that aggregates gradients for synthetic images interpolating between a baseline image and the input image. It has been used to interpret deep learning models applied to retinal imaging in diabetic retinopathy \cite{sayres2019using} and glaucoma \cite{Mehta2021-zp} prediction, as well as in multiple sclerosis prediction from brain MRI \cite{wargnier-dauchelle2021interpretable}. Our goal was to confirm that the CNN-i model was driven by the same features that would drive the expert rating, thereby bolstering the decision to apply it to new data. To generate the attribution maps, we followed TensorFlow's integrated gradients tutorial \cite{integrated-gradients-tutorial} with a black baseline image and 128 steps in the Riemann sum approximation of the integral (i.e. \texttt{m\_steps = 128}). Figure~\ref{fig:ig} shows attribution maps for example participants from each confusion class: true positive, true negative, false positive, and false negative. The columns correspond to the different channels of the deep learning input volume: the $b=0$ reference image and the DEC-FA in the $x$, $y$, and $z$ directions. The blue voxels indicate positive attribution, that is, data that supports a passing QC classification. Conversely, the red voxels indicate negative attribution, that is, data that supports a failing QC classification. The true positive map indicates that the network was looking at the entire brain rather than focusing on any one anatomical region (Figure~\ref{fig:ig:true-pos}). Moreover, the model identified white matter fascicles that travel along the direction of the input channel: lateral for $x$, anterior-posterior for $y$, and superior-inferior for $z$. The true negative attribution map (Figure~\ref{fig:ig:true-neg}) reveals that when the reference $b=0$ volume contains motion artifacts, such as banding, the network ignored the otherwise positive attributions for the clearly identifiable white matter tracts in the DEC-FA channels. The false positive map (Figure~\ref{fig:ig:false-pos}) and the false negative map (Figure~\ref{fig:ig:false-neg}) should be interpreted differently since they come from low-confidence predictions; the probability of passing hovered on either side of the pass/fail threshold.
Figure~\ref{fig:ig} shows attribution maps for example participants from each confusion class: true positive, true negative, false positive, and false negative. The columns correspond to the different channels of the deep learning input volume: the $b=0$ reference image and the DEC-FA in the $x$, $y$, and $z$ directions. The blue voxels indicate positive attribution, that is, data that supports a passing QC classification. Conversely, the red voxels indicate negative attribution, data that supports a failing QC classification. The true positive map indicates that the network was looking at the entire brain rather than focusing on any one anatomical region (Figure~\ref{fig:ig:true-pos}). Moreover, the model identified white matter fascicles that travel along the direction of the input channel: lateral for $x$, anterior-posterior for $y$, and superior-inferior for $z$. The true negative attribution map (Figure~\ref{fig:ig:true-neg}) reveals that when the reference $b=0$ volume contains motion artifacts, such as banding, the network ignored the otherwise positive attributions for the clearly identifiable white matter tracts in the DEC-FA channels. The false positive map (Figure~\ref{fig:ig:false-pos}) and the false negative map (Figure~\ref{fig:ig:false-neg}) should be interpreted differently since they come from low confidence predictions; the probability of passing hovered on either side of the pass/fail threshold. For example, in the false positive case, the network was confused enough that it treated voxels outside of the brain as being as informative as voxels in the major white matter bundles.
\begin{figure}[tbp]
\centering
\adjustbox{minipage=3.5em}{\subcaption{true positive}\label{fig:ig:true-pos}}%
\begin{subfigure}{\dimexpr\linewidth-3.5em\relax}
\centering
\includegraphics[width=\linewidth]{deep-learning-qc/attribution-maps-true-pos.pdf}
\end{subfigure}
\adjustbox{minipage=3.5em}{\subcaption{true negative}\label{fig:ig:true-neg}}%
\begin{subfigure}{\dimexpr\linewidth-3.5em\relax}
\centering
\includegraphics[width=\linewidth]{deep-learning-qc/attribution-maps-true-neg.pdf}
\end{subfigure}
\adjustbox{minipage=3.5em}{\subcaption{false positive}\label{fig:ig:false-pos}}%
\begin{subfigure}{\dimexpr\linewidth-3.5em\relax}
\centering
\includegraphics[width=\linewidth]{deep-learning-qc/attribution-maps-false-pos.pdf}
\end{subfigure}
\adjustbox{minipage=3.5em}{\subcaption{false negative}\label{fig:ig:false-neg}}%
\begin{subfigure}{\dimexpr\linewidth-3.5em\relax}
\centering
\includegraphics[width=\linewidth]{deep-learning-qc/attribution-maps-false-neg.pdf}
\end{subfigure}
\caption{%
{\bf Integrated gradients attribution maps for the deep learning classifier}: Each column depicts a different channel of the input tensor: the $b=0$ DWI volume and the DEC-FA images in the $x$, $y$, and $z$ directions. The first three columns show an axial slice while the last column shows a coronal slice. Blue voxels indicate positive attribution (i.e. evidence for passing the participant), while red voxels indicate negative attribution (i.e. evidence for QC failure). The underlying grayscale depicts the input channel. Each row depicts a representative participant from each confusion class:
%
\textbf{(a)} Attribution maps for a true positive prediction. The model looked at the entire brain and focused on known white matter bundles in the DEC-FA channels. In particular, it focused on lateral bundles in the $x$ direction, anterior-posterior bundles in the $y$ direction, and superior-inferior bundles in the $z$ direction.
%
\textbf{(b)} Attribution maps for a true negative prediction. The model focused primarily on the $b=0$ channel, suggesting that it ignores DEC-FA when motion artifacts like banding are present.
%
\textbf{(c)} Attribution maps for a false positive prediction. Both the false positive and negative predictions were low confidence predictions. This is reinforced by the fact that the model viewed some voxels that are outside of the brain as just as informative as those in major white matter tracts.
%
\textbf{(d)} Attribution maps for a false negative prediction. The model failed to find long-range white matter tracts in the anterior-posterior and lateral directions. We also speculate that the model expected left-right symmetry in the DEC-FA channels and assigned negative attribution to asymmetrical features.
}
\label{fig:ig}
\end{figure}
\subsubsection*{QC prediction models can generalize to unseen sites}
Site harmonization is a major issue for any multisite neuroimaging study, and developing automated QC tools that generalize between sites has been a perennial challenge \cite{esteban2017mriqc}. Furthermore, the ability to generalize between sites in a single multisite study would signal the promise of generalizing to other datasets altogether.
To better understand the ability of our QC models to generalize across scanning sites, we trained multiple versions of XGB-q and CNN-i on partitions of the data with different scanning sites held out and then evaluated those models on the held out sites (Figure~\ref{fig:site-generalization} and Table~\ref{tab:site-generalization}). These models were therefore evaluated on data from ``unseen'' sites. We constructed these train/evaluate splits from combinations of the HBN sites with \qty{3}{\tesla} scanners (RU, CBIC, and CUNY), and excluded CUNY as a standalone training or test site because of its low number of participants ($N=74$). This left four combinations of training and evaluation sites: CBIC~+~CUNY (eval: RU), CBIC (eval: RU~+~CUNY), RU~+~CUNY (eval: CBIC), and RU (eval: CBIC~+~CUNY). We trained eight models (with distinct random seeds) from the CNN-i family of models using the global XGB scores as targets, just as with the full CNN-i model. Similarly, we trained twenty models (with distinct random seeds) from the XGB-q family of models using the expert scores as targets, just as with the full XGB-q model. For each model, we reported three evaluation metrics: ROC-AUC, accuracy, and balanced accuracy. Because the distribution of QC scores was imbalanced (Figures~\ref{fig:expert-qc:scatter:hist} and \ref{fig:dl-qc:hist:site}), we included balanced accuracy as an evaluation metric. Balanced accuracy avoids inflated accuracy estimates on imbalanced data \cite{velez2007balanced}, and in the binary classification case, it is the mean of the sensitivity and specificity. For the CNN-i family, we further decomposed the evaluation split into a report set, for which expert scores were available, and a test set, with participants who were not in the ``gold standard'' dataset. For the report set, we evaluated the model using the expert scores as the ground truth. For the test set, we evaluated each model using the XGB scores as ground truth. Aside from the specification of training and evaluation splits, model training followed exactly the same procedure as for the full dataset. For example, we used the same cross-validation and hyperparameter optimization procedure for the XGB-q family as for the original XGB-q model and the same architecture, input format, and early stopping criteria for the CNN-i family as for the CNN-i model. ROC-AUC for generalization was uniformly high for both the XGB-q and the CNN-i models. More importantly, however, accuracy and balanced accuracy varied substantially: depending on the site that was used for training, balanced accuracy could be as low as chance, particularly for the CNN-i model. Notably, it seems that including the RU site in the training data led to relatively high balanced accuracy in both models. The XGB-q model balanced accuracy was less dependent on the specific sites used for training, but also displayed some variability across permutations of this experiment. In particular, the benefit from including the ``right site'' in the training data, namely RU, eclipsed the slight benefit conferred by including more than one site in the training data.
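For reference, balanced accuracy can be computed directly from the confusion matrix; the following short Python sketch (with placeholder pass/fail labels) shows the manual calculation alongside the \emph{scikit-learn} convenience function:
\begin{verbatim}
import numpy as np
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

# Placeholder pass/fail labels and predictions (1 = pass, 0 = fail)
y_true = np.array([1, 1, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 1, 1, 0, 0, 1, 1, 1])

# Balanced accuracy is the mean of sensitivity and specificity
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print((sensitivity + specificity) / 2)

# Equivalent convenience function
print(balanced_accuracy_score(y_true, y_pred))
\end{verbatim}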
\begin{figure}[tbp] {\phantomsubcaption\label{fig:site-generalization:dl}} {\phantomsubcaption\label{fig:site-generalization:xgb}} \centering \includegraphics[width=\linewidth]{site-generalization/site_generalization.pdf} \caption{% {\bf Generalization of QC scores to unseen sites}: In each experiment, CNN-i (\textbf{a}) and XGB-q (\textbf{b}) models were trained with some sites held out and evaluated only on data from these held out sites. Model performance is quantified as ROC-AUC (blue), accuracy (orange) and balanced accuracy (green). For XGB-q, the targets are the expert ratings on data from the held out site. For CNN-i, performance is scored against XGB scores (as used before; test set in filled circles), or expert ratings on the data from the held out site (report set in crosses). Summary statistics for this plot are listed in Table~\ref{tab:site-generalization}. } \label{fig:site-generalization} \vspace{1em} \captionof{table}{% {\bf Site generalization summary statistics}: Below we list the mean $\pm$ standard deviation of the site generalization evaluation metrics displayed in Figure~\ref{fig:site-generalization}. For each of the CNN-i and XGB-q model families and each of the site generalization splits, we report the accuracy, balanced accuracy, and ROC-AUC. } \begin{tabular}{lllll} \toprule & & Accuracy & Balanced accuracy & ROC-AUC \\ Model & Site & & & \\ \midrule CNN-i & train: CBIC + CUNY, test: RU & $0.748 \pm 0.086$ & $0.652 \pm 0.112$ & $0.930 \pm 0.015$ \\ & train: CBIC, test: RU + CUNY & $0.696 \pm 0.095$ & $0.574 \pm 0.123$ & $0.791 \pm 0.169$ \\ & train: RU + CUNY, test: CBIC & $0.859 \pm 0.033$ & $0.847 \pm 0.030$ & $0.912 \pm 0.013$ \\ & train: RU, test: CBIC + CUNY & $0.851 \pm 0.018$ & $0.753 \pm 0.029$ & $0.910 \pm 0.014$ \\ XGB-q & train: CBIC+CUNY, test: RU & $0.763 \pm 0.071$ & $0.805 \pm 0.052$ & $0.895 \pm 0.006$ \\ & train: CBIC, test: RU+CUNY & $0.725 \pm 0.079$ & $0.779 \pm 0.058$ & $0.886 \pm 0.019$ \\ & train: RU+CUNY, test: CBIC & $0.894 \pm 0.024$ & $0.838 \pm 0.036$ & $0.931 \pm 0.018$ \\ & train: RU, test: CBIC+CUNY & $0.886 \pm 0.030$ & $0.816 \pm 0.048$ & $0.940 \pm 0.017$ \\ \bottomrule \end{tabular} \label{tab:site-generalization} \end{figure} \subsection*{Quality control improves inference} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% To demonstrate the effect that quality control has on inference, we analyzed tract profile data derived from HBN-POD2 data. \todo[inline]{Add SLFL MD and FA profiles to Figure 8} Missing values were imputed using median imputation as implemented by \emph{scikit-learn}'s \texttt{SimpleImputer} class. Because the HBN-POD2 bundle profiles exhibit strong site effects \cite{richie-halford2021multidimensional}, we used the ComBat harmonization method to robustly adjust for site effects in the tract profiles \cite{Johnson2007-kl, fortin2018-hk, fortin2017-be, nielson2018detecting}, using the \emph{neurocombat\_sklearn} library \cite{neurocombat-sklearn}. In Figure~\ref{fig:age-prediction}, we plot the mean diffusivity (MD) and fractional anisotropy (FA) profiles along the left superior longitudinal fasciculus (SLFL) grouped into four QC bins. The SLFL exhibits strong differences between QC bins. Low QC scores tend to flatten the MD and FA profiles, indicating that MD and FA appear artifactually homogeneous across the bundle. 
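The imputation and harmonization steps described above can be sketched in Python as follows. The sketch assumes that the tract profiles have been assembled into a participants-by-features array \texttt{X} with a matching array of scanning-site labels \texttt{sites}; the \texttt{CombatModel} call reflects our reading of the \emph{neurocombat\_sklearn} interface and may need adjustment for the installed version.
\begin{verbatim}
import numpy as np
from sklearn.impute import SimpleImputer
from neurocombat_sklearn import CombatModel

# X: participants x features matrix of tract profile values, with NaN
# wherever a bundle was not found; sites: one site label per participant
X_imputed = SimpleImputer(strategy="median").fit_transform(X)

# ComBat harmonization of site effects; passing the site labels as a
# two-dimensional column vector is an assumption about the interface
combat = CombatModel()
X_harmonized = combat.fit_transform(X_imputed,
                                    np.asarray(sites).reshape(-1, 1))
\end{verbatim}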
The effect of QC score on white matter bundle profiles indicates that researchers using HBN-POD2 should incorporate QC in their analyses, either by applying a QC cutoff when selecting participants or by explicitly adding QC score to their inferential models. Failure to do so may cause spurious associations or degrade predictive performance. To demonstrate this, we selected participant age as a representative phenotypic benchmark because
\begin{enumerate*}[%
label=(\roman*),%
before=\unskip{ },%
itemjoin={{, }},%
itemjoin*={{ and }}]
\item it operates on a natural scale with meaningful units
\item despite the unique methodological challenges it presents for biomarker identification \cite{nelson2020biomarkers}, brain age prediction may be diagnostic of overall brain health \cite{franke2010estimating, cole2019brain, richie-halford2021multidimensional}.
\end{enumerate*}
We observed the effect of varying the QC cutoff on the predictive performance of an age prediction model (Figure~\ref{fig:age-prediction}).
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{age-prediction/qc_sweep.pdf}
\caption{%
{\bf Imposing a QC cutoff improves age prediction}: Cross validated $R^2$ scores (left axis, blue dots) from an age prediction model increase after screening participants by QC score. We see the most dramatic increase in $R^2$ after imposing even the lowest cutoff of $0.05$. Thereafter, the $R^2$ scores trend upward until a cutoff of $\sim 0.95$, where the training set size (right axis, orange line) becomes too small to sustain model performance. The error bands represent a bootstrapped 95\% confidence interval.
}
\label{fig:age-prediction}
\end{figure}
We evaluated this effect by observing cross-validated $R^2$ values of gradient boosted trees models implemented using XGBoost. The input feature space for each model consisted of \num{4800} features per participant, comprising 100 nodes for each of MD and FA in the twenty-four major tracts. We imputed missing bundles and harmonized the different scanning sites as above. The XGBoost models' hyperparameters were hand-tuned to values that have been performant in the authors' previous experience. Within the limited age range of the HBN study, MD and FA follow logarithmic maturation trajectories \cite{yeatman2014lifespan}. We therefore log-transformed each participant's age before prediction using the \texttt{TransformedTargetRegressor} class from \emph{scikit-learn}. For each value of the QC cutoff between 0 and 0.95, in steps of 0.05, we computed the cross-validated $R^2$ values using \emph{scikit-learn}'s \texttt{cross\_val\_score} function with repeated K-fold cross-validation using five folds and five repeats. Cross-validated $R^2$ scores for an age prediction model varied depending on the QC cutoff (Figure~\ref{fig:age-prediction}). An initial large improvement was achieved by excluding the 200 participants with the lowest QC scores, followed by a gradual increase in performance. Finally, when a large number of participants was excluded, performance deteriorated again.
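The QC-cutoff sweep can be sketched as follows (a simplified Python illustration rather than the exact analysis code; \texttt{X\_harmonized}, \texttt{age}, and \texttt{qc} are assumed to hold the harmonized tract profile features, participant ages, and QC scores, and the XGBoost hyperparameters shown are placeholders rather than the hand-tuned values):
\begin{verbatim}
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score
from xgboost import XGBRegressor

def qc_sweep_r2(X_harmonized, age, qc):
    """Cross-validated R^2 of an age model at each QC cutoff."""
    scores = {}
    for cutoff in np.arange(0.0, 1.0, 0.05):  # cutoffs 0, 0.05, ..., 0.95
        keep = qc >= cutoff
        # Gradient boosted trees predicting log(age); hyperparameters
        # here are placeholders
        model = TransformedTargetRegressor(
            regressor=XGBRegressor(n_estimators=200, max_depth=3),
            func=np.log,
            inverse_func=np.exp,
        )
        cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=0)
        scores[cutoff] = cross_val_score(
            model, X_harmonized[keep], age[keep], scoring="r2", cv=cv
        ).mean()
    return scores
\end{verbatim}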
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Usage Notes}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
HBN-POD2 is one of the largest child and adolescent diffusion imaging datasets with preprocessed derivatives that is currently openly available. The dataset was designed to comply with the best practices of the field. For example, it complies with the current draft of the BIDS diffusion derivative specification \cite{Pestilli2021}. It will grow continuously as the HBN study acquires more data, eventually reaching its \num{5000} participant goal.
\subsection*{Preprocessing and quality control increase the impact of openly-available data}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The HBN-POD2 data is amenable to many different analyses, including tractometry \cite{yeatman2012-rc, yeatman2018browser, kruper2021evaluating}, graph theoretical analysis \cite{yeh2020-nu}, and combinations with functional MRI data and other data types for the same participants. The availability of standardized preprocessed diffusion data will allow researchers to create and test hypotheses on the white matter properties underlying behavior and disease, from reading and math acquisition to childhood adversity and mental health. As such, this dataset will accelerate discovery at the nexus of white matter microstructure and neurodevelopmental and learning disorders. In large developmental datasets, it is critically important to perform accurate and reliable QC of the data. QC is associated not just with age, but with many phenotypic variables of interest in cognition and psychopathology \cite{siegel2017quality}. HBN-POD2 provides four separate QC scores alongside its large dataset of pediatric neuroimaging diffusion derivatives, paving the way for users to incorporate data quality into their analyses of the processed data. Unsurprisingly, QC scores are strongly correlated with age (Figure~\ref{fig:dl-qc}). This accords with the negative association between head motion and age in developmental studies, which is well established both in general \cite{power2012spurious,satterthwaite2012impact,fair2012distinct,yendiki2014spurious} and specifically for resting-state fMRI in the HBN dataset \cite{alexander2017-yc}. Moreover, it is important to note that QC has bundle-specific and spatially localized effects (Figure~\ref{fig:qc-profiles:md}). Analysis of this data that does not incorporate QC is likely to find replicable but invalid effects. For example, in patient-control studies, patients are likely to have lower quality data, and analysis of such patient data that does not control for QC will find spatially localized and replicable group differences that are due to data quality rather than to underlying neuroanatomical differences. We further demonstrated the impact of QC in a benchmark age prediction task (Figure~\ref{fig:age-prediction}). In this case, the increase in model performance from imposing a QC cutoff is intuitive: we know from Figure~\ref{fig:qc-profiles:md} that participants with low QC scores have reduced MD, but MD also decreases as participants mature \cite{yeatman2014lifespan,richie-halford2021multidimensional}. Eliminating participants with low QC therefore removes the ones who may look artificially older from the analysis, improving overall performance. The most noticeable improvement in performance comes after imposing the most modest cutoff of $0.05$, suggesting that inferences may benefit from \emph{any} QC screening. On the other hand, QC screening inherently introduces a tradeoff between the desire for high quality data and the desire for a large sample size. In this case, after a QC cutoff of around $0.9$, the training set size is reduced such that it degrades predictive performance.
Importantly, we do not expect the sensitivity analysis of an age prediction model to generalize to other analyses and therefore recommend that researchers using HBN-POD2 choose the most appropriate QC cutoff for their research question and consider including QC score as a model covariate in their analyses. \subsection*{Automated quality control: scalability, interpretability, and generalization} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The predictive performance of the CNN-i model (Figure~\ref{fig:dl-qc:roc}) gives us confidence that it could accurately classify unseen data from the same sites, justifying its extension to the entire HBN-POD2 dataset and to future releases of HBN. However, one limitation of this model is that it does not satisfactorily explain its decisions. As deep learning models have been increasingly applied to medical image analysis, there is an evolving interest in the interpretability of these models \cite{salahuddin2022transparency, lipton2017doctor, Zech2018-ki, Ghassemi2021-zg}. While an exhaustive interpretation of deep learning QC models is beyond the scope of this work, we provided a preliminary qualitative interpretation of the CNN-i model (Figure~\ref{fig:ig}) that demonstrates the intuitive nature of its decisions. The accuracy in generalizing to unseen data from HBN also suggested the tantalizing possibility that the QC models would be able to generalize to similar data from other datasets. To assess this, we trained the models with unseen sites held out (Figure \ref{fig:site-generalization}). Both the CNN-i model and the XGB-q model do sometimes generalize to data from unseen sites, suggesting that they would be able to generalize to some other datasets as well. However, they do not reliably generalize, implying that they should not currently be used in this way. Future work could build upon the work that we have done here to establish a procedure whereby the models that we fit in HBN would be applied to data from other studies, but comprehensive calibration and validation would have to be undertaken as part of this procedure. We recognize that decisions about QC inclusion must balance accuracy, interpretability, generalization to new data, and scalability to ever larger datasets. We therefore provide three additional scores \begin{enumerate*}[% label=(\roman*),% before=\unskip{: },% itemjoin={{, }},% itemjoin*={{, and }}] \item the mean expert QC score for the 200 participants in the gold standard dataset \item the scores predicted by the XGB model, which outperformed all other models when evaluated against the gold standard ratings, but which are only available for participants that have community science scores \item the scores predicted by the XGB-q model, which underperformed the deep learning generated scores, but which rely only on the automated QC metrics output by \emph{QSIPrep}. \end{enumerate*} We view the XGB-q scores, which are available for all participants, as a more interpretable and scalable fallback because the XGB-q model ingests \emph{QSIPrep} output without any further postprocessing. XGB-q also provides slightly more uniform performance in generalization to unseen HBN sites (Figure~\ref{fig:site-generalization}). 
Because the XGB-q model most readily generalizes to other \emph{QSIPrep} outputs, we package it as an independent QC service in the QSIQC software package \cite{richiehalford2022qsiqc}, available both as a docker image at \texttt{ghcr.io/richford/qsiqc} and as a Streamlit app at \url{https://share.streamlit.io/richford/qsiqc/main/app.py}. The use of a more interpretable but slightly less performant method of generating QC scores was also advocated by \cite{tobe2021longitudinal}, who noted that the Euler number of T1-weighted images \cite{rosen2018quantitative} in the NKI-Rockland dataset can reliably predict scores generated with \emph{Braindr}, the community science application developed in our previous work \cite{keshavan2019-er}. We also note that the issue of algorithmic impact in choosing a QC method is not exclusive to the deep learning model. We have chosen models that most reliably reproduce the gold standard ratings, but a reliable algorithm might still negatively influence researchers' decisions. For example, excluding participants by QC score could lead researchers to exclude populations deserving of study, as when QC score is highly correlated with age or socio-economic status. We therefore caution researchers to examine interactions between the QC scores we provide and their phenotype of interest. More generally, QC in the dataset that we have produced is fundamentally anchored to the decisions made by the expert observers. While Cohen's $\kappa$ between some pairs of experts can be as low as 0.52, IRR quantified across all of the experts with ICC3k is excellent. Nevertheless, it is possible that improvements to the final QC scores could be obtained through improvements to IRR, or by designing a more extensive expert QC protocol. The tradeoff between more extensive QC for each participant and more superficial QC on more participants was not explored in this study, but could also be a target for future research.
\subsection*{Transparent pipelines provide an extensible baseline for future methods}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
While the primary audience of HBN-POD2 is researchers in neurodevelopment who will use the dMRI derivatives in their studies, other researchers may use HBN-POD2 to develop new preprocessing algorithms or quality control methods. In this respect, HBN-POD2 follows \cite{avesani2019-ey}, who recognized the diverse interests that different scientific communities have in reusing neuroimaging data and coined the term \emph{data upcycling} to promote multiple-use data sharing for purposes secondary to those of the original project. Complementing the approach taken in Avesani et al.'s work, which provided dMRI from a small number of participants preprocessed with many pipelines, HBN-POD2 contains many participants, all processed with a single state-of-the-art pipeline, \emph{QSIPrep}. For researchers developing new preprocessing algorithms, HBN-POD2 provides a large, openly available baseline to which they can compare their results. Similarly, neuroimaging QC methods developers will benefit from a large benchmark dataset of expert, community science, and automated QC ratings, with which to test new methods. Importantly, the architecture and parameters of the deep learning network used for QC are also provided as part of this work, allowing application of this network to future releases of HBN data, and allowing other researchers to build upon our efforts. Indeed, in this work, we have extended our previous work on what we now call ``hybrid QC''.
This approach, which we originally applied to the first two releases of the HBN T1-weighted data \cite{keshavan2019-er} (using the \emph{Braindr} web app: \url{https://braindr.us}), was extended here in several respects. First, the \emph{Braindr} study used a smaller dataset of approximately 700 participants, while we extended this approach to well over \num{2000} participants. Second, \emph{Braindr} relied on approximately \num{80000} ratings from \num{261} users. Here, we received more than \num{500000} ratings from \num{374} community scientists. As our understanding of the role of community scientist contributions has evolved, we decided that we would include as collective co-authors community scientists who contributed more than \num{3000} ratings \cite{Ward-Fear2020-zq}. Third, \emph{Braindr} used data from only a single site. Here, we used multi-site data. This opens up multiple possibilities for deeper exploration of between-site quality differences, and also for harmonization of QC across sites, as we have attempted here. Last, the most challenging extension of hybrid QC from \emph{Braindr} to this study entailed developing an approach that would encompass multi-volume dMRI data. On the one hand, this meant that the task performed by the expert observers was more challenging, because it required examination of the full dMRI time-series for every scan. Indeed, expert inter-rater reliability was considerably higher for the T1-weighted-only data in \cite{keshavan2019-er} than for the dMRI data used here (Figure~\ref{fig:expert-qc:irr}). On the other hand, it also meant that the 4D data had to be summarized into 2D data to be displayed in the \emph{Fibr} web application. This was achieved by summarizing the entire time-series as a DEC-FA + $b=0$ image and presenting community scientists with animated sections of these images that showed how the data extended over several horizontal slices. In addition, the extension to 4D data required developing new deep learning architectures for analysis of 4D images, including upstream contributions to \emph{Nobrainer}, a community-developed software library for deep learning in neuroimaging data \cite{nobrainer}. These extensions demonstrate that the hybrid QC approach generalizes well to a variety of circumstances. Future applications of this approach could generalize to functional MRI data, as well as to other large datasets from other kinds of measurements and other research domains.
\subsection*{Future work and open problems}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The HBN study plans to acquire imaging data for over \num{5000} participants, necessitating future data releases. Since future releases of HBN will also require future releases of HBN-POD2, a plan for these is essential. This is a general issue affecting multi-year neuroimaging projects for which derivative data is being released before study completion. The use of \emph{QSIPrep}, \emph{cloudknot} and the containerization of the QC score assignment process facilitate running the exact pipeline described in this paper on newly released participants. However, this approach is somewhat unsatisfactory because it fails to anticipate improvements in preprocessing methodology. That is, what should we do when \emph{QSIPrep} is inevitably updated between HBN releases? Enforce standardization by using an outdated pipeline or use state-of-the-art preprocessing at the expense of standardized processing between releases?
Because the use of \emph{cloudknot} and AWS Spot Instances renders preprocessing fast and relatively inexpensive, we propose a third way: if improvements to the preprocessing pipeline are available with a new HBN release, we plan to execute the improved pipeline on the entire HBN dataset, while preserving the previous baseline release in an archived BIDS derivative dataset. Undertaking the processing and QC effort to generate HBN-POD2 required construction and deployment of substantial informatics infrastructure, including tools for cloud computing, web applications for expert annotation and community science rating, and analysis software. All of these tools are provided openly, so that this approach can be generalized even more widely in other projects and in other scientific fields.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Code Availability}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To facilitate replicability, Jupyter notebooks \cite{kluyver2016jupyter} and Dockerfiles \cite{merkel2014docker} necessary to reproduce the methods described herein are provided in the HBN-POD2 GitHub repository at \url{https://github.com/richford/hbn-pod2-qc}. The specific version of the repository used in this study is documented in \cite{richiehalford2022hbnpod2qc}. Most of the code in this repository uses Pandas \cite{mckinney-proc-scipy-2010,reback2020pandas}, Numpy \cite{harris2020array}, Matplotlib \cite{hunter2007matplotlib}, and Seaborn \cite{waskom2021seaborn}. The \texttt{make} or \texttt{make help} commands will list the available commands, and \texttt{make build} will build the requisite Docker images to analyze HBN-POD2 QC data.
\todo[inline]{Update zenodo ref for the HBN-POD2 GitHub repo.}
In order to separate data from analysis code \cite{Wilson2017-rj}, we provide intermediate data necessary to analyze the QC results in an OSF \cite{Foster-MSLS2017-rl} project \cite{hbn-pod2-osf}, the contents of which can be downloaded using the \texttt{make data} command in the root of the HBN-POD2 GitHub repository. The NIfTI-1 files and TFRecord files provided as input to the CNN models may be separately downloaded using the \texttt{make niftis} and \texttt{make tfrecs} commands, respectively. The remaining \texttt{make} commands and Jupyter notebooks follow the major steps of the methods section:
\begin{enumerate}
\item The \emph{cloudknot} preprocessing function used to execute \emph{QSIPrep} workflows on curated data was a thin wrapper around \emph{QSIPrep}'s command line interface and is provided in a Jupyter notebook with suffix \texttt{preprocess-remaining-hbn-curated.ipynb} in the HBN-POD2 GitHub repository in the ``notebooks'' directory.
\item The expert rating analysis can be replicated using the \texttt{make expert-qc} command in the HBN-POD2 GitHub repository.
\item The \emph{Fibr} community science web application is based on the SwipesForScience framework (\url{https://swipesforscience.org/}), which generates a web application for community science given an open repository of images to be labeled and a configuration file. The source code for the \emph{Fibr} web application is available at \url{https://github.com/richford/fibr}.
\item The images that the \emph{Fibr} raters saw were generated using a \emph{DIPY} \cite{dipy} \texttt{TensorModel} in a \emph{cloudknot}-enabled Jupyter notebook that is available in the ``notebooks'' directory of the \emph{Fibr} GitHub repository.
\emph{Fibr} saves each community rating to its Google Firebase backend, the contents of which have been archived to the HBN-POD2 OSF project as specified in Table~\ref{tab:data-records}.
\item The community ratings analysis can be replicated using the \texttt{make community-qc} command in the HBN-POD2 GitHub repository. Saved model checkpoints for each of the XGB models are available in the HBN-POD2 OSF project and are automatically downloaded with the \texttt{make data} command.
\item The input multichannel volumes for the CNN models were generated using \emph{DIPY} \cite{dipy} and \emph{cloudknot} \cite{cloudknot} and saved as NIfTI-1 files \cite{nifti}. These NIfTI files were then converted to the Tensorflow TFRecord format using the \emph{Nobrainer} deep learning framework \cite{nobrainer}. The Jupyter notebooks used to create these NIfTI and TFRecord files are available in the ``notebooks'' directory of the HBN-POD2 GitHub repository, with suffixes \texttt{save-b0-tensorfa-nifti.ipynb} and \texttt{save-tfrecs.ipynb}, respectively.
\item We trained the CNN models using the Google Cloud AI Platform Training service; the HBN-POD2 GitHub repository contains Docker services to launch training (with \texttt{make dl-train}) and prediction (with \texttt{make dl-predict}) jobs on Google Cloud, if the user has provided the appropriate credentials in an environment file and placed the TFRecord files on Google Cloud Storage. Further details on how to organize these files and write an environment file are available in the HBN-POD2 GitHub repository's \texttt{README\_GCP.md} file. To generate the figures depicting the deep learning QC pipeline and results, use the \texttt{make deep-learning-figures} command.
\item We provide a Docker service to compute integrated gradient attribution maps on Google Cloud, which can be invoked using the \texttt{make dl-integrated-gradients} command. This step also requires the setup steps described in \texttt{README\_GCP.md}.
\item We provide a Docker service to conduct the CNN-i site generalization experiments on Google Cloud, which can be invoked using the \texttt{make dl-site-generalization} command and, again, requires the setup steps described in \texttt{README\_GCP.md}. The XGB-q site generalization experiments can be replicated locally using the \texttt{make site-generalization} command, which will also plot the results of the CNN-i experiments.
\item The tractometry pipeline was executed using pyAFQ and \emph{cloudknot} in a Jupyter notebook with suffix \texttt{afq-hbn-curated.ipynb}, provided in the HBN-POD2 GitHub repository in the ``notebooks'' directory. The pyAFQ documentation contains a more pedagogical example of \href{https://yeatmanlab.github.io/pyAFQ/auto_examples/cloudknot_example.html}{using pyAFQ with cloudknot to analyze a large openly available dataset}.
\item The bundle profile and age prediction analyses can be replicated using the \texttt{make bundle-profiles} and \texttt{make inference} commands, respectively.
\end{enumerate}
\bibliography{hbn-pod2}
\section*{Acknowledgments}
We would like to thank Anisha Keshavan for useful discussions of community science and web-based quality control and for her work on SwipesForScience. This manuscript was prepared using a limited access dataset obtained from the Child Mind Institute Biobank, The Healthy Brain Network dataset. This manuscript reflects the views of the authors and does not necessarily reflect the opinions or views of the Child Mind Institute.
This work was supported via BRAIN Initiative grant 1RF1MH121868-01 from the National Institute of Mental Health. Additional support was provided by grant 1R01EB027585-01 from the National Institute of Biomedical Imaging and Bioengineering (PI: Eleftherios Garyfallidis). Additional support was provided by R01MH120482 and the Penn/CHOP Lifespan Brain Institute.
\section*{Author contributions statement}
The last two authors named share senior authorship. The first two authors named share lead authorship. The remaining authors are listed in alphabetical order, with the exception of the \emph{Fibr} Community Science Consortium, whose members provided community science QC ratings and are listed in Appendix~\ref{app:fibr-consortium}. We describe contributions to the paper using the CRediT taxonomy \cite{brand2015-vd,allen2014-oc}: Conceptualization: A.R-H., A.R., T.S., and M.C.; Methodology: A.R-H. and A.R.; Software: A.R-H., M.C., and S.C.; Validation: A.R-H., M.C., and S.C.; Formal Analysis: A.R-H. and M.C.; Investigation: A.R-H. and M.C.; Resources: A.R., T.S., and M.M.; Data Curation: S.C., M.C., V.J.S., I.I.K., B.A-P. and L.A.; Writing – Original Draft: A.R-H. and A.R.; Writing – Review \& Editing: A.R-H., A.R., M.C., A.F., T.S., V.J.S., I.I.K., B.A-P., and S.C.; Visualization: A.R-H.; Supervision: A.R. and T.S.; Project Administration: A.R-H. and A.R.; Funding Acquisition: A.R. and T.S.
\section*{Competing interests}
\todo[inline]{%
The corresponding author is responsible for providing a \href{https://www.nature.com/sdata/policies/editorial-and-publishing-policies#competing}{competing interests statement} on behalf of all authors of the paper. This statement must be included in the submitted article file.
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% APPENDICES
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\appendix
\section{CuBIDS variant annotation}
\label{app:variants}
We identified 20 unique dMRI acquisitions across HBN-POD2, which are summarized in Table~\ref{tab:variants}. Site CBIC has two acquisition types: ``64dir,'' which shares its pulse sequence with sites RU and CUNY, and ``ABCD64dir,'' with acquisition parameters that better match the ABCD study (TE=\qty{0.089}{\second} and TR=\qty{4.1}{\second}). The ``Most\_Common'' variant identifies the most common combination of acquisition parameters for a given site and acquisition. The ``Low\_Volume\_Count'' variant identifies participants from all sites with fewer than 129 DWI volumes, which is the number of volumes in the most common variants. All remaining variant names identify the acquisition parameter(s) that differ from those of the most common variant. For example, the ``MultibandAccelerationFactor'' variant has a different multiband acceleration factor than that of the most common variant, but all participants within that variant share the same multiband acceleration factor. Variants that differ by multiple acquisition parameters have names that are composed of concatenated parameters. For example, the variant ``Dim3SizeVoxelSizeDim3'' varies both in the number of voxels in dimension 3 (``Dim3Size'') and in the voxel size in dimension 3 (``VoxelSizeDim3'').
\begin{table}[htbp] \centering \begin{tabular}{lllr} \toprule Site & Acquisition & Variant & Count \\ \midrule CBIC & 64dir & Most\_Common & 828 \\ CBIC & 64dir & Obliquity & 32 \\ CBIC & 64dir & VoxelSizeDim1VoxelSizeDim2 & 1 \\ CBIC & ABCD64dir & Most\_Common & 15 \\ CBIC & ABCD64dir & HasFmap & 2 \\ CBIC & ABCD64dir & MultibandAccelerationFactor & 1 \\ CBIC & ABCD64dir & Obliquity & 1 \\ CUNY & 64dir & Most\_Common & 68 \\ CUNY & 64dir & Dim3SizeVoxelSizeDim3 & 4 \\ CUNY & 64dir & Obliquity & 2 \\ RU & 64dir & Most\_Common & 859 \\ RU & 64dir & NoFmap & 5 \\ RU & 64dir & Obliquity & 8 \\ RU & 64dir & PhaseEncodingDirection & 1 \\ SI & 64dir & EchoTime & 1 \\ SI & 64dir & EchoTimePhaseEncodingDirection & 9 \\ SI & 64dir & Most\_Common & 269 \\ SI & 64dir & NoFmap & 2 \\ SI & 64dir & Obliquity & 12 \\ All Sites & All Acquisitions & Low\_Volume\_Count & 14 \\ \bottomrule \end{tabular} \caption{% Participant counts for HBN-POD2 variants. \label{tab:variants} } \end{table} \section{HBN-POD2 quality control instruments} \label{app:web-apps} We created quality control web applications for both community raters and expert raters. These apps are publicly accessible at \url{https://fibr.dev}, for the community science instrument and at \url{http://www.nipreps.org/dmriprep-viewer/} for the expert rating instrument. We encourage readers to try these web applications on their own but have included screenshots and a summary of the interfaces in Figure~\ref{fig:web-apps}. \includegraphics[width=0.95\textwidth]{hbn-pod2-qc-instruments.pdf} \captionof{figure}{ {\bf HBN-POD2 quality control instruments}: {\bf (A)} The user interface for community science QC app \emph{Fibr}. After a tutorial, users are asked to give binary pass/fail ratings to each subject's DEC-FA image. The intuitive swipe or click interface allows community scientists to review more images than is practical for expert reviewers. Expert reviewers use the more advanced \emph{dmriprep-viewer} interface, where they can {\bf (B)} view the distribution of data quality metrics for the entire study using interactive scatterplots and violin plots, and {\bf (C)} inspect individual participants' preprocessing results, including corrected dMRI images, frame displacement, q-space sampling distributions, registration information, and a DTI model. \label{fig:web-apps} } \section{XGB feature importance} \label{app:feature-importance} SHAP is a method to explain individual predictions based on game theoretically optimal Shapley values \cite{lundberg2017unified}. To estimate global feature importance for the XGB and XGB-q models, we use the \texttt{shap} library's \texttt{TreeExplainer} \cite{lundberg2020local} and average the absolute Shapley value per feature across each individual prediction. Tables~\ref{tab:xgb-shap} and \ref{tab:xgb-q-shap} list the \emph{QSIPrep} automated QC metric features in order of decreasing mean absolute shap value for the XGB and XGB-q models, respectively. We chose the top three metrics from Table~\ref{tab:xgb-shap} to plot metric distributions in Figure~\ref{fig:metric-dist} and correlations with the expert QC results in Figure~\ref{fig:expert-qc}. 
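In code, this global importance estimate amounts to the following short Python sketch (assuming a fitted XGBoost classifier \texttt{model} and its feature matrix \texttt{X}; both names are placeholders):
\begin{verbatim}
import numpy as np
import shap

# Explain each individual prediction of the fitted gradient boosted model
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # participants x features

# Global feature importance: mean absolute Shapley value per feature
mean_abs_shap = np.abs(shap_values).mean(axis=0)
ranking = np.argsort(mean_abs_shap)[::-1]  # features, most important first
\end{verbatim}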
\begin{multicols}{2} \centering \begin{tabular}{lr|} \toprule {} & mean abs shap \\ feature & \\ \midrule raw\_neighbor\_corr & 0.666429 \\ max\_rel\_translation & 0.348662 \\ raw\_num\_bad\_slices & 0.288937 \\ t1\_neighbor\_corr & 0.282198 \\ raw\_incoherence\_index & 0.229733 \\ raw\_coherence\_index & 0.162103 \\ max\_rel\_rotation & 0.118963 \\ mean\_fd & 0.116457 \\ max\_fd & 0.099359 \\ max\_rotation & 0.078774 \\ t1\_coherence\_index & 0.035553 \\ t1\_dice\_distance & 0.034510 \\ max\_translation & 0.032323 \\ t1\_incoherence\_index & 0.030225 \\ raw\_voxel\_size\_x & 0.000000 \\ raw\_voxel\_size\_y & 0.000000 \\ raw\_voxel\_size\_z & 0.000000 \\ raw\_num\_directions & 0.000000 \\ raw\_max\_b & 0.000000 \\ raw\_dimension\_y & 0.000000 \\ raw\_dimension\_z & 0.000000 \\ t1\_voxel\_size\_x & 0.000000 \\ t1\_dimension\_x & 0.000000 \\ t1\_dimension\_y & 0.000000 \\ t1\_dimension\_z & 0.000000 \\ t1\_voxel\_size\_y & 0.000000 \\ t1\_voxel\_size\_z & 0.000000 \\ t1\_max\_b & 0.000000 \\ t1\_num\_bad\_slices & 0.000000 \\ t1\_num\_directions & 0.000000 \\ raw\_dimension\_x & 0.000000 \\ \bottomrule \end{tabular} \captionof{table}{% XGB mean absolute shap values \label{tab:xgb-shap} } {\nolinenumbers \begin{tabular}{|lr} \toprule {} & mean abs shap \\ feature & \\ \midrule raw\_neighbor\_corr & 0.767536 \\ raw\_incoherence\_index & 0.453897 \\ raw\_num\_bad\_slices & 0.430422 \\ t1\_coherence\_index & 0.382218 \\ max\_rel\_translation & 0.363052 \\ raw\_coherence\_index & 0.320438 \\ t1\_neighbor\_corr & 0.250948 \\ t1\_dice\_distance & 0.248104 \\ t1\_incoherence\_index & 0.242348 \\ max\_rel\_rotation & 0.135590 \\ mean\_fd & 0.128642 \\ max\_translation & 0.120815 \\ max\_fd & 0.119739 \\ max\_rotation & 0.101209 \\ t1\_num\_bad\_slices & 0.007075 \\ raw\_dimension\_y & 0.000000 \\ raw\_dimension\_z & 0.000000 \\ raw\_voxel\_size\_x & 0.000000 \\ raw\_voxel\_size\_y & 0.000000 \\ raw\_voxel\_size\_z & 0.000000 \\ raw\_max\_b & 0.000000 \\ t1\_voxel\_size\_x & 0.000000 \\ raw\_num\_directions & 0.000000 \\ t1\_dimension\_x & 0.000000 \\ t1\_dimension\_y & 0.000000 \\ t1\_dimension\_z & 0.000000 \\ t1\_voxel\_size\_y & 0.000000 \\ t1\_voxel\_size\_z & 0.000000 \\ t1\_max\_b & 0.000000 \\ t1\_num\_directions & 0.000000 \\ raw\_dimension\_x & 0.000000 \\ \bottomrule \end{tabular} \captionof{table}{% XGB-q mean absolute shap values \label{tab:xgb-q-shap} } } \end{multicols} \section{The \emph{Fibr} Community Science Consortium} \label{app:fibr-consortium} The following community raters provided $>3,000$ ratings each and elected to be included in the \emph{Fibr} Community Science Consortium as co-authors on this paper. \begin{longtable}{ll} \toprule Name & ORCID iD \\ \midrule \endfirsthead \toprule Name & ORCID iD \\ \midrule \endhead \midrule \multicolumn{2}{r}{{Continued on next page}} \\ \midrule \endfoot \bottomrule \endlastfoot Nicholas J. Abbott & 0000-0003-1466-0352 \\ John A. E. Anderson & 0000-0001-6511-1957 \\ Gagana B. & \\ MaryLena Bleile & 0000-0002-0762-2596 \\ Peter S. Bloomfield & 0000-0002-8356-7701 \\ Vince Bottom & \\ Josiane Bourque & \\ Rory Boyle & 0000-0003-0787-6892 \\ Julia K. Brynildsen & 0000-0002-1627-6576 \\ Navona Calarco & 0000-0002-4391-0472 \\ Jaime J. Castrellon & 0000-0001-5834-7101 \\ Natasha Chaku & 0000-0003-0944-6159 \\ Bosi Chen & 0000-0002-0117-9757 \\ Sidhant Chopra & 0000-0003-0866-3477 \\ Emily B. J. Coffey & 0000-0001-8249-7396 \\ Nigel Colenbier & 0000-0003-0928-2668 \\ Daniel J. Cox & \\ James Elliott Crippen & \\ Jacob J. 
Crouse & 0000-0002-3805-2936 \\ Szabolcs David & 0000-0003-0316-3895 \\ Benjamin De Leener & 0000-0002-1378-2756 \\ Gwyneth Delap & \\ Zhi-De Deng & 0000-0001-8925-0871 \\ Jules Roger Dugre & 0000-0003-4946-0350 \\ Anders Eklund & 0000-0001-7061-7995 \\ Kirsten Ellis & 0000-0002-7570-0939 \\ Arielle Ered & 0000-0002-8386-4423 \\ Harry Farmer & 0000-0002-3684-0605 \\ Joshua Faskowitz & 0000-0003-1814-7206 \\ Jody E. Finch & 0000-0003-2457-1345 \\ Guillaume Flandin & 0000-0003-0077-7859 \\ Matthew W. Flounders & 0000-0001-7014-4665 \\ Leon Fonville & 0000-0001-8874-7843 \\ Summer Frandsen & \\ Dea Garic & 0000-0003-3595-4210 \\ Patricia Garrido-Vásquez & 0000-0002-9561-8983 \\ Gabriel Gonzalez-Escamilla & 0000-0002-7209-1736 \\ Shannon E. Grogans & 0000-0003-0383-4601 \\ Mareike Grotheer & 0000-0002-8653-1157 \\ David C. Gruskin & 0000-0001-6504-191X \\ Guido I. Guberman & \\ Edda Briana Haggerty & 0000-0003-0597-7956 \\ Younghee Hahn & \\ Elizabeth H. Hall & \\ Jamie L. Hanson & 0000-0002-0469-8886 \\ Yann Harel & 0000-0002-8970-1983 \\ Bruno Hebling Vieira & 0000-0002-8770-7396 \\ Meike D. Hettwer & 0000-0002-7973-6752 \\ Corey Horien & 0000-0001-6738-1029 \\ Fan Huang & \\ Zeeshan M. Huque & \\ Anthony R. James & 0000-0002-5297-2229 \\ Isabella Kahhale & 0000-0002-0963-9738 \\ Sarah L. H. Kamhout & \\ Arielle S. Keller & 0000-0003-4708-1672 \\ Harmandeep Singh Khera & 0000-0001-6840-4616 \\ Gregory Kiar & 0000-0001-8915-496X \\ Peter Alexander Kirk & 0000-0003-0786-3039 \\ Simon H. Kohl & 0000-0003-0949-6754 \\ Stephanie A. Korenic & \\ Cole Korponay & 0000-0003-2562-9617 \\ Alyssa K. Kozlowski & \\ Nevena Kraljevic & 0000-0003-0869-648X \\ Alberto Lazari & 0000-0002-8688-581X \\ Mackenzie J. Leavitt & 0000-0002-6100-3235 \\ Zhaolong Li & 0000-0003-2246-4116 \\ Giulia Liberati & 0000-0002-5684-4443 \\ Elizabeth S. Lorenc & 0000-0003-1311-726X \\ Annabelle Julina Lossin & 0000-0001-5921-1353 \\ Leon D. Lotter & 0000-0002-2337-6073 \\ David M. Lydon-Staley & 0000-0001-8702-3923 \\ Christopher R. Madan & 0000-0003-3228-6501 \\ Neville Magielse & 0000-0002-6777-4225 \\ Hilary A. Marusak & 0000-0002-0771-6795 \\ Julien Mayor & 0000-0001-9827-542 \\ Amanda L. McGowan & 0000-0003-3422-0135 \\ Kahini P. Mehta & \\ Steven Lee Meisler & 0000-0002-8888-1572 \\ Cleanthis Michael & 0000-0002-5300-473X \\ Mackenzie E. Mitchell & 0000-0002-0225-6320 \\ Simon Morand-Beaulieu & 0000-0002-5880-3688 \\ Benjamin T. Newman & 0000-0002-0668-2853 \\ Jared A. Nielsen & 0000-0002-2717-193X \\ Shane M. O'Mara & \\ Amar Ojha & 0000-0002-1038-0225 \\ Adam Omary & \\ Evren Özarslan & 0000-0003-0859-1311 \\ Linden Parkes & 0000-0002-9329-7207 \\ Madeline Peterson & \\ Adam Robert Pines & \\ Claudia Pisanu & 0000-0002-9151-4319 \\ Ryan R. Rich & 0000-0001-9495-3184 \\ Ashish K. Sahoo & 0000-0003-1815-6655 \\ Amjad Samara & 0000-0002-6001-7395 \\ Farah Sayed & \\ Jonathan Thore Schneider & 0000-0002-1925-6669 \\ Lindsay S. Shaffer & 0000-0002-0642-1717 \\ Ekaterina Shatalina & 0000-0001-8900-0792 \\ Sara A. Sims & 0000-0001-7107-1891 \\ Skyler Sinclair & 0000-0003-3010-6431 \\ Jae W. Song & 0000-0002-3127-6427 \\ Griffin Stockton Hogrogian & 0000-0003-2877-078X \\ Christian K. Tamnes & 0000-0002-9191-6764 \\ Ursula A. Tooley & 0000-0001-6377-3885 \\ Vaibhav Tripathi & \\ Hamid B. Turker & 0000-0002-2670-4036 \\ Sofie Louise Valk & 0000-0003-2998-6849 \\ Matthew B. Wall & 0000-0002-0493-6274 \\ Cheryl K. 
Walther & \\ Yuchao Wang & 0000-0001-9871-3006 \\ Bertil Wegmann & 0000-0003-2193-6003 \\ Thomas Welton & 0000-0002-9503-2093 \\ Alex I. Wiesman & 0000-0003-0917-1570 \\ Andrew G. Wiesman & \\ Mark Wiesman & \\ Drew E. Winters & 0000-0002-0701-9658 \\ Ruiyi Yuan & \\ Sadie J. Zacharek & 0000-0001-8770-4614 \\ Chris Zajner & 0000-0002-0204-6497 \\ Ilya Zakharov & 0000-0001-7207-9641 \\ Gianpaolo Zammarchi & 0000-0002-9733-380X \\ Dale Zhou & 0000-0001-9240-1327 \\ Benjamin Zimmerman & 0000-0003-2570-8198 \\ Kurt Zoner & \\
\end{longtable}
\section{Deep learning model architectures and loss curves}
\label{app:deep-learning-architectures}
Both the CNN-i and CNN-i+q models were implemented in Tensorflow 2 \cite{tensorflow} using the Keras module \cite{keras}. The image processing part of the model architecture was identical for both models: a modification of an existing 3D CNN \cite{zunair2020-bs} previously applied to assess tuberculosis severity \cite{dicente2019clef}. It accepts a 3D volume as input with four channels
\begin{enumerate*}[%
label=(\roman*),%
before=\unskip{: },%
itemjoin={{, }},%
itemjoin*={{ and }}]
\item the $b=0$ reference volume
\item DEC-FA in the $x$-direction
\item DEC-FA in the $y$-direction
\item DEC-FA in the $z$-direction.
\end{enumerate*}
For the CNN-i+q model, \emph{QSIPrep}'s automated QC metrics were included as an additional fifth channel. The CNN-i+q model architecture is summarized in Figure~\ref{fig:dl-architecture}. Upon input, the CNN-i+q model extracts the imaging channels and passes them through the CNN architecture. The remaining data quality metrics channel is flattened and passed ``around'' the CNN architecture and concatenated with the output of the convolutional layers. This concatenated output is then passed through a fully-connected layer to produce a single output, the probability of passing QC. This architecture has \num{1438783} trainable parameters.
\begin{figure}[tbp]
\begin{subfigure}[t]{0.6\linewidth}
\centering
\includegraphics[width=\linewidth]{deep-learning-qc/model.pdf}
\caption{Slicing and combining the input channels}
\label{fig:dl-architecture:complete}
\end{subfigure}
\begin{subfigure}[t]{0.4\linewidth}
\centering
\includegraphics[width=\linewidth]{deep-learning-qc/image_model.pdf}
\caption{CNN architecture}
\label{fig:dl-architecture:cnn}
\end{subfigure}
\caption{%
{\bf Deep learning model architecture}: \textbf{(a)} The CNN-i+q model accepts multichannel input that combines four imaging channels with a fifth channel containing 31 \emph{QSIPrep} automated data quality metrics. The imaging channels are separated from the data quality channel using \texttt{Lambda} layers. The imaging channels are passed through a CNN \textbf{(b)}, the output of which is concatenated with the data quality metrics, batch normalized and passed through two fully-connected (FC) layers, with rectified linear unit (ReLU) activation functions and with 512 and 128 units, respectively. Each FC layer is followed by a dropout layer which drops 40\% of the input units. The final layer contains a single unit with a sigmoid activation function and outputs the probability of passing QC.
%
\textbf{(b)} The CNN portion of the model passes the imaging input through four convolutional blocks. Each block consists of a 3D convolutional layer with a kernel size of 3 and a ReLU activation, a 3D max pooling layer with a pool size of 2, and a batch normalization layer with Tensorflow's default parameters. The convolutional layers in the four blocks have 64, 64, 128, and 256 filters, respectively. The output of the final block is passed through a 3D global average pooling layer with Tensorflow's default parameters.
}
\label{fig:dl-architecture}
\end{figure}
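For readers who prefer code to diagrams, the following Keras sketch captures the main elements of this architecture. It is an illustration rather than the released implementation: the spatial dimensions of the input volume are placeholders, the imaging channels and data quality metrics are shown as two separate inputs instead of being sliced from a single tensor with \texttt{Lambda} layers, layer arguments not stated above use Keras defaults, and the resulting parameter count will therefore differ somewhat from the released model.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn_iq(volume_shape=(128, 128, 128), n_qc_metrics=31):
    """Sketch of the CNN-i+q architecture; volume_shape is a placeholder."""
    image_in = layers.Input(shape=volume_shape + (4,), name="imaging_channels")
    qc_in = layers.Input(shape=(n_qc_metrics,), name="qc_metrics")

    # Four convolutional blocks: Conv3D (kernel 3, ReLU), max pooling
    # (pool 2), and batch normalization, with 64, 64, 128, and 256 filters
    x = image_in
    for n_filters in (64, 64, 128, 256):
        x = layers.Conv3D(n_filters, kernel_size=3, activation="relu")(x)
        x = layers.MaxPool3D(pool_size=2)(x)
        x = layers.BatchNormalization()(x)
    x = layers.GlobalAveragePooling3D()(x)

    # Concatenate the image features with the data quality metrics,
    # then apply two fully-connected layers with dropout
    x = layers.Concatenate()([x, qc_in])
    x = layers.BatchNormalization()(x)
    for n_units in (512, 128):
        x = layers.Dense(n_units, activation="relu")(x)
        x = layers.Dropout(0.4)(x)
    out = layers.Dense(1, activation="sigmoid")(x)

    return tf.keras.Model(inputs=[image_in, qc_in], outputs=out)
\end{verbatim}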
To estimate the variability in model training, we trained ten separate models using different training and validation splits of the data. The gold standard dataset was not included in any of these splits and was reserved for reporting final model performance. Models were optimized for binary crossentropy loss using the Adam optimizer \cite{kingma2017adam} with an initial learning rate of 0.0001. We reduced the learning rate by a factor of 0.5 when the validation loss plateaued for more than two epochs. We also stopped training when the validation loss failed to improve by more than 0.001 for twenty consecutive epochs. These two adjustments were made using the \texttt{ReduceLROnPlateau} and \texttt{EarlyStopping} callbacks in Tensorflow 2 \cite{tensorflow}, respectively. The training and validation loss curves for both the CNN-i and CNN-i+q models are depicted in Figure~\ref{fig:dl-loss}. While the CNN-i+q model achieved better validation loss, it did not outperform the CNN-i model on the held out gold standard dataset.
\begin{figure}[tbp]
\hfill
\includegraphics[width=0.45\linewidth]{deep-learning-qc/dl_learning_curve_with_qc.pdf}
\hfill
\includegraphics[width=0.45\linewidth]{deep-learning-qc/dl_learning_curve_without_qc.pdf}
\hfill
\caption[Deep learning model loss curves]{%
{\bf Deep learning model loss curves}: The binary cross-entropy loss (top), accuracy (middle), and ROC-AUC (bottom) for \textbf{(a)} the CNN-i+q model and \textbf{(b)} the CNN-i model. Model performance typically plateaued after twenty epochs but was allowed to continue until meeting the early stopping criterion. The error bands represent a bootstrapped 95\% confidence interval.
}
\label{fig:dl-loss}
\end{figure}
\section{Bundle profiles}
\label{app:profiles}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We plot mean diffusivity tract profiles (MD, Figure~\ref{fig:qc-profiles:md}) and fractional anisotropy profiles (FA, Figure~\ref{fig:qc-profiles:fa}) grouped into four QC bins along the length of twenty-four bundles. While some bundles, such as the cingulum cingulate (CGC) and the inferior longitudinal fasciculus (ILF), appear insensitive to QC score, others, such as the uncinate (UNC) and the orbital portion of the corpus callosum, exhibit strong differences between QC bins. In most bundles, low QC scores tend to flatten the MD profile, indicating that MD appears artifactually homogeneous across the bundle.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{bundle-profiles/qc-bins-dki-md.pdf}
\caption{%
{\bf MD bundle profiles show large QC group differences}: MD profiles binned by QC score in twenty-four major white matter bundles. The $x$-axis represents distance along the length of the fiber bundle.
%
The left and right uncinate bundles were the most sensitive to QC score. Generally, low QC scores tended to flatten bundle profiles.
%
Error bands represent bootstrapped 95\% confidence intervals. Bundle abbreviations for lateralized bundles contain a trailing ``L'' or ``R'' indicating the hemisphere. Bundle abbreviations: inferior fronto-occipital fasciculus (IFO), uncinate (UNC), thalamic radiation (ATR), corticospinal (CST), arcuate (ARC), superior longitudinal fasciculus (SLF),
inferior longitudinal fasciculus (ILF), cingulum cingulate (CGC), orbital corpus callosum (Orbital), anterior frontal corpus callosum (AntFrontal), superior frontal corpus callosum (SupFrontal), motor corpus callosum (Motor), superior parietal corpus callosum (SupParietal), temporal corpus callosum (Temporal), post-parietal corpus callosum (PostParietal), and occipital corpus callosum (Occipital).
}
\label{fig:qc-profiles:md}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{bundle-profiles/qc-bins-dki-fa.pdf}
\caption[FA bundle profiles]{%
{\bf FA bundle profiles binned by QC score}: FA profiles binned by QC score in twenty-four major white matter bundles. The $x$-axis represents distance along the length of the fiber bundle. Error bands represent bootstrapped 95\% confidence intervals. Bundle abbreviations are as in Figure~\ref{fig:qc-profiles:md}.
}
\label{fig:qc-profiles:fa}
\end{figure}
\end{document}
\subsubsection*{Design} As we want our bit blitter to be nothing more than the memory-to-memory blocking in the Atari ST. We are going to split this process up into two parts. One for the font drawing and one for the sprite drawing, as they have minor differences in how the data looks like. Both of the will consist of eight by eight pixel information that has to be copied, the difference between them is that the font only consists of an alpha mask, the area which will be drawn, and one color for the letter to be drawn in. The sprite on the other hand has beside the alpha mask, the sprite data that needs to be drawn. This consists of an eight by eight grid of color information. \subsubsection*{Implementation} A separate object is made to copy a letter from the font into the frame buffer. In its state machine we differentiate between the idle state and the copy state, seen in \cref{fontfsm}. When the idle state gets triggered it waits till the start flag is set, similar as in \cref{subsec:des_bresenham}. Once it is triggered to draw it sets a counter to zero and switches into the copy state. \begin{lstlisting}[language=scala, caption={States of the Font Blitter}, label=fontfsm] val idle = new State with EntryPoint val copy = new State \end{lstlisting} In the copy state we evaluate each pixel of the alpha mask in a big switch-case. When the value of the pixel is set to high we write the color information for the corresponding pixel. When the whole alpha mask is done, for our case when the counter reaches 63, we go back into the idle state. to know to which pixel to draw to, the first three bits act as a part of the x-coordinate and the last three bits count for y-coordinate. \begin{lstlisting}[language=scala, caption={Copying the sprite color information into the frame buffer}, label={blitbuffer}] for (i <- 4 to 67) { is(i) { vga.io.wData(0) := buffer(10 downto 0).resized vga.io.wData(1) := buffer(21 downto 11).resized vga.io.wData(2) := buffer(31 downto 22).resized vga.io.wAddress := (!switchVGA ## (storeVals1(1) + temp(5 downto 3)) ## (storeVals1(0) + temp(2 downto 0))).asUInt vga.io.wValid := alpha(63 - (i - 4)) counter := counter + 1 valid := True goto(readRam) } } \end{lstlisting} The sprite blitter is part of our core, which is discussed in \cref{subsec:core}. The way we have to copy the data functions similarly to how it is done in the font copying. We have to check if the alpha mask is set at a certain pixel. If it is we can draw the pixel of the sprite, this can be seen in \cref{blitbuffer}. Here we also use a switch-case to check the corresponding pixels.
\chapter{Conclusions}

I have successfully developed a pleiotropy test for multiparental populations. I discussed our new methods in Chapter~\ref{sec:testing}. In developing this test, our novel contributions included the accommodation of multiple alleles and the incorporation of polygenic random effects to account for complicated patterns of relatedness. In Chapter~\ref{sec:applications}, we illustrated the test's use in three vignettes. The first vignette compared pleiotropy testing with mediation analysis in the dissection of expression trait QTL hotspots. I learned that the pleiotropy test provides information about the number of underlying QTL even when mediation analyses don't identify intermediates. Pleiotropy testing also serves as a useful screen before applying mediation analyses to a collection of putative intermediates. The second vignette examined my test's power to detect separate QTL in pairs of local expression traits. I learned that both interlocus distance and univariate LOD scores affect test statistic values. I was unable to find a strong relationship between allele effects patterns and statistical power. In the last vignette, I applied my test to two gut microbiome-related traits. From this analysis, I learned that the two traits share a pleiotropic QTL and, thus, it is reasonable to conduct further causal modeling studies for these two traits. Chapter~\ref{sec:software} demonstrates features of the \texttt{qtl2pleio} R package. \texttt{qtl2pleio} provides functions for multi-dimensional, multi-QTL scans. It also creates profile LOD plots and performs bootstrap tests to get p-values for the pleiotropy test statistics. This package uses the data structures of the R package \texttt{qtl2} \citep{broman2019rqtl2}. I now conclude the thesis with brief discussions of limitations and future research.

\section{Limitations}

\subsection{A $d$-variate pleiotropy test}

Pleiotropy tests for two traits at a time have a valuable role in complex trait genetics. However, to fully use the tens of thousands of experimentally measured traits, we need to consider testing more than two traits at a time. Suppose that five traits map to a single region spanned by 100 markers. One might perform a series of $\binom{5}{2} = 10$ bivariate QTL scans and 10 pairwise tests for pleiotropy. Each bivariate scan would require $100^2 = 10,000$ model fits by generalized least squares. Alternatively, one could perform a $d$-variate QTL scan, with $d=5$ in this case. With the results of the $d$-variate scan, a variety of statistical hypotheses could be tested. For example, one could formulate a test of the null hypothesis that all five traits share a pleiotropic QTL against the alternative that the first two traits share a single QTL and the last three traits share a distinct, pleiotropic QTL. The $d$-variate scan over the 100-marker region would require $100^d = 100^5 = 10$ billion model fits via generalized least squares. With distributed computing resources, including the resources at the University of Wisconsin's Center for High-throughput Computing, this is not an unreasonable volume of computing. The use of C++, instead of R, for the generalized least squares calculations decreases the computing time for each model fit. My \texttt{qtl2pleio} R package contains code that performs $d$-variate, $d$-QTL scans for a genomic region. In this thesis, I set $d = 2$ for all analyses, yet the code and theory accommodate $d > 2$.
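To put numbers on this comparison, here is a small arithmetic sketch in plain Python (not part of \texttt{qtl2pleio}, which is written in R and C++) that counts the generalized least squares fits required by the pairwise strategy and by a single $d$-variate scan.

\begin{verbatim}
from math import comb

n_markers = 100  # markers spanning the region
n_traits = 5     # traits mapping to the region

# Pairwise strategy: one bivariate scan per trait pair, each scan
# fitting every point of the marker-by-marker grid.
pairwise_fits = comb(n_traits, 2) * n_markers ** 2  # 10 * 10,000 = 100,000

# Single d-variate scan: one fit per point of the d-dimensional marker grid.
d_variate_fits = n_markers ** n_traits  # 100^5 = 10,000,000,000

print(pairwise_fits, d_variate_fits)
\end{verbatim}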
The major hurdle in performing $d$-variate, $d$-QTL scans, as the above calculations suggest, is the computing time. Yet, even without modifying the current \texttt{qtl2pleio} code base, I can use computing clusters to complete multivariate, multi-QTL scans in reasonable time periods when $d$ is 3, 4, or 5. Before applying the $d$-variate pleiotropy test to experimental data, I would characterize its statistical properties, as I did for the bivariate test in Chapter~\ref{sec:testing}. I would examine power and type I error rate for a variety of settings and distinct values of $d$.

\subsection{Pleiotropy test power and allele effects patterns}

Based on findings from \citet{macdonald2007joint} and \citet{king2012genetic}, I anticipated finding a stronger relationship between allele effects patterns and pleiotropy test power (Section~\ref{sec:power-analyses} and Figure~\ref{fig:cor}). \citet{macdonald2007joint} and \citet{king2012genetic} argue that similar allele effects patterns for two traits in multiparental populations favor pleiotropy over separate QTL when the QTL is bi-allelic. I would like to investigate this question with simulated traits for which the true genetic architecture is known. In Section~\ref{sec:power-analyses}, we used experimental data, where I don't know the true number of QTL alleles. To address this question, I would perform a simulation study. I would first study pleiotropic (simulated) traits to see whether they show evidence of similar allele effects, as measured by the correlation between fitted values, when the QTL is bi-allelic. Because the Diversity Outbred mice have eight alleles at every locus, I want to consider the 22 partitions of eight alleles. The partition number (22 for eight objects) is the number of ways to divide a collection of unlabeled objects into nonempty subsets. Because I am concerned merely with the number of objects in each subset, rather than the labels of the objects, I only need to examine the 22 partitions. For example, one of the 22 partitions of eight objects has two subsets, where one subset contains one allele and the other contains seven alleles. A second of the 22 partitions of eight objects has eight subsets, each containing exactly one allele.

\section{Future research}

\subsection{Selection bias}

Selection bias is a known concern in QTL studies \citep{lande1990efficiency}. Sometimes termed the ``Beavis effect'', after a researcher who described it in QTL studies \citep{beavis1991quantitative,beavis1994power}, selection bias arises when characterizing the QTL effect on the trait of interest. Given that a QTL is discovered, the estimated effect, in terms of the proportion of trait variance explained, tends to overestimate the true effect \citep{broman2009guide}. Additionally, the number of detected QTL is biased downward in a genome-wide study \citep{beavis1998qtl}. Although the effect was originally described in two-parent crosses, \citet{king2017beavis} found evidence for the Beavis effect in multiparental \emph{Drosophila melanogaster} populations, and QTL studies in Diversity Outbred mice likely exhibit similar phenomena. \citet{xu2003theoretical} attributed the Beavis effect to the observation that QTL are only reported when the evidence in favor of a QTL exceeds a quantitative threshold. When \citet{xu2003theoretical} considered the appropriate truncated distributions, the experimental findings agreed with the theoretical expectations.
In my studies, I identify traits of interest as pairs in which each trait shows sufficiently strong evidence of a univariate QTL in a single genomic region. Identifying the traits of interest, then, is subject to the Beavis effect. To my knowledge, the impact of the Beavis effect on pleiotropy testing has not been studied, and the direction of its effect on my pleiotropy test remains unclear at the time of this writing. Recall that our pleiotropy test statistic is the difference in log likelihood values under the alternative and under the null hypothesis. If there truly are two distinct QTL for a pair of traits, then the Beavis effect may inflate the pleiotropy test statistic, because the maximum log likelihood in the two-dimensional grid may be more inflated than the maximum log likelihood along the diagonal (\emph{i.e.}, under pleiotropy). This question could be studied with simulated phenotypes using the genotypic data from \citet{keller2018genetic}.

\subsection{Determining significance thresholds for LOD difference and LOD difference proportion statistics}

One methodological question that arose in Chapter~\ref{sec:applications} is that of determining significance thresholds in mediation analyses. \citet{chick2016defining}, in a landmark investigation of mediation methods for systems genetics studies, approximated the null distribution of the LOD difference statistic with an empirical distribution of sham intermediates. \citet{keller2018genetic} used an arbitrary threshold of 1.5 for declaring LOD difference statistics significant. To accommodate signals of differing strengths, I calculated LOD difference proportion statistics for the traits that \citet{keller2018genetic} studied. However, I made no effort to determine a significance threshold for LOD difference proportion statistics.

To address this issue, one could borrow from \citet{chick2016defining} the idea of using sham mediators and calculate the LOD difference proportion for each sham mediator. The collection of LOD difference proportion statistics for sham mediators would provide an empirical null distribution against which to compare the observed statistics. Instead of using experimentally obtained sham mediators, one could also simulate sham mediators.

\subsection{Collapsing eight alleles to two to enhance power}

\citet{qtl2pattern} developed the R package \texttt{qtl2pattern}, in which he collapses the eight founder allele dosages into two ``pattern probabilities''. The assumption here is that, for some genomic regions, there may be only two alleles at each marker. If I can recognize the binary ``SNP distribution pattern'' for the genomic region, I can then collapse the eight alleles into two groups. For example, it may be that the A, B, C, D, E, and F lines all have the same alleles at a sequence of markers, while G and H share a different set of alleles over the same interval. I would then partition the eight founder alleles into two groups, 1.~ABCDEF and 2.~GH, and determine the pattern probabilities. It might be that my pleiotropy test's power would increase if I used the binary allele pattern probabilities instead of the eight founder allele dosages. At present, this is an open research question. I would begin with a simulation study using Diversity Outbred mouse genotypic data from \citet{keller2018genetic}. With a collection of simulated traits, I would know the true allele patterns.
I would then compare statistical power when I perform my pleiotropy test with each of the two encodings of the genotype data: the eight founder allele dosages and the pattern probabilities.

\subsection{Multiple testing in mediation analysis}

I learned in Chapter~\ref{sec:applications} that my pleiotropy test is a useful screen before performing mediation analyses. One application of this fact is in reducing the number of mediation analyses when examining an expression QTL hotspot. Performing fewer mediation analyses leads to a less strict significance threshold for the LOD difference and related test statistics. I envision a procedure in which one performs a collection of pleiotropy tests pairing local expression traits with nonlocal expression traits, as I did in Section~\ref{sec:hotspot-dissection}. With the results of these tests, one may identify pairs of local and nonlocal expression traits that are consistent with a single pleiotropic locus. Mediation analysis would then be performed only for those trait pairs that arise from pleiotropic loci. In this manner, I would reduce the number of mediation analyses and lessen the inflation of the family-wise error rate; a toy calculation of this effect follows.
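As a toy illustration only, the sketch below assumes a Bonferroni correction, which this thesis does not prescribe, and uses hypothetical pair counts to show how screening with the pleiotropy test loosens the per-test significance threshold for the subsequent mediation statistics.

\begin{verbatim}
# Toy illustration: Bonferroni is assumed here only for concreteness,
# and the pair counts are hypothetical.
alpha = 0.05
n_candidate_pairs = 1000   # hypothetical local-nonlocal trait pairs
n_pleiotropic_pairs = 80   # hypothetical pairs passing the pleiotropy screen

threshold_without_screen = alpha / n_candidate_pairs  # 5.0e-05
threshold_with_screen = alpha / n_pleiotropic_pairs   # 6.25e-04

print(threshold_without_screen, threshold_with_screen)
\end{verbatim}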
\subsection{Policies}

A policy maps each state onto an action: \(a_t=\pi (s_t)\).

The policy does not need to change over time, because discounting is constant: if the policy ought to be different at some point in the future, it ought to be different now as well.

The policy determines which transitions are taken, so under a fixed policy \(\pi \) we write the induced transition model as \(P_\pi \).

\subsubsection{Optimal policy}

There exists a policy that is at least as good as every other policy, regardless of the starting state.

There is no closed-form solution for finding the optimal policy. There are instead iterative methods, such as the value iteration sketch below.
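As a minimal sketch of one such iterative method, the following value iteration code runs on a small randomly generated tabular problem; the state and action counts, rewards, and discount factor are illustrative assumptions rather than anything defined in these notes.

\begin{verbatim}
import numpy as np

# Minimal value iteration sketch on an illustrative tabular problem.
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

V = np.zeros(n_states)
for _ in range(1000):
    # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # a stationary policy: a_t = pi(s_t)
print(policy, V)
\end{verbatim}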
\documentclass[10pt,a4paper,oneside]{scrartcl} \usepackage[latin1]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{makeidx} \usepackage{graphicx} \usepackage{booktabs} \usepackage[ style=ieee ] {biblatex} \usepackage{mathtools} \author{} \title{Storage State} \date{} \addbibresource{~/modules/References.bib} \begin{document} \maketitle \paragraph{Notation}: \texttt{ss} \paragraph{Description}: The storage state refers to information particular to a given account while that account's associated EVM code runs. \printbibliography \end{document}
\chapter{REQUIREMENTS AND SPECIFICATION}
\label{chap:requirements}

% GradeCalc aims to be the first choice of students. To be in that precious spot, it will have to fit the students' needs perfectly. The following are the requirements I collected by asking colleagues and filtering brainstormed ideas.

This chapter defines and explains the requirements of the application. They were obtained by analyzing how students manage their grades and devising solutions that would help them be more productive, and by directly asking potential users what they would want in the app.

\input{sections/requirements-funct}
\input{sections/requirements-non}
\input{sections/conceptual-model}
\input{sections/tasks}
\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage{subfig}
\usepackage{hyperref}
\usepackage{float}
\usepackage[margin=1in]{geometry}

\title{CarND Behavioral Cloning Project Writeup}
\author{Tiffany Huang}
\date{\today}

\begin{document}
\maketitle

\section{Behavioral Cloning Project}
This project consists of the following steps/tasks:
\begin{itemize}
\item {Use the simulator to collect data of good driving behavior.}
\item {Build a convolutional neural network in Keras that predicts steering angles from images.}
\item {Train and validate the model with a training and validation set.}
\item {Test that the model successfully drives around track one without leaving the road.}
\item {Summarize the results in a written report.}
\end{itemize}

My project code can be found here: \url{https://github.com/tahuang/Behavioral_Cloning}. Next, I will consider each of the rubric points individually and describe how I addressed each point in my implementation.

\section{Files Submitted \& Code Quality}
My project includes the following files:
\begin{itemize}
\item {\texttt{model.py} containing the script to create and train the model}
\item {\texttt{drive.py} for driving the car in autonomous mode}
\item {\texttt{model.h5} containing the trained convolutional neural network}
\item {\texttt{writeup\_report.pdf} summarizing the results}
\end{itemize}

Using the Udacity-provided simulator and my \texttt{drive.py} file, the car can be driven autonomously around the track by executing: \texttt{python drive.py model.h5}. The \texttt{model.py} file contains the code for training and saving the convolutional neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.

\section{Model Architecture and Training Strategy}
\subsection{Final Model Architecture}
My final model is outlined in lines 83-95 of \texttt{model.py}. Preprocessing of the data is done by normalizing the images with a Keras lambda layer (line 78). The input images are also cropped to remove irrelevant pixels in the driving images that could distract the model (line 80). The final model consists of the following layers:

\begin{center}
\begin{tabular}{|c|p{10cm}|}
\hline
\textbf{Layer} & \textbf{Description} \\
\hline
Input & 160 x 320 x 3 RGB Image \\
\hline
Convolutional & 24 5 x 5 filters, 2 x 2 stride, VALID padding, ReLU activation \\
\hline
Convolutional & 36 5 x 5 filters, 2 x 2 stride, VALID padding, ReLU activation \\
\hline
Convolutional & 48 5 x 5 filters, 2 x 2 stride, VALID padding, ReLU activation \\
\hline
Convolutional & 64 3 x 3 filters, 1 x 1 stride, VALID padding, ReLU activation \\
\hline
Convolutional & 64 3 x 3 filters, 1 x 1 stride, VALID padding, ReLU activation \\
\hline
Flatten & \\
\hline
Fully Connected & Outputs 100 x 1 \\
\hline
Dropout & Drop 0.2 (one fifth) of samples \\
\hline
Fully Connected & Outputs 50 x 1 \\
\hline
Dropout & Drop 0.2 (one fifth) of samples \\
\hline
Fully Connected & Outputs 10 x 1 \\
\hline
Dropout & Drop 0.2 (one fifth) of samples \\
\hline
Fully Connected & Outputs 1 x 1 \\
\hline
\end{tabular}
\end{center}

\subsection{Attempts to Reduce Overfitting}
The model contains dropout layers in order to reduce overfitting (lines 90, 92, 94). The model was tested by running it through the simulator and ensuring that the vehicle stayed on the track for at least one lap.

\subsection{Model Parameter Tuning}
The model used an Adam optimizer, so the learning rate was not tuned manually (line 97).
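For reference, the architecture above can be sketched in Keras as follows. This is a reconstruction from the table and the preprocessing described in this writeup, not a verbatim excerpt of \texttt{model.py}; in particular, the exact normalization formula is assumed, and the cropping amounts are the ones given later in the data section.

\begin{verbatim}
from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense, Dropout

model = Sequential()
# Normalize pixels (assumed formula) and crop sky/hood pixels.
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
model.add(Cropping2D(cropping=((50, 20), (0, 0))))
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(100))
model.add(Dropout(0.2))
model.add(Dense(50))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
\end{verbatim}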
\subsection{Appropriate Training Data}
Training data was chosen to keep the vehicle on the road. I used a combination of center lane driving and recovering from the left and right sides of the road. More details can be found in the next section.

\section{Model Architecture \& Training Strategy}
\subsection{Solution Design Approach}
The overall strategy for deriving a model architecture was to try some well-known architectures and tweak them to get the car successfully driving around the track.

My first step was to use a convolutional neural network similar to LeNet [1], since I had used it before as the basis for the Traffic Sign Classifier project. This model could be appropriate because the inputs in this problem are also RGB images, and it is a powerful network that has proven successful for various tasks. In order to gauge how well the model was working, I split my image and steering angle data into a training and validation set. I found that my first model had a low mean squared error on the training set but a high mean squared error on the validation set, which implied that the model was overfitting.

To combat the overfitting, I modified the model by adding a few dropout layers between the fully connected layers near the end of the network. However, the car was still struggling with certain sections, so I decided to try the even more powerful Nvidia architecture [2], because this architecture has been used for a very similar application, namely driving a car autonomously. After changing the network architecture, I kept the dropout layers to reduce overfitting. However, there were still some spots without a barrier where the vehicle drove off the road. To improve the driving behavior in these cases, I added more training data to reinforce staying on the road: I flipped the left and right camera images in addition to the center ones to increase the amount of data, and I recorded 2 more laps of driving, including some driving in which I recovered back to the center of the road from the left or right side.

After adding more training data, performance significantly improved. There was still a small case where the car would roll up onto the ledge in the right turn after the bridge, on the section of road with red and white stripes on the sides. To improve this behavior, I tuned the steering angle correction for the left and right side camera images. At the end of the process, the vehicle can drive autonomously around the track without leaving the road!

\subsection{Creation of the Training Set \& Training Process}
To capture good driving behavior, I first recorded one lap around the track using center lane driving. Here is an example of center lane driving:

\begin{figure}[h]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_11_29_20_11_58_441.jpg}
\caption{Center lane driving}
\end{figure}
\vspace{50mm}

After a few iterations of building my network, I recorded 2 more laps around the track and also recorded the vehicle recovering from the left and right side of the track.
Here are some examples of recoveries from the side of the track:

\begin{figure}[!h]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_08_10_962.jpg}
\caption{Recovery from left side part 1}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_08_11_370.jpg}
\caption{Recovery from left side part 2}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_08_11_643.jpg}
\caption{Recovery from left side part 3}
\end{figure}

\begin{figure}[!h]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_08_14_315.jpg}
\caption{Recovery from right side part 1}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_08_15_417.jpg}
\caption{Recovery from right side part 2}
\end{figure}
\vspace{50mm}

\begin{figure}[h]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_08_16_785.jpg}
\caption{Recovery from right side part 3}
\end{figure}

To help get a slightly larger variety of data, I added the left and right camera images and used the center steering angle measurement plus or minus a correction as the label. Here are examples of the left, center, and right camera images:

\begin{figure}[!h]
\centering
\includegraphics[scale = 1]{data/IMG/left_2017_12_21_20_07_29_607.jpg}
\caption{Left camera image}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_07_29_607.jpg}
\caption{Center camera image}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[scale = 1]{data/IMG/right_2017_12_21_20_07_29_607.jpg}
\caption{Right camera image}
\end{figure}

I also augmented the data set by flipping all the camera images and negating the corresponding steering angle measurements. Here is an example of a flipped image:

\begin{figure}[H]
\centering
\includegraphics[scale = 1]{data/IMG/center_2017_12_21_20_06_58_582.jpg}
\caption{Original image}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[scale = 1]{flipped_center_2017_12_21_20_06_58_582.jpg}
\caption{Flipped image}
\end{figure}

After the collection process, I had 24,902 images. I preprocessed this data by normalizing it and cropping 50 pixels off the top and 20 off the bottom of each image. I shuffled the data and set aside 20\% of it for validation. I used the training data for training the model, while the validation set helped determine whether the model was over- or underfitting. The ideal number of epochs was 5, as evidenced by the following graph; after 5 epochs, performance starts to get worse. I used an Adam optimizer so that manually tuning the learning rate wasn't necessary.

\begin{figure}[H]
\centering
\includegraphics[scale = 1]{train_validation_loss.png}
\end{figure}

[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," \textit{Proc. IEEE}, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.

[2] M. Bojarski et al., "End to End Learning for Self-Driving Cars," arXiv preprint arXiv:1604.07316, 2016.

\end{document}
\documentclass[12pt,longbibliography]{revtex4-1}

\usepackage{ulem}
\usepackage{url}
\usepackage{epsfig}
\usepackage{graphicx,color}% Include figure files
\usepackage{epstopdf}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `basename #1 .tif`.png}
\usepackage[psamsfonts]{amssymb}
\usepackage{amsmath}
\usepackage{indentfirst}
%\usepackage{cite}

\newenvironment{revision}{\color{blue}}{\color{black}}

\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\ba}{\begin{eqnarray}}
\newcommand{\ea}{\end{eqnarray}}

\begin{document}

\title{\large Given Enough Eyeballs, All Bugs Are Shallow? \\ Revisiting Eric Raymond with Bug Bounty Programs}

\author{Thomas Maillart}
\email{[email protected]}
\affiliation{University of Geneva, Geneva, Switzerland}

\author{Mingyi Zhao}
%\email{}
\affiliation{Pennsylvania State University, USA}

\author{Jens Grossklags}
\affiliation{Technical University Munich, Germany}

\author{John Chuang}
\affiliation{University of California, Berkeley, USA}

\date{\today}

\begin{abstract}
\vspace{1cm}
\input{../sections/abstract}
\end{abstract}

\maketitle

%\input{sections/intro}
%\input{sections/background}
%\input{sections/methods}

\input{../sections/intro}
\input{../sections/related}
\input{../sections/data}
\input{../sections/method}
\input{../sections/results}
\input{../sections/discussion}
\input{../sections/conclusion}

%\subsubsection*{Acknowledgements}
%\acknowledgements{This research was supported in part by the National Science Foundation through award CCF-0424422 (TRUST - Team for Research in Ubiquitous Secure Technology). One of the authors (TM) acknowledges support from the Swiss National Science Foundation (SNSF; Grants PA00P2\_145368 and P300P2\_158462). The authors would like to thank Aron Laszka for his valuable comments, as well as the 3 anonymous reviewers who provided insightful comments and thus contributed to a considerable improvement of the manuscript.}

%\bibliographystyle{apsrev4-1}
\bibliography{../references}

\end{document}
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} 
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{The Notebook, but not that one starring Ryan Gosling } % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} 
\expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \begin{figure} \centering \includegraphics{attachment:Adele-Hello.jpg} \caption{Adele-Hello.jpg} \end{figure} \section{Topics}\label{topics} \begin{itemize} \tightlist \item Data Presentation \item Visualization \item Optimization \item Machine Learning \item Misc. \end{itemize} \section{Data Visualization}\label{data-visualization} \subsection{This is a Jupyter Notebook with RISE functionality}\label{this-is-a-jupyter-notebook-with-rise-functionality} \begin{figure} \centering \includegraphics{Pictures/jupyter.png} \caption{jupyter} \end{figure} \subsubsection{You might have noticed I'm running this in my browser. And yes, that does mean I can run it off a server (internal or external) and show off results to other people.}\label{you-might-have-noticed-im-running-this-in-my-browser.-and-yes-that-does-mean-i-can-run-it-off-a-server-internal-or-external-and-show-off-results-to-other-people.} \begin{figure} \centering \includegraphics{Pictures/browser.jpg} \caption{browser} \end{figure} \subsubsection{Jupyter is nice in that it allows for multiple programming languages. It's name is a portmanteau of Julia, Python and R. I use Python, but it normally looks like this:}\label{jupyter-is-nice-in-that-it-allows-for-multiple-programming-languages.-its-name-is-a-portmanteau-of-julia-python-and-r.-i-use-python-but-it-normally-looks-like-this} \begin{figure} \centering \includegraphics{Pictures/Spyder.jpg} \caption{spyder} \end{figure} Python has a lot of useful interactions with other programming languages that could be Hatch relevant \begin{itemize} \tightlist \item SQL is the most obvious one. I can pull results from queries directly into Pandas dataframes for analysis \item It also supports JS, so I could in theory run the D3.js libraries directly out of this for really nice visuals \item It is possible to generate HTML with Python as well \item Python programs can be made into applications which can be run on servers (e.g. 
live reporting) \end{itemize} Python has several cool visualization tools \begin{itemize} \tightlist \item Matplotlib \item Seaborn \item Bokeh \end{itemize} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}1}]:} \PY{c+c1}{\PYZsh{}Here\PYZsq{}s a fairly basic matplotlib histogram. This is live code.} \PY{c+c1}{\PYZsh{}This is just setup to get data in} \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd} \PY{n}{id\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}csv}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{C:/Users/Perry/.spyder\PYZhy{}py3/indeed\PYZus{}hw\PYZus{}data.csv}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{header}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{)} \PY{n}{cols} \PY{o}{=} \PY{n}{id\PYZus{}df}\PY{o}{.}\PY{n}{columns}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{astype}\PY{p}{(}\PY{n+nb}{str}\PY{p}{)} \PY{n}{colct} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{cols}\PY{p}{)}\PY{p}{)} \PY{n}{nancol} \PY{o}{=} \PY{p}{[}\PY{p}{]} \PY{c+c1}{\PYZsh{}missing revenue the same as zero } \PY{n}{id\PYZus{}df}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{revenue}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{id\PYZus{}df}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{revenue}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{fillna}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)} \PY{n}{id\PYZus{}age\PYZus{}ex} \PY{o}{=} \PY{n}{id\PYZus{}df}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{id\PYZus{}df}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{age}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{0}\PY{p}{]} \PY{n}{id\PYZus{}age\PYZus{}ex\PYZus{}sub} \PY{o}{=} \PY{n}{id\PYZus{}age\PYZus{}ex}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{id\PYZus{}age\PYZus{}ex}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{age}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZlt{}} \PY{l+m+mi}{0}\PY{p}{]} \PY{c+c1}{\PYZsh{}limit to only zero age } \PY{n}{id\PYZus{}age\PYZus{}ex} \PY{o}{=} \PY{n}{id\PYZus{}age\PYZus{}ex}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{id\PYZus{}age\PYZus{}ex}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{age}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{]} \PY{n}{id\PYZus{}df} \PY{o}{=} \PY{n}{id\PYZus{}df}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{id\PYZus{}df}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{age}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{l+m+mi}{0}\PY{p}{]} \PY{n}{id\PYZus{}df} \PY{o}{=} \PY{n}{id\PYZus{}df}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{id\PYZus{}df}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{revenue}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZlt{}} \PY{l+m+mi}{3500000000}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}2}]:} \PY{c+c1}{\PYZsh{}Now I can draw the figure } \PY{n}{num\PYZus{}bins} \PY{o}{=} \PY{l+m+mi}{100} \PY{n}{gdt} \PY{o}{=} \PY{n}{id\PYZus{}df}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{revenue}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{id\PYZus{}df}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{revenue}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{0}\PY{p}{]} \PY{n}{gdt} \PY{o}{=} \PY{n}{gdt}\PY{p}{[}\PY{n}{gdt} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{150000000}\PY{p}{]} \PY{n}{n}\PY{p}{,} \PY{n}{bins}\PY{p}{,} \PY{n}{patches} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{hist}\PY{p}{(}\PY{n}{gdt}\PY{p}{,} \PY{n}{num\PYZus{}bins}\PY{p}{,} \PY{n}{density} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{,} 
\PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{blue}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{alpha}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ticklabel\PYZus{}format}\PY{p}{(}\PY{n}{style}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{plain}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{scilimits}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Revenue}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Value}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Frequency}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{fig} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{gcf}\PY{p}{(}\PY{p}{)} \PY{n}{fig}\PY{o}{.}\PY{n}{set\PYZus{}size\PYZus{}inches}\PY{p}{(}\PY{l+m+mf}{18.5}\PY{p}{,} \PY{l+m+mf}{10.5}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{The Notebook, but not that one starring Ryan Gosling _files/The Notebook, but not that one starring Ryan Gosling _8_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}3}]:} \PY{c+c1}{\PYZsh{}We can quickly make this look nicer with Seaborn styles} \PY{k+kn}{import} \PY{n+nn}{seaborn} \PY{k}{as} \PY{n+nn}{sns} \PY{n}{sns}\PY{o}{.}\PY{n}{set}\PY{p}{(}\PY{p}{)} \PY{n}{n}\PY{p}{,} \PY{n}{bins}\PY{p}{,} \PY{n}{patches} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{hist}\PY{p}{(}\PY{n}{gdt}\PY{p}{,} \PY{n}{num\PYZus{}bins}\PY{p}{,} \PY{n}{density} \PY{o}{=} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{alpha}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Revenue}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Value}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Frequency}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{fig} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{gcf}\PY{p}{(}\PY{p}{)} \PY{n}{fig}\PY{o}{.}\PY{n}{set\PYZus{}size\PYZus{}inches}\PY{p}{(}\PY{l+m+mf}{18.5}\PY{p}{,} \PY{l+m+mf}{10.5}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{The Notebook, but not that one starring Ryan Gosling _files/The Notebook, but not that one starring Ryan Gosling _9_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}4}]:} \PY{c+c1}{\PYZsh{}It can be used to create some really nice looking visualizations} \PY{c+c1}{\PYZsh{}This is an example from their site (again this is live code)} \PY{n}{sns}\PY{o}{.}\PY{n}{set}\PY{p}{(}\PY{n}{style}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{white}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{rc}\PY{o}{=}\PY{p}{\PYZob{}}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{axes.facecolor}\PY{l+s+s2}{\PYZdq{}}\PY{p}{:} \PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}\PY{p}{\PYZcb{}}\PY{p}{)} \PY{n}{rs} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{RandomState}\PY{p}{(}\PY{l+m+mi}{1979}\PY{p}{)} \PY{n}{x} \PY{o}{=} \PY{n}{rs}\PY{o}{.}\PY{n}{randn}\PY{p}{(}\PY{l+m+mi}{500}\PY{p}{)} \PY{n}{g} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{tile}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ABCDEFGHIJ}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{,} 
\PY{l+m+mi}{50}\PY{p}{)} \PY{n}{df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n+nb}{dict}\PY{p}{(}\PY{n}{x}\PY{o}{=}\PY{n}{x}\PY{p}{,} \PY{n}{g}\PY{o}{=}\PY{n}{g}\PY{p}{)}\PY{p}{)} \PY{n}{m} \PY{o}{=} \PY{n}{df}\PY{o}{.}\PY{n}{g}\PY{o}{.}\PY{n}{map}\PY{p}{(}\PY{n+nb}{ord}\PY{p}{)} \PY{n}{df}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{n}{m} \PY{n}{pal} \PY{o}{=} \PY{n}{sns}\PY{o}{.}\PY{n}{cubehelix\PYZus{}palette}\PY{p}{(}\PY{l+m+mi}{10}\PY{p}{,} \PY{n}{rot}\PY{o}{=}\PY{o}{\PYZhy{}}\PY{o}{.}\PY{l+m+mi}{25}\PY{p}{,} \PY{n}{light}\PY{o}{=}\PY{o}{.}\PY{l+m+mi}{7}\PY{p}{)} \PY{n}{g} \PY{o}{=} \PY{n}{sns}\PY{o}{.}\PY{n}{FacetGrid}\PY{p}{(}\PY{n}{df}\PY{p}{,} \PY{n}{row}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{g}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{hue}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{g}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{aspect}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{o}{.}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{palette}\PY{o}{=}\PY{n}{pal}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{map}\PY{p}{(}\PY{n}{sns}\PY{o}{.}\PY{n}{kdeplot}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{clip\PYZus{}on}\PY{o}{=}\PY{k+kc}{False}\PY{p}{,} \PY{n}{shade}\PY{o}{=}\PY{k+kc}{True}\PY{p}{,} \PY{n}{alpha}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{lw}\PY{o}{=}\PY{l+m+mf}{1.5}\PY{p}{,} \PY{n}{bw}\PY{o}{=}\PY{o}{.}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{map}\PY{p}{(}\PY{n}{sns}\PY{o}{.}\PY{n}{kdeplot}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{clip\PYZus{}on}\PY{o}{=}\PY{k+kc}{False}\PY{p}{,} \PY{n}{color}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{w}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{lw}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{bw}\PY{o}{=}\PY{o}{.}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{map}\PY{p}{(}\PY{n}{plt}\PY{o}{.}\PY{n}{axhline}\PY{p}{,} \PY{n}{y}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{lw}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{clip\PYZus{}on}\PY{o}{=}\PY{k+kc}{False}\PY{p}{)} \PY{k}{def} \PY{n+nf}{label}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{color}\PY{p}{,} \PY{n}{label}\PY{p}{)}\PY{p}{:} \PY{n}{ax} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{gca}\PY{p}{(}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{text}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{o}{.}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{label}\PY{p}{,} \PY{n}{fontweight}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{bold}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{color}\PY{o}{=}\PY{n}{color}\PY{p}{,} \PY{n}{ha}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{left}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{va}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{center}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{transform}\PY{o}{=}\PY{n}{ax}\PY{o}{.}\PY{n}{transAxes}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{map}\PY{p}{(}\PY{n}{label}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{x}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{fig}\PY{o}{.}\PY{n}{subplots\PYZus{}adjust}\PY{p}{(}\PY{n}{hspace}\PY{o}{=}\PY{o}{\PYZhy{}}\PY{o}{.}\PY{l+m+mi}{25}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{set\PYZus{}titles}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{set}\PY{p}{(}\PY{n}{yticks}\PY{o}{=}\PY{p}{[}\PY{p}{]}\PY{p}{)} \PY{n}{g}\PY{o}{.}\PY{n}{despine}\PY{p}{(}\PY{n}{bottom}\PY{o}{=}\PY{k+kc}{True}\PY{p}{,} \PY{n}{left}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}4}]:} <seaborn.axisgrid.FacetGrid at 0x28f13269668> \end{Verbatim} \begin{center} \adjustimage{max 
size={0.9\linewidth}{0.9\paperheight}}{The Notebook, but not that one starring Ryan Gosling _files/The Notebook, but not that one starring Ryan Gosling _10_1.png} \end{center} { \hspace*{\fill} \\} \section{Bokeh is the one you'll be most interested in. It allows for the same sort of interactivity that D3.js does, but is in Python. I'd have to learn this, but it's a small lift since I do a lot of Python anyway}\label{bokeh-is-the-one-youll-be-most-interested-in.-it-allows-for-the-same-sort-of-interactivity-that-d3.js-does-but-is-in-python.-id-have-to-learn-this-but-its-a-small-lift-since-i-do-a-lot-of-python-anyway} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}5}]:} \PY{c+c1}{\PYZsh{}Here\PYZsq{}s an example from their site} \PY{k+kn}{from} \PY{n+nn}{bokeh}\PY{n+nn}{.}\PY{n+nn}{io} \PY{k}{import} \PY{n}{output\PYZus{}notebook} \PY{k+kn}{from} \PY{n+nn}{bokeh}\PY{n+nn}{.}\PY{n+nn}{plotting} \PY{k}{import} \PY{n}{figure}\PY{p}{,} \PY{n}{show} \PY{n}{N} \PY{o}{=} \PY{l+m+mi}{4000} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{random}\PY{p}{(}\PY{n}{size}\PY{o}{=}\PY{n}{N}\PY{p}{)} \PY{o}{*} \PY{l+m+mi}{100} \PY{n}{y} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{random}\PY{p}{(}\PY{n}{size}\PY{o}{=}\PY{n}{N}\PY{p}{)} \PY{o}{*} \PY{l+m+mi}{100} \PY{n}{radii} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{random}\PY{p}{(}\PY{n}{size}\PY{o}{=}\PY{n}{N}\PY{p}{)} \PY{o}{*} \PY{l+m+mf}{1.5} \PY{n}{colors} \PY{o}{=} \PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZsh{}}\PY{l+s+si}{\PYZpc{}02x}\PY{l+s+si}{\PYZpc{}02x}\PY{l+s+si}{\PYZpc{}02x}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}}\PY{p}{(}\PY{n+nb}{int}\PY{p}{(}\PY{n}{r}\PY{p}{)}\PY{p}{,}\PY{n+nb}{int}\PY{p}{(}\PY{n}{g}\PY{p}{)}\PY{p}{,}\PY{l+m+mi}{150}\PY{p}{)} \PY{k}{for} \PY{n}{r}\PY{p}{,} \PY{n}{g} \PY{o+ow}{in} \PY{n+nb}{zip}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{l+m+mi}{50}\PY{o}{+}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{x}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{floor}\PY{p}{(}\PY{l+m+mi}{30}\PY{o}{+}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{y}\PY{p}{)}\PY{p}{)}\PY{p}{]} \PY{n}{output\PYZus{}notebook}\PY{p}{(}\PY{p}{)} \PY{n}{p} \PY{o}{=} \PY{n}{figure}\PY{p}{(}\PY{p}{)} \PY{n}{p}\PY{o}{.}\PY{n}{circle}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{y}\PY{p}{,} \PY{n}{radius} \PY{o}{=} \PY{n}{radii}\PY{p}{,} \PY{n}{fill\PYZus{}color}\PY{o}{=}\PY{n}{colors}\PY{p}{,} \PY{n}{fill\PYZus{}alpha}\PY{o}{=}\PY{l+m+mf}{0.6}\PY{p}{,} \PY{n}{line\PYZus{}color}\PY{o}{=}\PY{k+kc}{None}\PY{p}{)} \PY{n}{show}\PY{p}{(}\PY{n}{p}\PY{p}{)} \end{Verbatim} \section{Optimization}\label{optimization} \subsubsection{One of the more important skills I've picked up, Python allows for calculating the type of advanced curve optimizations I used to do in school with Matlab, but with access to better data structures and file IO}\label{one-of-the-more-important-skills-ive-picked-up-python-allows-for-calculating-the-type-of-advanced-curve-optimizations-i-used-to-do-in-school-with-matlab-but-with-access-to-better-data-structures-and-file-io} \begin{itemize} \tightlist \item purchase quantities \item bundles of items \begin{itemize} \tightlist \item creating packages that maximize our profit \item getting the most dense carts for customers \end{itemize} \item labor allocations \begin{itemize} \tightlist \item CICs \item buildroom \end{itemize} \item account distribution \end{itemize} Smash.GG Fantasy Given team budget and allowed to pick certain number of players. 
Very similar to auction leagues for fantasy football This is my team for the Norcal Regionals Event. I came in second: \begin{figure} \centering \includegraphics{Pictures/fantasy.jpg} \caption{NCR} \end{figure} \section{I pick my teams by using linear optimization on player expected values}\label{i-pick-my-teams-by-using-linear-optimization-on-player-expected-values} \section{I composite expected values from prior results with some manipulation based on my anecdotal knowledge of players}\label{i-composite-expected-values-from-prior-results-with-some-manipulation-based-on-my-anecdotal-knowledge-of-players} \section{There are multiple types of these optimizers beyond linear, but linear works well for this problem space and compiles quickly}\label{there-are-multiple-types-of-these-optimizers-beyond-linear-but-linear-works-well-for-this-problem-space-and-compiles-quickly} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}6}]:} \PY{c+c1}{\PYZsh{}This retroactively looked at the results from a major to calculate the best possible team} \PY{c+c1}{\PYZsh{}I use the same code with almost zero changes to evaluate teams; it\PYZsq{}s immaterial to switch} \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd} \PY{k+kn}{from} \PY{n+nn}{pulp} \PY{k}{import} \PY{o}{*} \PY{n}{df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}csv}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{players1.csv}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{header}\PY{o}{=}\PY{k+kc}{None}\PY{p}{)} \PY{n}{a} \PY{o}{=} \PY{n}{df}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{b} \PY{o}{=} \PY{n}{df}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{]} \PY{n}{c} \PY{o}{=} \PY{n}{df}\PY{p}{[}\PY{l+m+mi}{5}\PY{p}{]} \PY{n}{tbl} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{a}\PY{p}{,} \PY{n}{b}\PY{p}{,} \PY{n}{c}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T} \PY{n}{fantasy\PYZus{}budget} \PY{o}{=} \PY{n}{pulp}\PY{o}{.}\PY{n}{LpProblem}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{fantasy optimization}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pulp}\PY{o}{.}\PY{n}{LpMaximize}\PY{p}{)} \PY{n}{players} \PY{o}{=} \PY{n}{tbl}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{x} \PY{o}{=} \PY{n}{pulp}\PY{o}{.}\PY{n}{LpVariable}\PY{o}{.}\PY{n}{dict}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x\PYZus{}}\PY{l+s+si}{\PYZpc{}s}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{players}\PY{p}{,} \PY{n}{lowBound} \PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{cat}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Integer}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{price} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{n+nb}{zip}\PY{p}{(}\PY{n}{players}\PY{p}{,} \PY{n}{tbl}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{n}{ev} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{n+nb}{zip}\PY{p}{(}\PY{n}{players}\PY{p}{,} \PY{n}{tbl}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{n}{fantasy\PYZus{}budget} \PY{o}{+}\PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{[} \PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{*}\PY{n}{price}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{n}{ev}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n}{players}\PY{p}{]}\PY{p}{)} \PY{n}{fantasy\PYZus{}budget} \PY{o}{+}\PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{[} \PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{*}\PY{n}{price}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n}{players}\PY{p}{]}\PY{p}{)} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{1200} \PY{n}{fantasy\PYZus{}budget} 
\PY{o}{+}\PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{p}{[} \PY{p}{(}\PY{n}{x}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n}{players}\PY{p}{]}\PY{p}{)} \PY{o}{==} \PY{l+m+mi}{12} \PY{k}{for} \PY{n}{player} \PY{o+ow}{in} \PY{n}{players}\PY{p}{:} \PY{n}{fantasy\PYZus{}budget} \PY{o}{+}\PY{o}{=} \PY{n}{x}\PY{p}{[}\PY{n}{player}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{price}\PY{p}{[}\PY{n}{player}\PY{p}{]} \PY{k}{for} \PY{n}{player} \PY{o+ow}{in} \PY{n}{players}\PY{p}{:} \PY{n}{fantasy\PYZus{}budget} \PY{o}{+}\PY{o}{=} \PY{n}{x}\PY{p}{[}\PY{n}{player}\PY{p}{]} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{1} \PY{n}{fantasy\PYZus{}budget}\PY{o}{.}\PY{n}{solve}\PY{p}{(}\PY{p}{)} \PY{n}{hld} \PY{o}{=} \PY{p}{[}\PY{p}{]} \PY{k}{for} \PY{n}{player} \PY{o+ow}{in} \PY{n}{players}\PY{p}{:} \PY{n}{t} \PY{o}{=} \PY{n}{x}\PY{p}{[}\PY{n}{player}\PY{p}{]} \PY{n}{u} \PY{o}{=} \PY{n}{x}\PY{p}{[}\PY{n}{player}\PY{p}{]}\PY{o}{.}\PY{n}{value}\PY{p}{(}\PY{p}{)} \PY{n}{v} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{t}\PY{p}{,} \PY{n}{u}\PY{p}{]}\PY{p}{)} \PY{n}{hld}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{v}\PY{p}{)} \PY{n}{df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{hld}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{k+kc}{None}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{k+kc}{None}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{df}\PY{p}{[}\PY{p}{(}\PY{n}{df}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{0}\PY{p}{)}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 0 1 0 x\_SonicFox 1.0 1 x\_GO1 1.0 2 x\_Dogura 1.0 3 x\_NyChrisG 1.0 4 x\_Kazunoko 1.0 30 x\_TwiTchAy 1.0 44 x\_PSYKENonTWITCH 1.0 46 x\_Des! 1.0 47 x\_Nice 1.0 61 x\_Coolestred 1.0 62 x\_Coffee\_prince 1.0 63 x\_Shogun 1.0 \end{Verbatim} \section{This team would have performed \textgreater{}25\% better than the single best human picked team even without bonus questions}\label{this-team-would-have-performed-25-better-than-the-single-best-human-picked-team-even-without-bonus-questions} \section{I of course got too clever for my own good and forced a player on my team and did not finish in prizes for this event}\label{i-of-course-got-too-clever-for-my-own-good-and-forced-a-player-on-my-team-and-did-not-finish-in-prizes-for-this-event} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}7}]:} \PY{k+kn}{from} \PY{n+nn}{IPython}\PY{n+nn}{.}\PY{n+nn}{display} \PY{k}{import} \PY{n}{HTML} \PY{c+c1}{\PYZsh{} Youtube} \PY{n}{HTML}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZlt{}iframe width=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{560}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{ height=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{315}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{ src=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{https://www.youtube.com/embed/vP65VVoUm6E}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{ frameborder=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{0}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{ allow=}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{autoplay; encrypted\PYZhy{}media}\PY{l+s+s1}{\PYZdq{}}\PY{l+s+s1}{ allowfullscreen\PYZgt{}\PYZlt{}/iframe\PYZgt{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}7}]:} <IPython.core.display.HTML object> \end{Verbatim} \section{Machine Learning}\label{machine-learning} \section{Recently a friend invited me to do some Optical Character Recognition work to help process match VODs into a repository}\label{recently-a-friend-invited-me-to-do-some-optical-character-recognition-work-to-help-process-match-vods-into-a-repository} 
\subsection{The best players of the fighting game Guilty Gear live in Japan and play at the Mikado arcade}\label{the-best-players-of-the-fighting-game-guilty-gear-live-in-japan-and-play-at-the-mikado-arcade} \begin{figure} \centering \includegraphics{Pictures/mikado.jpg} \caption{mikado} \end{figure} \subsection{In the West we watch these VODs to learn how to play better. Unfortunately, what we get are unmarked videos of several hours in length that lack critical information}\label{in-the-west-we-watch-these-vods-to-learn-how-to-play-better.-unfortunately-what-we-get-are-unmarked-videos-of-several-hours-in-length-that-lack-critical-information} \section{Relevant Information}\label{relevant-information} \begin{itemize} \tightlist \item Characters \item Match Start Time \item Players \end{itemize} \subsubsection{Luckily for me work has been done on all 3, but the current solution to the third is a paid web service (Google Vision API)}\label{luckily-for-me-work-has-been-done-on-all-3-but-the-current-solution-to-the-third-is-a-paid-web-service-google-vision-api} \subsubsection{Also fortunately, all revelant information is available on one screen, even though quality is quite low}\label{also-fortunately-all-revelant-information-is-available-on-one-screen-even-though-quality-is-quite-low} \begin{figure} \centering \includegraphics{Pictures/screen3.jpg} \caption{screen} \end{figure} \section{I was going to evaluate the relative performance between a local solution called PyTesseract to Google Vision API (which is world class performance)}\label{i-was-going-to-evaluate-the-relative-performance-between-a-local-solution-called-pytesseract-to-google-vision-api-which-is-world-class-performance} \subsubsection{Tesseract is pretty good, and the new version uses LSTM trained sets, but it's pretty far behind Google theoretically as google is applying very sophisticated pre-processing to a cluster trained model based on 8+ layer convolutional neural networks (CNN)}\label{tesseract-is-pretty-good-and-the-new-version-uses-lstm-trained-sets-but-its-pretty-far-behind-google-theoretically-as-google-is-applying-very-sophisticated-pre-processing-to-a-cluster-trained-model-based-on-8-layer-convolutional-neural-networks-cnn} \subsubsection{Process Image}\label{process-image} \begin{figure} \centering \includegraphics{Pictures/3656.png} \caption{proc} \end{figure} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}8}]:} \PY{k+kn}{from} \PY{n+nn}{PIL} \PY{k}{import} \PY{n}{Image} \PY{k+kn}{import} \PY{n+nn}{pytesseract} \PY{k+kn}{import} \PY{n+nn}{argparse} \PY{k+kn}{import} \PY{n+nn}{cv2} \PY{k+kn}{import} \PY{n+nn}{os} \PY{n}{pytesseract}\PY{o}{.}\PY{n}{pytesseract}\PY{o}{.}\PY{n}{tesseract\PYZus{}cmd} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{C:/Program Files (x86)/Tesseract\PYZhy{}OCR/tesseract.exe}\PY{l+s+s1}{\PYZsq{}} \PY{n}{ap} \PY{o}{=} \PY{n}{argparse}\PY{o}{.}\PY{n}{ArgumentParser}\PY{p}{(}\PY{p}{)} \PY{n}{ap}\PY{o}{.}\PY{n}{add\PYZus{}argument}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZhy{}i}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZhy{}\PYZhy{}image}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{required}\PY{o}{=}\PY{k+kc}{True}\PY{p}{,} \PY{n}{help}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{path to input image to be OCR}\PY{l+s+s2}{\PYZsq{}}\PY{l+s+s2}{d}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{ap}\PY{o}{.}\PY{n}{add\PYZus{}argument}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZhy{}p}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} 
\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZhy{}\PYZhy{}preprocess}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n+nb}{type}\PY{o}{=}\PY{n+nb}{str}\PY{p}{,} \PY{n}{default}\PY{o}{=}\PY{k+kc}{None}\PY{p}{,} \PY{n}{help}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{type of preprocessing to be done}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{args} \PY{o}{=} \PY{n+nb}{vars}\PY{p}{(}\PY{n}{ap}\PY{o}{.}\PY{n}{parse\PYZus{}args}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\PYZsh{}filename = os.path.join(skimage.data\PYZus{}dir, \PYZsq{}moon.png\PYZsq{})} \PY{c+c1}{\PYZsh{}filename = \PYZsq{}C:/Users/Perry/Pictures/screen3.jpg\PYZsq{}} \PY{n}{image} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{imread}\PY{p}{(}\PY{n}{args}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{image}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{)} \PY{n}{image} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{resize}\PY{p}{(}\PY{n}{image}\PY{p}{,} \PY{p}{(}\PY{l+m+mi}{1280}\PY{p}{,}\PY{l+m+mi}{720}\PY{p}{)}\PY{p}{,} \PY{n}{interpolation} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{INTER\PYZus{}AREA}\PY{p}{)} \PY{n}{image2} \PY{o}{=} \PY{n}{image}\PY{p}{[}\PY{l+m+mi}{492}\PY{p}{:}\PY{l+m+mi}{524}\PY{p}{,} \PY{l+m+mi}{46}\PY{p}{:}\PY{l+m+mi}{409}\PY{p}{]} \PY{n}{image} \PY{o}{=} \PY{n}{image}\PY{p}{[}\PY{l+m+mi}{480}\PY{p}{:}\PY{l+m+mi}{514}\PY{p}{,} \PY{l+m+mi}{825}\PY{p}{:}\PY{l+m+mi}{1118}\PY{p}{]} \PY{n}{imagec} \PY{o}{=} \PY{p}{[}\PY{p}{]} \PY{n}{imagec} \PY{o}{=} \PY{n}{imagec}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{image}\PY{p}{)} \PY{c+c1}{\PYZsh{}imagec = imagec.append(image2)} \PY{n}{gray} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{cvtColor}\PY{p}{(}\PY{n}{image}\PY{p}{,} \PY{n}{cv2}\PY{o}{.}\PY{n}{COLOR\PYZus{}BGR2GRAY}\PY{p}{)} \PY{k}{if} \PY{n}{args}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{preprocess}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{thresh}\PY{l+s+s2}{\PYZdq{}}\PY{p}{:} \PY{n}{gray} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{threshold}\PY{p}{(}\PY{n}{gray}\PY{p}{,} \PY{l+m+mi}{140}\PY{p}{,} \PY{l+m+mi}{255}\PY{p}{,} \PY{n}{cv2}\PY{o}{.}\PY{n}{THRESH\PYZus{}TOZERO}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{c+c1}{\PYZsh{}| cv2.THRESH\PYZus{}OTSU)[1]} \PY{c+c1}{\PYZsh{} noise} \PY{k}{elif} \PY{n}{args}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{preprocess}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{blur}\PY{l+s+s2}{\PYZdq{}}\PY{p}{:} \PY{n}{gray} \PY{o}{=} \PY{n}{cv2}\PY{o}{.}\PY{n}{medianBlur}\PY{p}{(}\PY{n}{gray}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)} \PY{n}{gray} \PY{o}{=} \PY{n}{util}\PY{o}{.}\PY{n}{invert}\PY{p}{(}\PY{n}{gray}\PY{p}{)} \PY{c+c1}{\PYZsh{}gray = equalize\PYZus{}hist(gray)} \PY{n}{filename} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{.png}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{os}\PY{o}{.}\PY{n}{getpid}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{n}{cv2}\PY{o}{.}\PY{n}{imwrite}\PY{p}{(}\PY{n}{filename}\PY{p}{,} \PY{n}{gray}\PY{p}{)} \PY{n}{text} \PY{o}{=} \PY{n}{pytesseract}\PY{o}{.}\PY{n}{image\PYZus{}to\PYZus{}string}\PY{p}{(}\PY{n}{Image}\PY{o}{.}\PY{n}{open}\PY{p}{(}\PY{n}{filename}\PY{p}{)}\PY{p}{,} \PY{n}{lang}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{jpn+eng}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{c+c1}{\PYZsh{}, config=\PYZsq{}\PYZhy{}\PYZhy{}psm 10\PYZsq{})} \PY{n}{os}\PY{o}{.}\PY{n}{remove}\PY{p}{(}\PY{n}{filename}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{text}\PY{p}{)} \PY{c+c1}{\PYZsh{} show the output images} \PY{n}{cv2}\PY{o}{.}\PY{n}{imshow}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Image}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{image}\PY{p}{)} 
\PY{n}{cv2}\PY{o}{.}\PY{n}{imshow}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Output}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{gray}\PY{p}{)} \PY{n}{cv2}\PY{o}{.}\PY{n}{waitKey}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] usage: ipykernel\_launcher.py [-h] -i IMAGE [-p PREPROCESS] ipykernel\_launcher.py: error: the following arguments are required: -i/--image \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] An exception has occurred, use \%tb to see the full traceback. SystemExit: 2 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] C:\textbackslash{}Users\textbackslash{}Perry\textbackslash{}AppData\textbackslash{}Local\textbackslash{}conda\textbackslash{}conda\textbackslash{}envs\textbackslash{}Classification\textbackslash{}lib\textbackslash{}site-packages\textbackslash{}IPython\textbackslash{}core\textbackslash{}interactiveshell.py:2918: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D. warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1) \end{Verbatim} \section{This is a command line application, so I can't run it in this notebook, but this is basically what it outputs}\label{this-is-a-command-line-application-so-i-cant-run-it-in-this-notebook-but-this-is-basically-what-it-outputs} \begin{figure} \centering \includegraphics{Pictures/days.jpg} \caption{days} \end{figure} \begin{itemize} \tightlist \item Text output for use in creating upload script \item Unprocessed mask \item Processed mask \end{itemize} \subsubsection{How does this help Hatch?}\label{how-does-this-help-hatch} \subsubsection{Hatch has a paper process in order entry, or often has things written down that need to go into the CRM/ERP in text form that currently have to be typed in}\label{hatch-has-a-paper-process-in-order-entry-or-often-has-things-written-down-that-need-to-go-into-the-crmerp-in-text-form-that-currently-have-to-be-typed-in} \subsubsection{Current commercial solutions are usually poor performers or require API knowledge. I can cover the latter and enable use of Google and MS' world class systems (they are actually quite reasonable, a year of scanning forms for Hatch would probably cost less than 10,000USD)}\label{current-commercial-solutions-are-usually-poor-performers-or-require-api-knowledge.-i-can-cover-the-latter-and-enable-use-of-google-and-ms-world-class-systems-they-are-actually-quite-reasonable-a-year-of-scanning-forms-for-hatch-would-probably-cost-less-than-10000usd} \subsubsection{It doesnt even matter if it's typed or handwritten; I have ML strategies for handling that because I can train datasets based on Hatch employee's handwriting which would significantly}\label{it-doesnt-even-matter-if-its-typed-or-handwritten-i-have-ml-strategies-for-handling-that-because-i-can-train-datasets-based-on-hatch-employees-handwriting-which-would-significantly} \subsubsection{I can't match Google's precision, but I can beat Tesseract by using my own CNNs. 
In fact, I've trained my own using one of these:}\label{i-cant-match-googles-precision-but-i-can-beat-tesseract-by-using-my-own-cnns.-in-fact-ive-trained-my-own-using-one-of-these} \begin{figure} \centering \includegraphics{Pictures/1080ti.jpg} \caption{teneighty} \end{figure} \section{ML more generally}\label{ml-more-generally} \subsection{So I've spent the last year on ML}\label{so-ive-spent-the-last-year-on-ml} \subsection{Specifically, I've done signal classification of EEG signals for the purposes of detecting seizures in real time}\label{specifically-ive-done-signal-classification-of-eeg-signals-for-the-purposes-of-detecting-seizures-in-real-time} \subsection{Considering how SSG works, ML may be the only way to approximate the correct order of SSG games.}\label{considering-how-ssg-works-ml-may-be-the-only-way-to-approximate-the-correct-order-of-ssg-games.} \subsection{Because I built SSG progress into a SQL table (i.e. a 2-dimensional matrix), I can iteratively use each column to estimate the score kids would get on another game}\label{because-i-built-ssg-progress-into-a-sql-table-i.e.-a-2-dimensional-matrix-i-can-iteratively-use-each-column-to-estimate-the-score-kids-would-get-on-another-game} \subsection{Basically it's ripping off Netflix's recommendation system. It really doesn't matter what specific engine you use either as it's a pretty low-complexity problem, so you can use XGBoost or another tabular engine that runs quickly on CPUs to avoid additional hardware cost}\label{basically-its-ripping-off-netflixs-recommendation-system.-it-really-doesnt-matter-what-specific-engine-you-use-either-as-its-a-pretty-low-complexity-problem-so-you-can-use-xgboost-or-another-tabular-engine-that-runs-quickly-on-cpus-to-avoid-additional-hardware-cost} \subsection{For what it's worth, the same thing can be applied to recommending items to customers based on the length of time since they've purchased. It would be stronger if Hatch had more proprietary stuff, but it should probably work out anyway}\label{for-what-its-worth-the-same-thing-can-be-applied-to-recommending-items-to-customers-based-on-the-length-of-time-since-theyve-purchased.-it-would-be-stronger-if-hatch-had-more-proprietary-stuff-but-it-should-probably-work-out-anyway} \subsection{TheDialer/Skynet could be improved with ML by updating estimators based on call success}\label{thedialerskynet-could-be-improved-with-ml-by-updating-estimators-based-on-call-success} \subsection{There could theoretically be a Dragon v3}\label{there-could-theoretically-be-a-dragon-v3} \subsection{Since my work has mainly been in classification, I would want to get ML based predictions of account Expected Values}\label{since-my-work-has-mainly-been-in-classification-i-would-want-to-get-ml-based-predictions-of-account-expected-values} \section{Miscellaneous}\label{miscellaneous} \subsubsection{Image Classification}\label{image-classification} \subsubsection{Signal Processing specifically}\label{signal-processing-specifically} \subsubsection{Obviously I still have the skills to do reporting and tackle the same sort of SQL problems I did before (e.g. renewals revenue reporting)}\label{obviously-i-still-have-the-skills-to-do-reporting-and-tackle-the-same-sort-of-sql-problems-i-did-before-e.g.-renewals-revenue-reporting} \subsubsection{Questions?}\label{questions} % Add a bibliography block to the postdoc \end{document}
{ "alphanum_fraction": 0.613975841, "avg_line_length": 59.6821705426, "ext": "tex", "hexsha": "b5347b79ed8a5c6de9ed4619a52375bfd7e8e5c9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "95ca225c4b5b9a16a898195705f6d1953b12a9ac", "max_forks_repo_licenses": [ "MIT-0" ], "max_forks_repo_name": "WavTiRep/wtest", "max_forks_repo_path": "Tex/The Notebook, but not that one starring Ryan Gosling .tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "95ca225c4b5b9a16a898195705f6d1953b12a9ac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT-0" ], "max_issues_repo_name": "WavTiRep/wtest", "max_issues_repo_path": "Tex/The Notebook, but not that one starring Ryan Gosling .tex", "max_line_length": 647, "max_stars_count": null, "max_stars_repo_head_hexsha": "95ca225c4b5b9a16a898195705f6d1953b12a9ac", "max_stars_repo_licenses": [ "MIT-0" ], "max_stars_repo_name": "WavTiRep/wtest", "max_stars_repo_path": "Tex/The Notebook, but not that one starring Ryan Gosling .tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 21861, "size": 53893 }
\vsssub \subsubsection{Point output post-processor for GrADS} \label{sec:gxoutp} \vsssub \proddefH{gx\_outp}{gxoutp}{gx\_outp.ftn} \proddeff{Input}{gx\_outp.inp}{Traditional configuration file.}{10} (App.~\ref{sec:config181}) \proddefa{mod\_def.ww3}{Model definition file.}{20} \proddefa{out\_pnt.ww3}{Raw point output data.}{20} \proddeff{Output}{standard out}{Formatted output of program.}{6} \proddefa{ww3.spec.grads}{GrADS data file with spectra and source terms.}{30} \proddefa{ww3.mean.grads}{File with mean wave parameters.}{31} \proddefa{ww3.spec.ctl}{GrADS control file.}{32} \vspace{\baselineskip} \noindent This post-processor is intended to generate data files with which GrADS (see previous section) can plot polar plots of spectra and source terms. To achieve this, spectra and source terms are store as "longitude-latitude" grids. For each output point a different name is generated for the data, typically {\F loc{\it nnn}}. When the data file is loaded in GrADS, the variable {\F loc001} will contain a spectral grid for the first requested output point at level 1, the input source term at level 2, etc. For the second output point the data is stored in {\F loc002} etc. The actual output point names are passed to GrADS through the control file {\file ww3.spec.ctl}. Wave heights and environmental data are obtained from {\file ww3.mean.grads} The user, however, need not be aware of the details of the GrADS data files and data storage. The GrADS scripts {\file spec.gs}, {\file source.gs} and {\file 1source.gs} are provided to automatically generate spectral plots from the output files of this post-processor. Note: for the GrADS scripts to work properly, the names of the output points should not contain spaces. \pb
{ "alphanum_fraction": 0.7720504009, "avg_line_length": 49.8857142857, "ext": "tex", "hexsha": "3ac582c5fc987bcdaabc048372f50a544a2c075e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_forks_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_forks_repo_name": "minsukji/ci-debug", "max_forks_repo_path": "WW3/manual/run/gx_outp.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z", "max_issues_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_issues_repo_name": "minsukji/ci-debug", "max_issues_repo_path": "WW3/manual/run/gx_outp.tex", "max_line_length": 94, "max_stars_count": null, "max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_stars_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_stars_repo_name": "minsukji/ci-debug", "max_stars_repo_path": "WW3/manual/run/gx_outp.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 484, "size": 1746 }
\chapter{Dependency SSAT} \label{chap:dependency-ssat} In this chapter, we lift SSAT to the NEXPTIME-complete complexity class by formulating \textit{dependency SSAT} (DSSAT). Most content in this chapter is based on our conference paper~\cite{LeeAAAI21DSSAT} published at AAAI\,'21. \input{dependency-ssat/preliminaries.tex} \input{dependency-ssat/technique.tex} \input{dependency-ssat/application.tex}
{ "alphanum_fraction": 0.8049382716, "avg_line_length": 45, "ext": "tex", "hexsha": "1b3d8cbf3101ecd45774a6d1b36b0c96f37d9ebd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_path": "paper/dependency-ssat.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_path": "paper/dependency-ssat.tex", "max_line_length": 120, "max_stars_count": 1, "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_path": "paper/dependency-ssat.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "num_tokens": 116, "size": 405 }
\chapter{Further insights from importances}\label{ch:applications}

\begin{remark}{Outline}
In this chapter, we build upon results from Chapter~\ref{ch:importances} to further study variable importances as computed from random forests. In Section~\ref{sec:7:redundant}, we first examine importances for variables that are redundant. In Section~\ref{sec:7:bias}, we revisit variable importances in the context of binary decision trees and ordered variables. In this framework, we highlight various sources of bias that may concurrently arise when importances are computed from actual random forests. Finally, we present in Section~\ref{sec:7:applications} some successful applications of variable importance measures.
\end{remark}

\noindent\textbf{Caution.} \textit{The work presented in this chapter is exploratory. Conclusions should be considered with a grain of salt, until further empirical verification.}

\section{Redundant variables}
\label{sec:7:redundant}

In most machine learning problems, it is typical for input variables to be correlated, at least to some extent, and to share common bits of information. In image classification for instance, pixels are usually highly correlated and individually represent nearly the same information as their neighbors. In that sense, variables are often \textit{partially redundant}, i.e., some of the variables may share some of the same information about the output variable $Y$. In the extreme case, redundancy is \textit{total} or \textit{complete}, with some of the variables redundantly conveying exactly the same information with respect to the output variable $Y$. In this section, we study redundancy in random forests and show that it may have a significant effect on both the accuracy of the ensemble and variable importance measures.

As a guiding example for our discussion, let us consider a set of input variables and discuss the effect of adding redundant variables on the structure of randomized trees. Intuitively, two variables $X_i$ and $X_j$ are redundant if one can perfectly explain the other and vice-versa. Formally, we define redundancy as follows:

\begin{definition}
Two variables $X_i, X_j$ are \emph{totally redundant} if no additional information is required for describing $X_i$ given $X_j$ and vice-versa, i.e., if
\begin{equation}
H(X_i|X_j) = H(X_j|X_i) = 0.
\end{equation}
\end{definition}

In particular, a variable $X_j$ and its copy, denoted $X_j^\prime$, are totally redundant. With respect to random forests, adding copies of variables (e.g., duplicating $X_j$, hence resulting in a new set of $p+1$ input variables) has no effect when the selection of the split is deterministic (e.g., in RF for $K$ set to the maximum value). No matter the number of totally redundant variables, the best split that is selected is always the same, even if the same splits need to be recomputed multiple times due to redundancy. When the choice of the best split is stochastic however (e.g., for $K$ strictly smaller than the total number of variables), adding multiple copies of a variable $X_j$ results in splits that may be biased towards this variable (or one of its copies), which in turn may have a significant effect on the resulting accuracy of the ensemble. For a fixed value of $K$, it is indeed not difficult to see that adding copies of $X_j$ increases the probability that $X_j$, or one of its copies, is in the random subset of $K$ input variables on which to look for splits. As a corollary, it therefore also simultaneously decreases the probability that any of the other variables is selected, hence biasing the structure of the generated decision trees. Note that the resulting net effect on accuracy depends on the nature of the duplicated variable. If $X_j$ is very informative with respect to the output, then favoring splits on $X_j$ by adding copies may result in an increase of accuracy. By contrast, if $X_j$ is irrelevant, then adding copies increases the risk of overfitting.

With respect to variable importances, the effect of adding redundant variables can be derived both qualitatively and quantitatively using results from Chapter~\ref{ch:importances}. From Theorem~\ref{thm:relevant}, we already know that adding irrelevant variables does not change the resulting variable importances. Adding copies of a relevant variable, however, has an effect on both the importance of the duplicated variable and on the importance of the remaining variables. As in the previous chapter, let us assume a set $V= \{X_1, ..., X_p\}$ of categorical input variables and a categorical output $Y$, for which we derive MDI importances, as computed from totally randomized and fully developed trees built on an infinitely large dataset.

\begin{lemma}\label{lemma:red1}
Let $X_i$ and $X_j$ be totally redundant variables. For any conditioning set $B$,
\begin{align}
& I(X_i;Y|B,X_j) = I(X_j;Y|B, X_i) = 0 \label{lemma:red1:eqn1} \\
& I(X_i;Y|B) = I(X_j;Y|B) \label{lemma:red1:eqn2}.
\end{align}
\end{lemma}

\begin{proof}
By symmetry of the mutual information, we have
\begin{align}
I(X_i;X_j) &= H(X_i) - H(X_i|X_j) \nonumber \\
&= H(X_j) - H(X_j|X_i),
\end{align}
which implies that $H(X_i) = H(X_j)$ since $H(X_i|X_j) = H(X_j|X_i) = 0$ if $X_i$ and $X_j$ are totally redundant. Since $0 \leq H(X_i|X_j,B) \leq H(X_i|X_j)$ and $H(X_i|X_j)=0$, we also have $H(X_i|X_j,B)=0$ for any conditioning set $B$. Likewise, $H(X_j|X_i,B)=0$. By reusing the same argument for $I(X_i;X_j|B)$ (instead of $I(X_i;X_j)$), equality therefore extends to any conditioning set $B$, giving $H(X_i|B) = H(X_j|B)$. From these, it follows that
\begin{align}
& I(X_i;Y|B,X_j) = H(X_i|B, X_j) - H(X_i|B,X_j,Y) = 0 - 0, \\
& I(X_j;Y|B,X_i) = H(X_j|B, X_i) - H(X_j|B,X_i,Y) = 0 - 0,
\end{align}
which proves Equation~\ref{lemma:red1:eqn1}. We also have
\begin{align}
I(X_i;Y|B) &= H(X_i|B) - H(X_i|B,Y) \\
&= H(X_j|B) - H(X_j|B,Y) \\
&= I(X_j;Y|B),
\end{align}
which proves Equation~\ref{lemma:red1:eqn2}.
\end{proof}

\begin{proposition}\label{prop:red:self}
Let $X_j\in V$ be a relevant variable with respect to $Y$ and $V$ and let $X_j^\prime \notin V$ be a totally redundant variable with respect to $X_j$.
The infinite sample size importance of $X_j$ as computed with an infinite ensemble of fully developed totally randomized trees built on $V\cup \{X_j^\prime\}$ is \begin{equation} \text{Imp}(X_j) = \sum_{k=0}^{p-1} \frac{p-k}{p+1} \frac{1}{C_p^k} \frac{1}{p-k} \sum_{B \in {\cal P}_k(V^{-j})} I(X_j;Y|B) \end{equation} \end{proposition} \begin{proof} From Theorem~\ref{thm:imp}, the variable importance of $X_j$ is \begin{align} \text{Imp}(X_j) &= \sum_{k=0}^{p-1+1} \frac{1}{C_{p+1}^k} \frac{1}{p+1-k} \sum_{B \in {\cal P}_k(V^{-j} \cup \{X_j^\prime\})} I(X_j;Y|B) \nonumber \\ &= \sum_{k=0}^{p-1} \frac{1}{C_{p+1}^k} \frac{1}{p+1-k} \sum_{B \in {\cal P}_k(V^{-j})} I(X_j;Y|B) \nonumber \\ &= \sum_{k=0}^{p-1} \frac{p-k}{p+1} \frac{1}{C_{p}^k} \frac{1}{p-k} \sum_{B \in {\cal P}_k(V^{-j})} I(X_j;Y|B), \end{align} since from Lemma~\ref{lemma:red1}, $I(X_j;Y|B\cup X_j^\prime)=0$ for all $B \in {\cal P}(V^{-j})$. \end{proof} \begin{lemma}\label{lemma:red2} Let $X_i$ and $X_j$ be totally redundant variables. For any conditioning set $B$ and for any variable $X_l$, \begin{equation} I(X_l;Y|B,X_i) = I(X_l;Y|B, X_j) = I(X_l;Y|B, X_i, X_j) \label{lemma:red2:eqn} \end{equation} \end{lemma} \begin{proof} From the chaining rule of the mutual information, we have \begin{align} I(X_i,X_j,X_l;Y|B) &= I(X_l;Y|B) + I(X_i,X_j;Y|B,X_l) \nonumber \\ &= I(X_l;Y|B) + I(X_i;Y|B,X_l) + I(X_i;Y|B, X_j, X_l) \nonumber \\ &= I(X_l;Y|B) + I(X_i;Y|B,X_l) \quad\text{(Lemma~\ref{lemma:red1})} \nonumber \\ &= I(X_i, X_l;Y|B) \nonumber \\ &= I(X_i;Y|B) + I(X_l;Y|B,X_i) \label{lemma:red2:eqn1}. \end{align} By symmetry, \begin{equation}\label{lemma:red2:eqn2} I(X_i,X_j,X_l;Y|B) = I(X_j;Y|B) + I(X_l;Y|B,X_j), \end{equation} which proves that $I(X_l;Y|B,X_i) = I(X_l;Y|B, X_j)$, by combining both equations and using the fact that $I(X_i;Y|B) = I(X_j;Y|B)$ (Lemma~\ref{lemma:red1}). From the chaining rule, we also have \begin{align} I(X_i,X_j,X_l;Y|B) &= I(X_i, X_j;Y|B) + I(X_l;Y|B,X_i,X_j) \nonumber \\ &= I(X_i; Y|B) + I(X_j;Y|B, X_i) + I(X_l;Y|B,X_i,X_j) \nonumber \\ &= I(X_i; Y|B) + I(X_l;Y|B,X_i,X_j). \end{align} By combining this last equation with Equation~\ref{lemma:red2:eqn1}, we finally have $I(X_l;Y|B,X_i) = I(X_l;Y|B,X_i,X_j)$, which proves Lemma~\ref{lemma:red2}. \end{proof} \begin{proposition}\label{prop:red:other} Let $X_j\in V$ be a relevant variable with respect to $Y$ and $V$ and let $X_j^\prime \notin V$ be a totally redundant variable with respect to $X_j$. The infinite sample size importance of $X_l \in V^{-j}$ as computed with an infinite ensemble of fully developed totally randomized trees built on $V\cup \{X_j^\prime\}$ is \begin{align}\label{prop:red:other:eqn} \text{Imp}(X_l) &= \sum_{k=0}^{p-2} \frac{p-k}{p+1} \frac{1}{C_p^k} \frac{1}{p-k} \sum_{B \in {\cal P}_k(V^{-l} \setminus X_j)} I(X_l;Y|B) + \\ & \hookrightarrow \sum_{k=0}^{p-2} \left[ \sum_{k'=1}^2 \frac{C^{k'}_2}{C_{p+1}^{k+k'}} \frac{1}{p+1-(k+k')} \right] \sum_{B \in {\cal P}_k(V^{-l}\setminus X_j)} I(X_l;Y|B\cup X_j). \nonumber \end{align} \end{proposition} \begin{proof} From Lemma~\ref{lemma:red2}, conditioning by either $X_j$, $X_j^\prime$ or by both variables yield terms $I(X_l;Y|B,X_j)$, $I(X_l;Y|B,X_j^\prime)$ and $I(X_l;Y|B,X_j,X_j^\prime)$ that are all equal. 
From Theorem~\ref{thm:imp}, the variable importance of $X_l$ can therefore be rewritten as follows:
\begin{align}
\text{Imp}(X_l) &= \sum_{k=0}^{p-1+1} \frac{1}{C_{p+1}^k} \frac{1}{p+1-k} \sum_{B \in {\cal P}_k(V^{-l}\cup X_j^\prime)} I(X_l;Y|B) \nonumber \\
&= \sum_{k=0}^{p-2} \sum_{k'=0}^2 \frac{1}{C_{p+1}^{k+k'}} \frac{1}{p+1-(k+k')} \sum_{\substack{B \in {\cal P}_k(V^{-l} \setminus X_j)\\ B' \in {\cal P}_{k'}(\{X_j, X_j^\prime\})} } I(X_l;Y|B \cup B') \nonumber \\
&= \sum_{k=0}^{p-2} \frac{1}{C_{p+1}^k} \frac{1}{p+1-k} \sum_{B \in {\cal P}_k(V^{-l} \setminus X_j)} I(X_l;Y|B) + \nonumber \\
& \hookrightarrow \sum_{k=0}^{p-2} \left[ \sum_{k'=1}^2 \frac{C^{k'}_2}{C_{p+1}^{k+k'}} \frac{1}{p+1-(k+k')} \right] \sum_{B \in {\cal P}_k(V^{-l} \setminus X_j)} I(X_l;Y|B \cup X_j) \nonumber \\
&= \sum_{k=0}^{p-2} \frac{p-k}{p+1} \frac{1}{C_p^k} \frac{1}{p-k} \sum_{B \in {\cal P}_k(V^{-l} \setminus X_j)} I(X_l;Y|B) + \nonumber \\
& \hookrightarrow \sum_{k=0}^{p-2} \left[ \sum_{k'=1}^2 \frac{C^{k'}_2}{C_{p+1}^{k+k'}} \frac{1}{p+1-(k+k')} \right] \sum_{B \in {\cal P}_k(V^{-l}\setminus X_j)} I(X_l;Y|B\cup X_j). \nonumber
\end{align}
\end{proof}

Proposition~\ref{prop:red:self} shows that the importance of $X_j$ decreases when a redundant variable $X_j^\prime$ is added to the set of input variables, since all mutual information terms are multiplied by a factor $\smash{\tfrac{p-k}{p+1} < 1}$. Intuitively, this result is in fact expected since the same information is then conveyed within two variables (i.e., in $X_j$ and its copy $X_j^\prime$). It also shows that the terms in the total importance are not all modified in the same way. The weight of the terms corresponding to small conditioning sets remains nearly unchanged (i.e., for a large number $p$ of variables and small values of $k$, $\tfrac{p-k}{p+1}$ is close to $1$), while the weight of the terms of large conditioning sets is greatly impacted (i.e., for large values of $k$, $\tfrac{p-k}{p+1}$ tends to $0$).

As shown by Proposition~\ref{prop:red:other}, the effect of adding a redundant variable $X_j^\prime$ on the importance of the other variables $X_l$ (for $l\neq j$) is twofold. The first part of Equation~\ref{prop:red:other:eqn} shows that the weight of all terms that do not include $X_j$ (or its copy) strictly decreases by a factor $\tfrac{p-k}{p+1}$. The second part of Equation~\ref{prop:red:other:eqn} shows that the weight of all terms that include $X_j$ (or its copy) increases, since several equivalent conditioning sets (i.e., $B\cup \{X_j\}$, $B\cup \{X_j^\prime\}$ and $B\cup \{X_j, X_j^\prime\}$) can now appear within the branches of a tree. As for Proposition~\ref{prop:red:self}, impurity terms are not all modified in the same way and changes depend on the size $k$ of the conditioning set $B$. Overall, the net effect on the total importance of $X_l$ is therefore a compromise between these opposing changes. If the weighted sum of the $I(X_l;Y|B,X_j)$ terms is small with respect to the sum of the terms that do not include $X_j$ (or its copy), then the decrease effect dominates and the importance of $X_l$ should be smaller. By contrast, if the $I(X_l;Y|B,X_j)$ terms are larger, then the increase effect dominates and the resulting importance is larger. As shown later in Figure~\ref{fig:7:red:xor}, redundant variables therefore increase the importance of all variables that \textit{interact} with the duplicated variable.
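This interaction effect can be checked with a quick simulation. The snippet below is only a minimal sketch, not the exact protocol used to produce the figures of this section: it assumes that scikit-learn is available and uses \texttt{ExtraTreesClassifier} with \texttt{max\_features=1} as a finite-sample surrogate for totally randomized trees, duplicates $X_1$ in a two-variable XOR problem, and prints the resulting (normalized) MDI importances.
\begin{verbatim}
# Minimal sketch (assumes scikit-learn); ExtraTreesClassifier with
# max_features=1 is used here as a rough stand-in for totally
# randomized trees on a finite sample.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.RandomState(0)
n = 5000
X1 = rng.randint(0, 2, n)
X2 = rng.randint(0, 2, n)
y = np.logical_xor(X1, X2).astype(int)

for n_copies in (0, 1, 3, 5):
    # Append n_copies exact copies of X1 to the original two inputs.
    X = np.column_stack([X1, X2] + [X1] * n_copies)
    forest = ExtraTreesClassifier(n_estimators=500, max_features=1,
                                  random_state=0).fit(X, y)
    imp = forest.feature_importances_  # normalized MDI importances
    print("%d copies: Imp(X1)=%.3f, Imp(X2)=%.3f"
          % (n_copies, imp[0], imp[1]))
\end{verbatim}
If the analysis above is correct, the importance attributed to the duplicated variable should shrink while that of $X_2$, which only carries information about $Y$ in interaction with $X_1$, should grow as copies are added.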
Propositions~\ref{prop:red:self} and \ref{prop:red:other} can be extended to the case where $N_c$ redundant variables $X_j^c$ (for $c=1,\dots,N_c$) are added to the input variables instead of one, leading to the general Proposition~\ref{prop:red:general}. From this result, the same qualitative conclusions can be drawn, except that the decrease or increase effects discussed above are even stronger as more redundant variables are added. \begin{proposition}\label{prop:red:general} Let $X_j\in V$ be a relevant variable with respect to $Y$ and $V$ and let $X_j^c \notin V$ (for $c=1,\dots,N_c$) be $N_c$ totally redundant variables with respect to $X_j$. The infinite sample size importances of $X_j$ and $X_l \in V$ as computed with an infinite ensemble of fully developed totally randomized trees built on $V\cup \{X_j^1,\dots,X_j^{N_c}\}$ are \begin{align*} \text{Imp}(X_j) &= \sum_{k=0}^{p-1} \left[ \frac{C^k_p (p-k)}{C^k_{p+N_c}(p+N_c-k)} \right] \frac{1}{C_p^k} \frac{1}{p-k} \sum_{B \in {\cal P}_k(V^{-j})} I(X_j;Y|B), \\ \text{Imp}(X_l) &= \sum_{k=0}^{p-2} \left[ \frac{C^k_p (p-k)}{C^k_{p+N_c}(p+N_c-k)} \right] \frac{1}{C_p^k} \frac{1}{p-k} \sum_{B \in {\cal P}_k(V^{-l} \setminus X_j)} I(X_l;Y|B) + \\ & \hookrightarrow \sum_{k=0}^{p-2} \left[ \sum_{k'=1}^{N_c+1} \frac{C^{k'}_{N_c+1}}{C_{p+N_c}^{k+k'}} \frac{1}{p+N_c-(k+k')} \right] \sum_{B \in {\cal P}_k(V^{-l}\setminus X_j)} I(X_l;Y|B\cup X_j). \end{align*} \end{proposition} \begin{proof} Omitted here, but Proposition~\ref{prop:red:general} can be proved by generalizing for $N_c$ the proofs of Propositions~\ref{prop:red:self} and \ref{prop:red:other}. \end{proof} Finally, let us note that Propositions~\ref{prop:red:self}, \ref{prop:red:other} and \ref{prop:red:general} are in fact valid as soon as variables $X_i$ and $X_j$ satisfy conditions of Lemma~\ref{lemma:red1}, even if they are not totally redundant. Accordingly, we call two variables satisfying these conditions \textit{totally redundant with respect to the output $Y$}. As an illustrative example, let us reconsider the LED classification problem from Section~\ref{sec:6:illustration}, for which $X_5$ was found to be the most important variable. As shown in Figure~\ref{fig:7:red:led}, adding variables that are redundant with $X_5$ makes its importance decrease, as predicted by our theoretical result from Propositions~\ref{prop:red:self} and \ref{prop:red:general}. When 5 or more copies of $X_5$ are added, the importance of $X_5$ is the smallest of all, as if $X_5$ had become less informative. Similarly, we observe that the importance of the other variables remains about the same or slightly decreases, as if they all had become a bit less informative. With regards to our previous results in Propositions \ref{prop:red:other} and \ref{prop:red:general}, this indicates that the importance due to the $I(X_l;Y|B)$ terms prevails from the importance due to the $I(X_l;Y|B,X_5^c)$ terms. As a matter of fact, this example highlights a fundamental property of variable importances as computed in a random forest: \textit{importance measures are computed not only with respect to the output $Y$, but also with respect to all the other input variables that define the problem}. In particular, a variable which is not important is not necessarily uninformative, as the example illustrates. A variable may be considered as less important because the information it conveys is also redundantly conveyed and diluted in other variables, and not necessarily because it has no information about the output. 
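To relate the LED observations above to Proposition~\ref{prop:red:general} more quantitatively, the short sketch below (an illustrative computation, not taken from the original experiments) prints the weight $C^k_p (p-k) / (C^k_{p+N_c}(p+N_c-k))$ that multiplies the $I(X_j;Y|B)$ terms of $\text{Imp}(X_j)$, for $p=7$ variables as in the LED problem and increasing numbers $N_c$ of copies.
\begin{verbatim}
# Illustrative computation of the attenuation factor of Proposition
# above: weight applied to the I(X_j;Y|B) terms of Imp(X_j) when N_c
# exact copies of X_j are added (requires Python >= 3.8 for math.comb).
from math import comb

p = 7  # number of original input variables, as in the LED problem
for n_c in (0, 1, 2, 5):
    weights = [comb(p, k) * (p - k) / (comb(p + n_c, k) * (p + n_c - k))
               for k in range(p)]
    print("N_c = %d:" % n_c, ["%.2f" % w for w in weights])
\end{verbatim}
For $N_c=1$ the printed weights reduce to $(p-k)/(p+1)$, in agreement with Proposition~\ref{prop:red:self}: terms with small conditioning sets are barely affected, while the weight of the deepest terms quickly drops towards $0$.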
\begin{figure} \centering \includegraphics[width=0.9\textwidth]{figures/ch7_red_led.pdf} \caption{Adding copies of $X_5$ on the LED classification task. The more redundant variables are added, the lesser the importance of $X_5$.} \label{fig:7:red:led} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figures/ch7_red_xor.pdf} \caption{Adding copies of $X_1$ on a XOR classification task. The more redundant variables are added, the lesser the importance of $X_1$, but the larger the importance of $X_2$.} \label{fig:7:red:xor} \end{figure} As a second example, Figure~\ref{fig:7:red:xor} illustrates redundancy effects for a XOR classification problem defined on two variables $X_1$ and $X_2$. Again, the importance of the duplicated variable $X_1$ decreases as redundant variables are added, which confirms our results from Propositions~\ref{prop:red:self} and \ref{prop:red:general}. More interestingly, we now observe that the importance of the other variable, $X_2$, increases as copies of $X_1$ are added. For this problem, the $I(X_2;Y|B,X_1^c)$ terms are prevalent with respect to the $I(X_2;Y|B)$ terms (which is in fact unique and equal to $0$), thereby artificially increasing the overall importance of $X_2$ as redundancy augments, as expected from Propositions \ref{prop:red:other} and \ref{prop:red:general}. Overall, results presented in this section call for caution when interpreting variable importance scores. Due to redundancy effects -- either total, as studied here, or partial as it would often arise in practice -- it may happen that the total importance of a given variable is either misleadingly low or deceptively high because the same information is spread within several redundant variables and therefore taken into account several times within the total importances. As such, we advise to complement the interpretation with a systematic decomposition of variable importance scores, e.g., as previously done in Figure~\ref{fig:decomposition}, in order to better understand why a variable is in fact important and to possibly detect redundancy. \section{Bias in variable importances} \label{sec:7:bias} In this section, we study sources of bias in variable importances and show that variable selection (as previously discussed in Section~\ref{sec:ntrt}) is not the only cause of bias. In practice, complementary forces due to masking effects, impurity misestimations and the structure of the trees make variable importances deviate from the theoretical results found in asymptotic conditions for totally randomized trees. \subsection{Bias due to masking effects} As shown in the previous chapters, the guided selection of the split variable (i.e., for $K>1$) is necessary for balancing bias and variance in randomized decision trees and to produce accurate ensembles. In particular, we studied that decision trees built with too much randomization usually lead to an increase of bias (with respect to the generalization error) which cannot be compensated by a reciprocal decrease of variance, making it necessary to adjust the value of $K$ to find the appropriate trade-off. By contrast, we also showed in Section~\ref{sec:ntrt} that when variable selection is not totally random (i.e., as soon as $K>1$), masking effects induce a bias with respect to variable importances, since it forces some of the branches to never be built, and therefore some of the conditioning sets $B \in {\cal P}(V)$ to never be taken into account. 
As a consequence, random forests whose parameter $K$ has been tuned to maximize accuracy may yield variable importances that are biased (either over- or underestimated). More specifically, it can be shown (see Section~\ref{sec:ntrt}) that a relevant variable may be null with regards to its importance, thereby making it indistinguishable from irrelevant variables, and that the importance of relevant variables becomes dependent on the number of irrelevant variables. \subsection{Bias due to empirical impurity estimations} \label{sec:7:bias:high} The analysis of variable importances carried out so far has considered asymptotic conditions for which the true node impurity $i(t)$ is assumed to be known. In practice however, due to the finite size of the learning set, impurity measurements suffer from an empirical misestimation bias. In this section, we study this effect in the context of heterogeneous variables\footnote{As an example, in the case of meteorological problems, variables often comprise mixed environmental measurements of different nature and scale, like speed of wind, temperature, humidity, pressure, rainfall or solar radiation.}, with respect to their scale or their number of categories, and show that the misestimation of node impurity is directly proportional to the cardinality of the split variable and inversely proportional to the number $N_t$ of samples used for the evaluation. As a result, impurity reductions become overestimated as we go deeper in the tree and/or as the number of values of the variable is large. In consequence, variable importances also suffer from bias, making variables of higher cardinality wrongly appear as more important. To guide our discussion, let us revisit the simulation studies from \citep{strobl:2007b} which consider a binary output variable $Y$ and five input variables $X_1,\dots,X_5$, as defined in Table~\ref{table:simulation}. \begin{table} \centering \begin{tabular}{| c c c |} \hline & $X_1$ & $\sim{\cal N}(0, 1)$ \\ & $X_2$ & $\sim{\cal M}(2)$ \\ & $X_3$ & $\sim{\cal M}(4)$ \\ & $X_4$ & $\sim{\cal M}(10)$ \\ & $X_5$ & $\sim{\cal M}(20)$ \\ \hline \hline {\it null case} & $Y$ & $\sim{\cal B}(0.5)$ \\ {\it power case} & $Y|X_2=0$ & $\sim{\cal B}(0.5-\text{relevance})$ \\ & $Y|X_2=1$ & $\sim{\cal B}(0.5+\text{relevance})$ \\ \hline \end{tabular} \caption{Input variables are independent random variables as defined from the table. ${\cal N}(0, 1)$ is the standard normal distribution. ${\cal M}(k)$ is the multinomial distribution with values in $\{0, \dots, k-1\}$ and equal probabilities. ${\cal B}(p)$ is the binomial distribution. In the null case, $Y$ is independent from $X_1, \dots, X_5$. In the power case, $Y$ depends on the value $X_2$ while other input variables remain irrelevant.} \label{table:simulation} \end{table} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figures/ch7_bias_null.pdf} \caption{Variable importances for $Y$ independent of $X_1, \dots, X_5$. ($N=120$, $M=500$)} \label{fig:7:bias:null} \end{figure} Let us first analyze the case where $Y$ is independent from $X_1, \dots, X_5$, i.e., such that none of the input variables are informative with respect to the output. For a randomly sampled dataset ${\cal L}$ of $N=120$ samples, Figure~\ref{fig:7:bias:null} plots variable importances for different kinds of random forests. 
TRT corresponds to totally randomized trees, as defined in Section~\ref{sec:6:theory}, RF corresponds to standard Random Forest with bootstrap sampling and ETs corresponds to Extremely Randomized Trees. Both RF and ETs use binary splits while TRT rely on multiway exhaustive splits\footnote{Thereby creating as many branches as the number of values of the split variable, even if this variable is continuous and count unique values only.}. In asymptotic conditions, we proved in Theorem~\ref{thm:irrelevant} that the importance of irrelevant variables is strictly equal to $0$. For a finite value of $N$ however, this result does not hold, as Figure~\ref{fig:7:bias:null} indeed confirms. In contrast with what would be expected, we observe that the importance of none of the variables is in fact nowhere close to $0$. In light of Theorem~\ref{thm:sum-of-imp} however, this result is not that surprising: as long as decision trees can be fully developed, the sum of variable importances is equal to the (empirically estimated) mutual information between $X_1,\dots,X_p$ and the output $Y$, which is itself upper bounded by the (empirically estimated) entropy $H(Y)$ of the output variable. In this case, $H(Y)=\log_2(2)=1$, which indeed corresponds to the sum of variable importances for all methods compared in the figure. More importantly, we also observe that the larger the cardinality of the variable (i.e., the larger its number of unique values), the larger its importance. For example, the importance of $X_1$ (for which samples all have unique values) or of $X_5$ (which counts up to $40$ unique values) appears nearly $3$ times larger as the importance of $X_2$ (which is binary). In their study, \citet{strobl:2007b} argue that this bias is due to variable selection: \textit{variables with more potential cut-points are more likely to produce a good criterion value by chance, as in a multiple testing situation}. As a result, variables of higher cardinality are more likely to be chosen for splitting than those of lower cardinality. While this bias has been known for long in decision trees~\citep{kononenko:1995,kim:2001,hothorn:2006}, we argue that it is not the primary cause for the bias observed here. Indeed, this argument does not directly explain why similar trends are also observed when no variable selection is performed (e.g., for TRT or for $K=1$) nor why it similarly happens when cut-points are chosen at random, independently of the cardinality of the variable (e.g., in ETs). For multiway splits like in TRT, bias in variable importances can be traced to the misestimations of the mutual information terms \begin{equation} \Delta i(s, t) \approx I(X_j;Y|t) \end{equation} due to the finite size $N_t$ of the node samples. As shown in \citep{goebel:2005}, when $X_j$ and $Y$ are independent random variables (i.e., when $X_j$ is irrelevant), the distribution of finite sample size estimates of their mutual information follows approximately a gamma distribution \begin{equation} \widehat{I}(X_j; Y) \sim \Gamma\Big( \frac{1}{2}(|{\cal X}_j|-1)(|{\cal Y}|-1), \frac{1}{N_t \log 2} \Big) \end{equation} whose mean is linearly proportional to the cardinalities $|{\cal X}_j|$ and $|{\cal Y}|$ of $X_j$ and $Y$ and inversely proportional to $N_t$, that is \begin{equation} \mathbb{E}\{ \widehat{I}(X_j; Y) \} = \frac{(|{\cal X}_j|-1)(|{\cal Y}|-1)}{2 N_t \log 2}. 
\end{equation}
As a result, estimates get larger when $X_j$ counts many unique values, and become even larger as nodes get deeper in the tree (since $N_t$ gets smaller). Consequently, the weighted mean of all such estimated impurity terms $I(X_j;Y|t)$, taken over all nodes $t$ where $X_j$ is the split variable and yielding the total importance $\text{Imp}(X_j)$, is also linearly dependent on the cardinality of $X_j$.

For TRT, this result explains why variables of high cardinality in Figure~\ref{fig:7:bias:null} appear more important than those of lower cardinality. Intuitively, the closer the number of unique values is to the total number of samples, the larger the impurity decrease when splitting exhaustively on this variable. In the extreme case, when values for $X_j$ are all unique, splitting on the variable perfectly memorizes the values of $Y$, resulting in child nodes that are all pure, therefore maximizing the estimated mutual information. As such, this explains why $X_1$, whose values are all unique, appears as the most important variable.

For binary splits (i.e., for RF and ETs), the mutual information $I(X_j;Y|t)$ is not directly estimated at each node. Rather, in the case of ordered variables, $\Delta i(s, t)$ corresponds to an estimate of the mutual information $I(X_j\leq v;Y|t)$ of the binary split $X_j \leq v$. Under the simplifying assumption that binary splits on the same variable are all directly consecutive in the decision tree, it is easy to see that binary trees are equivalent to multiway trees~\citep{knuth:1968}, as illustrated in Figure~\ref{fig:7:splits}. Using an argument similar to the proof of Theorem~\ref{thm:sum-of-imp}, intermediate impurity terms between the first split and the last of those splits cancel each other when summing up the importances, which finally amounts to collecting an actual estimate of $I(X_j;Y|t)$ from the sequence of binary splits. For the same reasons as before, variables of high cardinality are therefore biased in the same way. (As we will study in Section~\ref{sec:bias:tree}, this consecutiveness assumption does not hold in practice, making the importances from binary decision trees strictly different from those of multiway decision trees. Yet, the qualitative conclusions are still valid since variables of high cardinality can be reused far more often than variables of low cardinality, before all possible splits are exhausted.)

\begin{figure}
\centering
\includegraphics[scale=1.0]{figures/ch7_splits.pdf}
\caption{Consecutive binary splits on the same variable are equivalent to direct multiway splits.}
\label{fig:7:splits}
\end{figure}

In both situations, the problem stems from the fact that node impurity is misestimated when the size $N_t$ of the node samples is too small. To a large extent, the issue is aggravated by the fact that trees are fully developed by default, making impurity terms collected near the leaves usually unreliable. As a precaution, a safe and effective solution for the bias problem therefore simply consists in collecting impurity terms only for those nodes where the impurity estimates can be considered reliable. Equivalently, the construction of the tree can also be stopped early, when impurity estimates become unreliable, e.g., by limiting the depth of the tree, requiring a minimum number of samples in internal nodes or using any of the other stopping criteria defined in Section~\ref{sec:3:stop}.
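As a rough illustration of this effect, the following minimal sketch simulates the null case of Table~\ref{table:simulation} with off-the-shelf tools and prints importances for increasing tree depths. It is only meant to illustrate the trend discussed above, not to reproduce the exact protocol behind the figures: scikit-learn grows binary-split trees, treats all inputs as ordered numerical variables, and normalizes importances so that they sum to one.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Null case of the simulation study: Y is independent of X1, ..., X5,
# and the inputs only differ by their cardinality.
rng = np.random.default_rng(0)
N = 120
X = np.column_stack([
    rng.normal(size=N),           # X1 ~ N(0, 1): all values unique
    rng.integers(0, 2, size=N),   # X2 ~ M(2)
    rng.integers(0, 4, size=N),   # X3 ~ M(4)
    rng.integers(0, 10, size=N),  # X4 ~ M(10)
    rng.integers(0, 20, size=N),  # X5 ~ M(20)
])
y = rng.integers(0, 2, size=N)    # Y ~ B(0.5)

for depth in (1, 3, None):        # None grows fully developed trees
    ets = ExtraTreesClassifier(
        n_estimators=500, criterion="entropy",
        max_features=1,           # K = 1, i.e. no variable selection
        max_depth=depth, random_state=0,
    ).fit(X, y)
    print(depth, np.round(ets.feature_importances_, 3))
\end{verbatim}

With shallow trees the (normalized) importances stay roughly uniform across the five variables, while fully developed trees inflate the importances of the high-cardinality variables $X_1$ and $X_5$, in line with the discussion above.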
Among all alternatives, conditional inference trees~\citep{hothorn:2006} and earlier methods~\citep{quinlan:1986,wehenkel:1998} make use of statistical tests for assessing the independence of $X_j$ and $Y$ at a pre-specified level of confidence $\alpha$. If the null hypothesis cannot be rejected, then recursive partitioning halts. In particular, variable importances collected from conditional inference trees were shown experimentally by \citet{strobl:2007b} not to suffer from bias. The authors argue that this is due to the unbiased variable selection mechanisms also implemented in conditional inference trees. By contrast, we argue that the absence of bias in importances from such trees is mostly due to the early stopping criterion, and not to variable selection. Although variable selection plays an important and exacerbating role, it is not the true cause of the observed bias. Indeed, since the impurity reduction of variables of high cardinality is overestimated, searching for a split among $K>1$ randomly drawn variables increases their probability of being selected when compared to others of lower cardinality, therefore masking and reducing the importances of the latter. Yet, bias stems from the fact that impurity reductions are misestimated in the first place.

\begin{figure}
\centering
ETs without variable selection\\[1ex]
\includegraphics[width=0.9\textwidth]{figures/ch7_bias_depth.pdf}\\[2.5ex]
RF with variable selection\\[1ex]
\includegraphics[width=0.9\textwidth]{figures/ch7_bias_depth_rf.pdf}
\caption{Variable importances of $X_1, \dots, X_5$ when varying both the maximum depth of the trees and the degree of relevance of $X_2$. Importance scores are normalized by the sum of importances. (Top) ETs with no variable selection, $K=1$, $N=120$, $M=500$. (Bottom) Random Forest with variable selection, $K=5$, $N=120$, $M=500$.}
\label{fig:7:bias:depth}
\end{figure}

As an illustrative example, Figure~\ref{fig:7:bias:depth} reconsiders the simulation study of Table~\ref{table:simulation}, when varying both the maximum depth of the trees (from $\texttt{max\_depth}=1$ to $9$) and the relevance of $X_2$ with respect to $Y$. Let us first consider ETs built with no variable selection ($K=1$), as shown in the four top plots of the figure. For the null case, when $\text{relevance}=0$, limiting the depth of the trees correctly fixes the bias that was observed before. For shallow trees, the unnormalized importances of all five input variables are close to $0$, as expected from irrelevant variables. The normalized importances, as shown in the figure, are also all close to $\tfrac{1}{p}=0.2$, confirming that no variable is detected as more relevant than the others. However, when the depth of the decision trees increases, importances deviate and a bias proportional to the cardinality of the variables appears, as discussed before. When $X_2$ is a relevant variable (i.e., for $\text{relevance}>0$), its importance is expected to be strictly positive and at least as large as the importances of the irrelevant variables. For $\text{relevance}=0.1$ and $\texttt{max\_depth=1}$, the importance of $X_2$ appears nearly $6$ times larger than the importances of the other variables, confirming that $X_2$ is correctly identified as a relevant variable. For deeper trees however, noise dominates and the importances of the irrelevant variables grow due to misestimations of the impurity terms. As relevance increases, $X_2$ can more clearly be identified as a relevant variable.
In particular, the more relevant $X_2$ is, that is the stronger the signal, the deeper trees can be built before $X_2$ becomes indistinguishable from the irrelevant variables.

By comparison, the four bottom plots of Figure~\ref{fig:7:bias:depth} illustrate variable importances for RF built with variable selection ($K=5$). Again, we observe that limiting the depth of the trees helps reduce the misestimation bias. However, the additional effect due to variable selection is also clearly visible: variables of larger cardinality appear significantly more important than the other variables. Consequently, this makes the detection of $X_2$ as a relevant variable more difficult when trees are grown deeper. When comparing the relative importance of $X_2$ for a given depth and relevance, we indeed observe that $X_2$ consistently appears less important in RF than in ETs. It is only for very shallow trees ($\texttt{max\_depth=1}$ or $\texttt{2}$) and high relevance that $X_2$ is identified with higher confidence than in ETs.

In conclusion, evaluating impurity on small samples leads to over-estimations of the mutual information, resulting in biased variable importances. In particular, the higher the cardinality of the variable, the larger the misestimations. To minimize this effect, caution should be taken by only considering impurity terms that were computed from a large enough sample. This can be guaranteed, e.g., by stopping the construction of the tree early or by letting the number of leaves grow more slowly than the size $N$ of the learning set. Additionally, we have also shown that variable selection may increase the bias due to over-estimations. In this case, a simple remedy consists in not using variable selection when assessing the relevance of variables. Finally, let us also note the connection with Propositions~\ref{proposition:pruning} and \ref{proposition:imp-subspaces}: as long as trees are built to a maximum depth which is larger than the number $r$ of relevant variables, stopping the construction of the trees early does not prevent us from detecting the relevant variables.

\subsection{Bias due to binary trees and threshold selection}
\label{sec:bias:tree}

Previous developments from Chapter~\ref{ch:importances} studied variable importances for fully developed totally randomized trees and multiway exhaustive splits. In practice however, random forests usually rely on binary splits rather than on multiway splits. In terms of impurity, this results in additional and distinct information terms that were not previously accounted for, because i) the same variable can be reused several times along the same branch and ii) binary splits discretize the information contained in a variable, making variable importances dependent on the split threshold selection strategy. In the absence of a rigorous theoretical framework, we give in this section preliminary insights on variable importances for binary decision trees, hopefully shedding some light on their interpretation.

As a simplistic but illustrative example, let us consider a toy classification problem composed of a ternary input variable $X_1$ and a binary input variable $X_2$, both of them ordered. Let us further assume that input samples are uniformly drawn from Table~\ref{table:simulation:bias:tree}, which defines the output as $Y = X_1 < 1$ and $X_2$ as a copy of $Y$. With respect to $Y$, both variables are equally informative and one would expect their importances to be the same.
With totally randomized trees and exhaustive splits, two equiprobable decision trees can be built, as represented in Figure~\ref{fig:7:bias:trees:id3}. In both of them, splitting either on $X_1$ or on $X_2$ at the root node results in child nodes that are pure, hence halting the construction process. As expected, the measured variable importances are the same:
\begin{align*}
\text{Imp}(X_1) &= \frac{1}{2} I(X_1;Y) = \frac{1}{2} H(Y) = 0.459, \\
\text{Imp}(X_2) &= \frac{1}{2} I(X_2;Y) = \frac{1}{2} H(Y) = 0.459.
\end{align*}

\begin{table}
\centering
\begin{tabular}{| c | c c |}
\hline
$y$ & $x_1$ & $x_2$ \\
\hline \hline
0 & 0 & 0 \\
1 & 1 & 1 \\
1 & 2 & 1 \\
\hline
\end{tabular}
\caption{Toy problem. All $3$ possible samples are equiprobable.}
\label{table:simulation:bias:tree}
\end{table}

\begin{figure}
\centering
\includegraphics[scale=1.0]{figures/ch7_trees_id3.pdf}
\caption{Totally randomized trees built from Table~\ref{table:simulation:bias:tree}. Both decision trees are equiprobable. The resulting variable importances indicate that $X_1$ and $X_2$ are equally important.}
\label{fig:7:bias:trees:id3}
\end{figure}

\begin{figure}
\centering
\includegraphics[scale=1.0]{figures/ch7_trees_ets.pdf}\vspace{1cm}\\
\includegraphics[scale=1.0]{figures/ch7_trees_ets2.pdf}
\caption{Extremely randomized trees ($K=1$) built from Table~\ref{table:simulation:bias:tree}. From left to right, top to bottom, decision trees are respectively generated with probability $\tfrac{1}{8}$, $\tfrac{1}{8}$, $\tfrac{1}{4}$ and $\tfrac{1}{2}$. The resulting variable importances indicate that $X_2$ is now more important than $X_1$.}
\label{fig:7:bias:trees:ets}
\end{figure}

By contrast, using binary splits and ETs (for $K=1$) results in $4$ different decision trees, as represented in Figure~\ref{fig:7:bias:trees:ets}. When splitting on $X_1$ at the root node, two binary splits are now possible with equal probability: $t_L = X_1 \leq 0, t_R = X_1 > 0$ or $t_L = X_1 \leq 1, t_R = X_1 > 1$. In the former case, the resulting child nodes are pure, hence halting the construction process. In the latter case, the right child (corresponding to $X_1 > 1$) is pure, while the left child is not. For this node, recursive partitioning proceeds and a second binary split can be made either on $X_1$ or on $X_2$. Overall, the measured variable importances in these binary trees, in asymptotic conditions, are
\begin{align*}
\text{Imp}(X_1) &= \frac{2}{8} I(X_1 \leq 1;Y) + \frac{1}{8} P(X_1 \leq 1) I(X_1 \leq 0;Y|X_1 \leq 1) + \frac{1}{4} I(X_1 \leq 0;Y)\\
&= 0.375 \\
\text{Imp}(X_2) &= \frac{1}{2} I(X_2;Y) + \frac{1}{8} P(X_1 \leq 1) I(X_2;Y|X_1\leq 1)\\
&= 0.541,
\end{align*}
which makes them strictly different from the importances collected from multiway totally randomized trees. In particular, due to the binarization of the split variables, importances now account for conditioning sets that may include several values of the same variable. For instance, the importance of $X_2$ includes $I(X_2;Y|X_1\leq 1)$, which measures the mutual information between $X_2$ and $Y$ when $X_1$ is either equal to $0$ or $1$. With multiway exhaustive splits, these conditioning sets are not considered because branches correspond to single values only. Similarly, importances also account for binarized mutual information terms such as $I(X_1 \leq 1;Y)$. Again, these are not evaluated in totally randomized trees because of the multiway splits.
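These values can be checked mechanically. The short script below uses ad hoc plug-in estimates of the entropies and mutual informations over the three equiprobable samples of Table~\ref{table:simulation:bias:tree} (the helper functions are written for this example only and are not part of any library) and reproduces, up to rounding, both the multiway importances and the binary-split importances derived above.

\begin{verbatim}
import numpy as np
from collections import Counter

# The three equiprobable samples (y, x1, x2) of the toy problem.
samples = [(0, 0, 0), (1, 1, 1), (1, 2, 1)]

def entropy(values):
    # Plug-in Shannon entropy (in bits) of an empirical distribution.
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mi(xs, ys):
    # I(X; Y) = H(X) + H(Y) - H(X, Y), plug-in estimate.
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

y  = [s[0] for s in samples]
x1 = [s[1] for s in samples]
x2 = [s[2] for s in samples]

# Totally randomized trees with multiway splits:
print(0.5 * mi(x1, y), 0.5 * mi(x2, y))        # 0.459 and 0.459

# Binary splits (ETs with K = 1), following the decomposition above:
left = [s for s in samples if s[1] <= 1]       # samples with X1 <= 1
p_left = len(left) / len(samples)
i_x1le1 = mi([x <= 1 for x in x1], y)
i_x1le0 = mi([x <= 0 for x in x1], y)
i_x1le0_c = mi([s[1] <= 0 for s in left], [s[0] for s in left])
i_x2_c = mi([s[2] for s in left], [s[0] for s in left])

imp_x1 = 2/8 * i_x1le1 + 1/8 * p_left * i_x1le0_c + 1/4 * i_x1le0
imp_x2 = 1/2 * mi(x2, y) + 1/8 * p_left * i_x2_c
print(imp_x1, imp_x2)                          # approx. 0.376 and 0.542
\end{verbatim}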
Accordingly, threshold selection in binary splits has a dominant effect on variable importances since it controls how the original variables are binarized. In ETs, all intermediate thresholds $v$ are possible, resulting in a combinatorial number of additional impurity terms $I(X_j \leq v;Y|\cdot)$ and $I(\cdot;Y|X_j \leq v, \cdot)$. In RF, only the best threshold is selected locally at each node, resulting in fewer additional impurity terms $I(X_j < v^*;Y|\cdot)$ and $I(\cdot;Y|X_j < v^*,\cdot)$ (and masking all others).

\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/ch7_bias_trees.pdf}
\caption{Variable importances of $X_1$ and $X_2$ when increasing the cardinality of $X_1$. Note that, in this example, high cardinality does not necessarily mean high importance.}
\label{fig:7:bias:trees}
\end{figure}

As a last example, Figure~\ref{fig:7:bias:trees} illustrates variable importances for $X_1$ and $X_2$ when increasing the cardinality $L=|{\cal X}_1|$ of $X_1$ on the previous toy problem. That is, let us redefine the output as $Y = X_1 < \tfrac{L}{2}$, for ${\cal X}_1 = \{0, \dots, L-1\}$, while keeping $X_2$ as a binary variable defined as a copy of $Y$. Assuming that all input samples are equiprobable, the importances yielded by totally randomized trees with multiway splits remain the same as before. For ETs however, increasing $L$ induces even more new impurity terms that are now accounted for in the importances. As the figure shows, increasing the cardinality of $X_1$ makes its importance decrease, but also simultaneously makes the importance of $X_2$ increase. Indeed, splitting on $X_2$ always yields child nodes that are pure, which is unlikely to be the case when splitting randomly on $X_1$. For RF, only the best thresholds are selected, yielding in this case child nodes that are always pure, as for totally randomized trees. As such, their importances appear equal, as the figure confirms.

Overall, these additional effects due to binary splits make variable importances computed from classical random forests very difficult to interpret and understand, as soon as the data include many variables with different numbers of categories. While they can still be used to identify the most relevant variables, caution should be taken when interpreting the amplitude of importances. As our last example illustrates, they may be misleadingly low or high because of combinatorial effects, solely due to the possible ways variables are binarized through the implemented threshold selection mechanisms.

\begin{remark}{Encoding $L$-ary variables into binary variables}
Partitioning node samples with binary splits on $L$-ary input variables is equivalent to individually transforming each input variable $X_j$ into a set of $L_j-1$ binary input variables $\{X_j^{l} | l=1,\dots,L_j - 1\}$, each encoding one of the possible splits, and then partitioning node samples using one of these new variables. If we consider the binarized learning set ${\cal L}^\prime$, for which all $p$ input variables have been transformed into $\sum_j (L_j - 1)$ binary variables, then our theoretical framework for totally randomized trees and exhaustive splits can be applied and Theorem~\ref{thm:imp} could possibly be adapted.
The only difference lies in the way binary variables are drawn at random: instead of splitting randomly on one of the $\sum_j (L_j - 1)$ variables, first, one of the $p$ original variables is drawn uniformly at random; second, one of its $L_j - 1$ binary variables is selected for splitting the current node. In this setting, the importance of $X_j$ therefore amounts to the sum of importances $\sum_l \text{Imp}(X_j^l)$ of its binary variables.
\end{remark}

\section{Applications}
\label{sec:7:applications}

In spite of the various concurrent effects discussed earlier, variable importance measures have been used with success in a wide range of scientific applications. While they have proven to be a good proxy for assessing the relevance of input variables, providing helpful insights, too often variable importances are used as a black-box metric, hence under-exploiting the information they offer. As such, the examples presented in this section all demonstrate that any progress towards a better theoretical understanding of variable importances may help to further advance a wide range of research domains.

\subsection{Feature selection}

Over the past 10 or 20 years, typical machine learning problems have grown from a few tens of input variables to domains exploring hundreds of thousands of variables, often comprising noisy, irrelevant or redundant predictors, mixing both numerical and categorical variables and involving complex interaction effects. In this context, the feature selection problem consists in identifying a subset of the original input variables that are useful for building a good model~\citep{guyon:2003,liu:2005}. The benefits of reducing the dimensionality of the problem include speeding up machine learning algorithms, reducing measurement and storage requirements, improving the accuracy of the models and facilitating data visualization and understanding.

Because of the properties of random forests (good prediction performance, robustness to noisy variables, support of numerical and categorical variables and ability to model complex interactions), variable importances often provide an effective solution for the feature selection problem. The most straightforward solution consists in ranking variables according to their importance scores and keeping only the most important ones. Depending on the objective, the best subset of variables can be identified in several ways:
\begin{itemize}
\item When the goal is simply to reduce dimensionality because of speed and storage requirements, the simplest solution is to keep only those variables whose importance $\text{Imp}(X_j)$ is greater than some manually defined threshold $\alpha$.
\item If the goal is to improve accuracy, a good subset of variables can typically be found by tuning the threshold $\alpha$ so as to minimize some user-defined criterion (e.g., the zero-one loss in classification or the squared error loss in regression) for the model built on the subset $\{X_j | \text{Imp}(X_j) > \alpha \}$. At the price of more computational effort, even better performance can usually be reached by embedding variable importances into a dedicated iterative feature selection procedure, such as those described in \citep{guyon:2002,tuv:2009}.
\item In some other applications, the objective is to identify variables that are relevant, in order to better understand the underlying relations with the output $Y$.
In asymptotic conditions, this could be done by discarding all variables whose importance is null, as shown by Theorem~\ref{thm:relevant}. In a finite setting however, bias due to masking effects or impurity misestimations (as previously discussed in Section~\ref{sec:7:bias}) makes it more difficult to identify variables that are truly relevant since their importances might appear to be lower than those of irrelevant variables. Yet, several options are available for controlling and limiting false positives, such as stopping the construction process when impurity estimations become statistically unreliable (Section~\ref{sec:7:bias:high}), comparing the importances of the original input variables to artificial contrasts \citep{tuv:2006} or robustly controlling the conditional error rate through permutation tests \citep{saeys:2012}.
\end{itemize}

\subsection{Biomarker identification}

With the rise of \textit{-omics} data, random forests have become one of the most popular tools in the life sciences, providing practitioners with both high prediction accuracy and helpful insights about the importances of variables. In many cases, variable importance measures (either MDI or the permutation importance) are exploited to better understand complex interaction effects between the inputs and the output. Examples of successful applications include the identification of disease-associated SNPs in genome-wide association studies \citep{lunetta:2004,meng:2009,botta:2014}, the discovery of important genes and pathways from microarray gene-expression data \citep{pang:2006,chang:2008} and the identification of factors for predicting protein-protein interactions~\citep{qi:2006}. These examples are not isolated and dozens of further studies based on random forests could in fact be cited from the fields of genomics, metabolomics, proteomics or transcriptomics. Some of them are reviewed in \citep{geurts:2009,touw:2013,boulesteix:2012}.

In light of our study and discussion of variable importances, recommendations for biomarker identification depend on the exact objective of the application. If the goal is to identify all relevant variables, then totally randomized trees with multiway splits should be preferred. They indeed constitute the only method for which variable importances are unbiased, taking into account all possible interaction terms in a fair and exhaustive way. The only caveat is that a (very) large number of trees may be required before variable importances converge, making them computationally intensive to compute, even if the full randomization of the induction procedure actually makes individual decision trees quite fast to generate. By contrast, if the objective is only to identify a subset of good predictors for predicting the output (hence possibly omitting relevant but redundant variables), then non-totally randomized trees (e.g., standard RF or ETs) with $K$ chosen to maximize accuracy should be preferred. Even if some informative variables may be masked because of variable selection, a good subset of input variables should still be identifiable from importance scores. From a computational point of view, convergence should also be faster, making this approach more appealing when resources are limited. In all cases, impurity misestimations should be controlled by stopping the construction process early or collecting importances only for those nodes where impurity reductions can be considered reliable.
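In practice, the threshold-based selection outlined in the feature selection subsection, combined with the precautions just discussed, reduces to a few lines of code. The sketch below is only illustrative: it relies on scikit-learn's normalized MDI importances, and the function name, the default threshold and the default depth limit are arbitrary choices that should be adapted (e.g., tuned by cross-validation) rather than taken as recommendations.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_features(X, y, alpha=0.01, max_depth=5, n_estimators=500):
    """Return the indices of the variables whose MDI importance exceeds alpha.

    The depth limit is one simple way of avoiding impurity terms collected
    from very small node samples; both alpha and max_depth are illustrative
    defaults, not recommended values.
    """
    forest = RandomForestClassifier(
        n_estimators=n_estimators, max_depth=max_depth, random_state=0
    ).fit(X, y)
    importances = forest.feature_importances_
    return np.flatnonzero(importances > alpha), importances

# Example usage on a feature matrix X and labels y:
# selected, imp = select_features(X, y)
# X_reduced = X[:, selected]
\end{verbatim}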
Finally, whenever practical, variable importances should also be decomposed in order to better understand why some variables appear more important than others. In particular, studying the decomposition might help identify redundancy effects.

\subsection{Network inference}

Given a set of input variables $V = \{X_1,\dots,X_p\}$, the \textit{network inference} problem consists in the identification of conditional dependencies between variables. In genomics for example, regulatory network inference consists in the identification of interactions between genes or transcription factors on the basis of their expression level, in order to reconstruct a global network of interactions (as illustrated in Figure~\ref{fig:7:network} for \textit{Staphylococcus aureus}).

\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/ch7_network.png}
\caption{Gene regulatory network in \textit{Staphylococcus aureus}. Image from \citep{marbach:2012}.}
\label{fig:7:network}
\end{figure}

As proposed in the GENIE3 method~\citep{irrthum:2010}, the network inference problem can be solved generically by remarking that it can be decomposed into $p$ independent supervised learning problems. Formally, for each input variable $X_j$ (for $j=1,\dots,p$), GENIE3 considers the supervised sub-problem consisting in the prediction of the target variable $X_j$ from the remaining $p-1$ variables $V^{-j}$. Using random forests for solving each of these $p$ problems, variable importances can be derived and used as an indication of the (directed) putative link between the predictor variables $X_i$ (for $i\neq j$) and the target $X_j$. Intuitively, the larger the importance, the more likely the conditional dependency with $X_j$. Once all $p$ problems are solved, putative links are aggregated over all $p$ variables to provide a ranking of interactions from which the network can finally be reconstructed.

Again, our previous theoretical analysis calls for caution when plugging variable importances into network inference procedures. Indeed, since the goal is to retrieve all \textit{direct} interactions, importances might in fact not be as appropriate as desired since they also account for indirect or combined effects. In this case, a good heuristic is to intentionally induce masking effects (i.e., by setting $K>1$) in order to recover only the strongest (and presumably direct) interactions. Alternatively, a promising strategy might be to directly look for \textit{strongly} relevant variables, that is for variables $X_i$ such that $I(X_i;X_j|B) > 0$ for \textit{all} $B$. In both cases, impurity misestimation effects should be mitigated by using an adequate stopping criterion. When input variables are of different scales of measurement or vary in their number of categories, care should finally be taken when aggregating variable importances into a single ranking of interactions. As shown by Theorem~\ref{thm:sum-of-imp}, variable importances are in this case upper bounded by the entropy $H(X_j)$ of the target variable, which may vary greatly from one target to another. As such, variable importances are not directly comparable and should preferably be normalized (e.g., by $H(X_j)$) before their aggregation.
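The per-target decomposition at the heart of this approach is straightforward to sketch. The function below is a bare-bones, illustrative version of a GENIE3-style procedure built on scikit-learn forests; it is not the reference GENIE3 implementation, and in particular it applies none of the normalizations discussed above, which would be required before aggregating scores across heterogeneous targets.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def network_scores(expr, n_estimators=500, max_depth=None):
    """Rough GENIE3-style link scores from an (n_samples, p) data matrix.

    For each target j, a forest predicts X_j from the other p - 1 variables,
    and its importances are used as scores of the putative links i -> j.
    """
    n_samples, p = expr.shape
    scores = np.zeros((p, p))   # scores[i, j]: score of the link X_i -> X_j
    for j in range(p):
        predictors = [i for i in range(p) if i != j]
        forest = ExtraTreesRegressor(
            n_estimators=n_estimators, max_depth=max_depth, random_state=j
        ).fit(expr[:, predictors], expr[:, j])
        scores[predictors, j] = forest.feature_importances_
    return scores
\end{verbatim}

Ranking the off-diagonal entries of the returned matrix then gives a ranking of putative interactions from which a network can be thresholded.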
{ "alphanum_fraction": 0.755478388, "avg_line_length": 59.863238512, "ext": "tex", "hexsha": "62fe0be61414f775771b2e6992ff42dcc27560c3", "lang": "TeX", "max_forks_count": 153, "max_forks_repo_forks_event_max_datetime": "2021-12-26T10:13:51.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-14T03:46:42.000Z", "max_forks_repo_head_hexsha": "d2c5e0174d1a778be37a495083d756b2829160ec", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mathkann/understanding-random-forests", "max_forks_repo_path": "tex/chapters/chapter07.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "d2c5e0174d1a778be37a495083d756b2829160ec", "max_issues_repo_issues_event_max_datetime": "2016-06-29T05:43:41.000Z", "max_issues_repo_issues_event_min_datetime": "2016-06-29T05:43:41.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mathkann/understanding-random-forests", "max_issues_repo_path": "tex/chapters/chapter07.tex", "max_line_length": 350, "max_stars_count": 353, "max_stars_repo_head_hexsha": "d2c5e0174d1a778be37a495083d756b2829160ec", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "mathkann/understanding-random-forests", "max_stars_repo_path": "tex/chapters/chapter07.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-25T05:16:30.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-03T13:34:03.000Z", "num_tokens": 15034, "size": 54715 }
\section{Data Persistence} % pickle -- Python object serialization % copyreg -- Register pickle support functions % shelve -- Python object persistence % marshal -- Internal Python object serialization % dbm -- Interfaces to Unix “databases” % sqlite3 -- DB-API 2.0 interface for SQLite databases %
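A minimal usage sketch of \texttt{pickle}, the most general of the modules listed above (the others offer similar round trips at different levels of abstraction, from \texttt{shelve}'s persistent dictionaries to \texttt{sqlite3}'s relational storage):

\begin{verbatim}
import pickle

data = {"name": "Ada", "scores": [1, 2, 3]}

# Serialize to bytes and restore.
blob = pickle.dumps(data)
assert pickle.loads(blob) == data

# The same round trip through a file.
with open("data.pkl", "wb") as fh:
    pickle.dump(data, fh)
with open("data.pkl", "rb") as fh:
    assert pickle.load(fh) == data
\end{verbatim}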
{ "alphanum_fraction": 0.7058823529, "avg_line_length": 35.8888888889, "ext": "tex", "hexsha": "adcce778c22a017308253b0daf4215596b68fc8b", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-11-24T19:55:47.000Z", "max_forks_repo_forks_event_min_datetime": "2016-11-24T19:55:47.000Z", "max_forks_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "remigiusz-suwalski/programming-notes", "max_forks_repo_path": "src/python3/chapters/data-persistence.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "remigiusz-suwalski/programming-notes", "max_issues_repo_path": "src/python3/chapters/data-persistence.tex", "max_line_length": 58, "max_stars_count": 1, "max_stars_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "remigiusz-suwalski/programming-notes", "max_stars_repo_path": "src/python3/chapters/data-persistence.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-28T05:03:18.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-28T05:03:18.000Z", "num_tokens": 71, "size": 323 }
\chapter{\label{Appendix-A}Title of Appendix-A} Type in the details of Appendix-A. % ------------------------------------------------------------------------
{ "alphanum_fraction": 0.397515528, "avg_line_length": 26.8333333333, "ext": "tex", "hexsha": "ad87b1a462e03f1286a4da4f93844ea871d33e6e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d29ba241f3d913c03c2c72ce616a6b2a217097ed", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "pcputta/SIT_PROJECT_LATEX_TEMPLATE", "max_forks_repo_path": "MyReport/Appendix1/appendix1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d29ba241f3d913c03c2c72ce616a6b2a217097ed", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "pcputta/SIT_PROJECT_LATEX_TEMPLATE", "max_issues_repo_path": "MyReport/Appendix1/appendix1.tex", "max_line_length": 74, "max_stars_count": null, "max_stars_repo_head_hexsha": "d29ba241f3d913c03c2c72ce616a6b2a217097ed", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "pcputta/SIT_PROJECT_LATEX_TEMPLATE", "max_stars_repo_path": "MyReport/Appendix1/appendix1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 25, "size": 161 }
% Title:
% UiB presentation template
% ----------------------
% Description:
% A UiB presentation template. Based loosely on
% http://kapd.h.uib.no/profilmanual/02Maler/02bb1_PPTPresentasjon.html
%
% Creator: Tommy O.
% ----------------------
% Setup
% ----------------------
\documentclass[12pt, aspectratio=1610]{beamer}
% Options for aspectratio: 1610, 149, 54, 43 and 32, 169
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usecolortheme{beaver} % Decent options: beaver, rose, crane
\usepackage{listings}
\title{The Apriori Algorithm}
\subtitle{Association rule learning, \\the Apriori algorithm and \\its implementation}
\institute{Presentation: \texttt{github.com/tommyod/Efficient-Apriori/blob/master/docs/presentation/apriori.pdf}}
\date{\today}
\author{\texttt{tommyod @ github}}

% -------------------------------------------------------------------------
% Package imports
% -------------------------------------------------------------------------
\usepackage{etoolbox}
\usepackage{graphicx}
\usepackage{tikz}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{listings}
\usepackage[sharp]{easylist}
\usepackage{multicol}
\usepackage{tikz-cd}
\usepackage{booktabs}

% Set up colors to be used
\definecolor{purered}{RGB}{204,0,0}
\definecolor{titlered}{RGB}{229,78,71}
\definecolor{bggray}{RGB}{242,242,242}
\definecolor{bggraydark}{RGB}{217,217,217}

% Change the default colors
\setbeamercolor*{title}{bg=bggray,fg=titlered}
\AtBeginEnvironment{theorem}{%
\setbeamercolor{block title}{fg=titlered, bg=bggraydark}
\setbeamercolor{block body}{fg=black,bg=bggray}
}
\AtBeginEnvironment{proof}{%
\setbeamercolor{block title}{bg=bggraydark}
\setbeamercolor{block body}{fg=black,bg=bggray}
}
\AtBeginEnvironment{example}{%
\setbeamercolor{block title example}{bg=bggraydark}
\setbeamercolor{block body example}{fg=black,bg=bggray}
}
\AtBeginEnvironment{definition}{%
\setbeamercolor{block title}{bg=bggraydark}
\setbeamercolor{block body}{fg=black,bg=bggray}
}
\setbeamercolor{block title example}{bg=bggraydark}
\setbeamercolor{block body example}{fg=black,bg=bggray}
\setbeamercolor{block title}{bg=bggraydark}
\setbeamercolor{block body}{fg=black,bg=bggray}
\setbeamercolor{frametitle}{fg=titlered,bg=bggray}
\setbeamercolor{section in head/foot}{bg=black}
\setbeamercolor{author in head/foot}{bg=black}
\setbeamercolor{date in head/foot}{fg=titlered}

% Custom mathematics commands
\DeclareMathOperator{\C}{\mathbb{C}}
\DeclareMathOperator{\R}{\mathbb{R}}
\DeclareMathOperator{\Q}{\mathbb{Q}}
\DeclareMathOperator{\Z}{\mathbb{Z}}
\DeclareMathOperator{\N}{\mathbb{N}}

% Spacing for lists
\newcommand{\listSpace}{0.2em}

% Theorems, equations, definitions setup
\theoremstyle{plain}
\beamertemplatenavigationsymbolsempty
\setbeamerfont{page number in head/foot}{size=\small}
\setbeamertemplate{footline}[frame number]
\AtBeginSection[]{
\begin{frame}
\vfill
\centering
\begin{beamercolorbox}[sep=8pt,center,shadow=true,rounded=true]{title}
\usebeamerfont{title}\insertsectionhead\par%
\end{beamercolorbox}
\vfill
\end{frame}
}

% Default fixed font does not support bold face
\DeclareFixedFont{\ttb}{T1}{txtt}{bx}{n}{12} % for bold
\DeclareFixedFont{\ttm}{T1}{txtt}{m}{n}{12}  % for normal

% Custom colors
\usepackage{color}
\definecolor{deepblue}{rgb}{0,0,0.5}
\definecolor{deepred}{rgb}{0.6,0,0}
\definecolor{deepgreen}{rgb}{0,0.5,0}

\usepackage{listings}

%
------------------------------------------------------------------------- % Document start % ------------------------------------------------------------------------- \begin{document} \maketitle \begin{frame}{Table of contents} \tableofcontents \end{frame} % ------------------------------------------------------------------------- \section{A problem: learning association rules} \begin{frame}[fragile]{Motivating example} \begin{example}[Learning from transactions] Consider the following set of \emph{transactions}. \begin{align*} \{ \text{eggs}, \text{bread}, \text{jam}, \text{bacon} \} \\ \{ \text{apples} , \text{eggs}, \text{bacon} \} \\ \{ \text{bacon} , \text{bread} \} \\ \{ \text{ice cream} , \text{bread}, \text{bacon} \} \end{align*} What interesting information can we infer from this data? \\ Examples: \begin{easylist}[itemize] \ListProperties(Space=\listSpace, Space*=\listSpace) # The itemsets $\{ \text{bacon} , \text{bread} \}$ and $\{ \text{bacon}, \text{eggs} \}$ often appear in the transactions, with counts 3 and 2, respectively. # The rule $\{ \text{bread} \} \Rightarrow \{ \text{bacon} \}$ is meaningful in the sense that $P(\text{bacon} | \text{bread}) = 1$. \end{easylist} \end{example} \end{frame} \begin{frame}[fragile]{Formal problem statement} \begin{problem} Given a \emph{database} $T = \{t_1, t_2, \ldots, t_m\}$, where the $t_i$ are transactions, and a set of \emph{items} $I=\{i_1, i_2,\ldots,i_n\}$, learn meaningful rules $X \Rightarrow Y$, where $X, Y \subset I$. \end{problem} To accomplish this, we need measures of the \emph{meaningfulness} of association rules. \end{frame} \begin{frame}[fragile]{Properties of association rules} \begin{definition}[Support] The \emph{support} of an association rule $X \Rightarrow Y$ is the frequency of which $X \cup Y$ appears in the transactions $T$, i.e. $ \operatorname{support}(X \Rightarrow Y) := P(X, Y)$. \end{definition} \begin{easylist}[itemize] \ListProperties(Space=\listSpace, Space*=\listSpace) # No reason to distinguish between the support of an itemset, and the support of an association rule, i.e. $\operatorname{support}(X \Rightarrow Y) = \operatorname{support}(X \cup Y)$. # An important property of support is that $\operatorname{support}(\{ \text{eggs}, \text{bacon} \}) \leq \operatorname{support}(\{ \text{bacon} \})$. \end{easylist} \vspace*{1em} More formally, we observe that: \begin{theorem}[Downward closure property of sets] If $s \subset S$, then $\operatorname{support}(s) \geq \operatorname{support}(S)$. \end{theorem} \end{frame} \begin{frame}[fragile]{Properties of association rules} \small \begin{definition}[Confidence] The confidence of the association rule $X \Rightarrow Y$ is given by \begin{equation*} \operatorname{confidence}(X \Rightarrow Y) = P(Y | X) = \frac{P(X, Y) }{P(X) } = \frac{\operatorname{support}(X \Rightarrow Y)}{\operatorname{support}(X)}. \end{equation*} \end{definition} Notice the following interesting property. \begin{example} The confidence of $\{A, B\} \Rightarrow \{C\}$ will always be greater than, or equal to, $\{A\} \Rightarrow \{B, C\}$. 
By definition we have
\begin{equation*}
\frac{\operatorname{support}(\{A, B\} \Rightarrow \{C\})}{\operatorname{support}(\{A, B\})} \geq \frac{\operatorname{support}(\{A\} \Rightarrow \{B, C\})}{\operatorname{support}(\{A\})},
\end{equation*}
where the numerators are identical, and $\operatorname{support}(\{A\}) \geq \operatorname{support}(\{A, B\})$.
\end{example}
\normalsize
\end{frame}

\begin{frame}[fragile]{Properties of association rules}
\begin{definition}[Confidence]
The confidence of the association rule $X \Rightarrow Y$ is given by
\begin{equation*}
\operatorname{confidence}(X \Rightarrow Y) = P(Y | X) = \frac{P(X, Y) }{P(X) } = \frac{\operatorname{support}(X \Rightarrow Y)}{\operatorname{support}(X)}.
\end{equation*}
\end{definition}
\begin{theorem}[Downward closure property of rules]
Consider the rules $(X - y) \Rightarrow y$ and $(X - Y) \Rightarrow Y$, where $y \subset Y$. Then
\begin{equation*}
\operatorname{confidence} \left( (X - y) \Rightarrow y \right) \geq \operatorname{confidence} \left( (X - Y) \Rightarrow Y \right)
\end{equation*}
\end{theorem}
\textbf{Proof.} The numerators are identical, but the denominators satisfy $\operatorname{support}(X - y) \leq \operatorname{support}(X - Y)$ by the downward closure property of sets.
\end{frame}

\begin{frame}[fragile]{Examples of support and confidence}
\begin{example}[Support and confidence of a rule]
Consider again the following set of transactions.
\begin{align*}
\{ \text{eggs}, \text{bread}, \text{jam}, \text{bacon} \} \\
\{ \text{apples} , \text{eggs}, \text{bacon} \} \\
\{ \text{bacon} , \text{bread} \} \\
\{ \text{ice cream} , \text{bread}, \text{bacon} \}
\end{align*}
\vspace{-2em}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# The rule $\{ \text{bread} \} \Rightarrow \{ \text{bacon} \}$ has support $3/4$, confidence $1$.
## Support $3/4$ since $\{ \text{bread}, \text{bacon}\}$ appears in $3$ of the transactions.
## Confidence $1$ since $\{ \text{bread} \}$ appears 3 times, and in 3 of those $\{ \text{bacon} \}$ also appears.
\end{easylist}
\end{example}
\end{frame}

\begin{frame}[fragile]{A naive algorithm}
\begin{example}[Naive algorithm for learning rules]
for subsets of every size $k=1, \dots, |I|$ \\
\hspace*{1em} for every subset of size $k$ \\
\hspace*{2em} for every split of this subset into $\{ X \} \Rightarrow \{ Y \}$ \\
\hspace*{3em} compute support and confidence of the rule \\
\hspace*{3em} by counting the support in the transactions
\end{example}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# Fantastic starting point for an algorithm, since it (1) clearly terminates in finite time, (2) is simple to implement and (3) will run reasonably fast on small problem instances.
# Terribly slow on realistic problem instances, since it must check every possible itemset against every transaction.
\end{easylist}
\end{frame}

% -------------------------------------------------------------------------
\section{A solution: the Apriori algorithm}

\begin{frame}[fragile]{Overview of apriori}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# Split the problem into two distinct phases.
## Finding meaningful (high support) itemsets.
## Generating meaningful (high confidence) rules.
# \textbf{Phase 1}
## The user specifies a desired \emph{minimum support}.
## The algorithm exploits the downward closure property, i.e. $\operatorname{support}(S) \leq \operatorname{support}(s)$ if $s \subset S$.
### No reason to check $S$ if $s$ has low support.
## Bottom-up approach to subset generation.
# \textbf{Phase 2}
## The user specifies a desired \emph{minimum confidence}.
## Also exploits the above downward closure property.
## Bottom-up approach to rule generation.
\end{easylist}
\end{frame}

\begin{frame}[fragile]{Phase 1: Generating itemsets (example 1)}
\begin{example}[Itemset generation via Apriori]
Consider again the following set of transactions.
\vspace*{-1em}
\begin{align*}
\{ \text{eggs}, \text{bread}, \text{jam}, \text{bacon} \} \\
\{ \text{apples} , \text{eggs}, \text{bacon} \} \\
\{ \text{bacon} , \text{bread} \} \\
\{ \text{ice cream} , \text{bread}, \text{bacon} \}
\end{align*}
\vspace{-2em}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# We set the minimum support to 50 \%.
## Itemsets of size $1$ with the desired support are \\ $\{ \text{bacon} \}$, $\{ \text{bread} \}$ and $\{ \text{eggs} \}$. They are called \emph{large itemsets} of size 1.
## From these, we can form \\ $\{\text{bacon}, \text{bread}\}$, $\{\text{bacon}, \text{eggs}\}$ and $\{\text{bread}, \text{eggs}\}$. These are \emph{candidate itemsets} of size 2.
## Large itemsets of size $2$: $\{\text{bacon}, \text{bread}\}$ and $\{\text{bacon}, \text{eggs}\}$.
\end{easylist}
\end{example}
\end{frame}

\begin{frame}[fragile]{Phase 1: Generating itemsets (example 2)}
\begin{Example}
\begin{columns}
\begin{column}{0.25\textwidth}
\begin{center} \textbf{Transactions} \end{center}
\vspace{-1em}
\begin{align*}
\{ 1, 2, 7, 4 \} \\
\{ 2, 3, 4 \} \\
\{ 1, 6, 3 \} \\
\{ 1, 2, 4, 5 \}
\end{align*}
\end{column}
\begin{column}{0.75\textwidth} %%<--- here
\begin{center} \textbf{Iteration 1} \end{center}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# Running the algorithm with minimum support 50 \%.
# Candidate itemsets of size 1:
## $\{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{6\}, \{7\}$
# Large itemsets of size 1:
## $\{1\}, \{2\}, \{3\}, \{4\}$
\end{easylist}
\end{column}
\end{columns}
\end{Example}
\end{frame}

\begin{frame}[fragile]{Phase 1: Generating itemsets (example 2)}
\begin{Example}
\begin{columns}
\begin{column}{0.25\textwidth}
\begin{center} \textbf{Transactions} \end{center}
\vspace{-1em}
\begin{align*}
\{ 1, 2, 7, 4 \} \\
\{ 2, 3, 4 \} \\
\{ 1, 6, 3 \} \\
\{ 1, 2, 4, 5 \}
\end{align*}
\end{column}
\begin{column}{0.75\textwidth} %%<--- here
\begin{center} \textbf{Iteration 2} \end{center}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# Running the algorithm with minimum support 50 \%.
# Candidate itemsets of size 2:
## $\{1, 2\}, \{1, 3\}, \{1, 4\}, \{2, 3\}, \{2, 4\}, \{3, 4\}$
# Large itemsets of size 2:
## $\{1, 2\}, \{1, 4\}, \{2, 4\}$
\end{easylist}
\end{column}
\end{columns}
\end{Example}
\end{frame}

\begin{frame}[fragile]{Phase 1: Generating itemsets (example 2)}
\begin{Example}
\begin{columns}
\begin{column}{0.25\textwidth}
\begin{center} \textbf{Transactions} \end{center}
\vspace{-1em}
\begin{align*}
\{ 1, 2, 7, 4 \} \\
\{ 2, 3, 4 \} \\
\{ 1, 6, 3 \} \\
\{ 1, 2, 4, 5 \}
\end{align*}
\end{column}
\begin{column}{0.75\textwidth} %%<--- here
\begin{center} \textbf{Iteration 3} \end{center}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# Running the algorithm with minimum support 50 \%.
# Candidate itemsets of size 3:
## $\{1, 2, 4\}$
# Large itemsets of size 3:
## $\{1, 2, 4\}$
\end{easylist}
\end{column}
\end{columns}
\end{Example}
\end{frame}

\begin{frame}[fragile]{Phase 1: Pseudocode}
\textbf{Algorithm sketch} \\
Create $L_1$, a set of large itemsets of size 1 \\
$ $ \\
$j = 1$ \\
while $L_j$ is not empty do: \\
\hspace*{1em} create every candidate set $C_{j+ 1}$ from $L_j$ \\
\hspace*{1em} prune the candidates $C_{j+ 1}$ a priori (every subset must be in $L_j$) \\
$ $ \\
\hspace*{1em} for every transaction $t_i \in T$ do: \\
\hspace*{2em} count occurrences of every set in $C_{j+ 1}$ in $t_i$ \\
$ $ \\
\hspace*{1em} $j = j + 1$
$ $ \\
\vspace*{0.5em} \hrule \vspace*{0.5em}
Iterating through the transactions checking for every possible candidate in $C_{j+1}$ is expensive. Optimizations: choosing good data structures, pruning transactions.
\end{frame}

\begin{frame}[fragile]{Phase 1: Pseudocode - Details on candidates and pruning}
\hspace*{1em} create every candidate set $C_{j+ 1}$ from $L_j$ \\
\hspace*{1em} prune the candidates $C_{j+ 1}$ a priori (every subset must be in $L_j$) \\
\vspace*{0.5em} \hrule \vspace*{0.5em}
\textbf{Example} Given large itemsets of size 3 $\{1, 2, 3\}, \{1, 2, 4\}, \{1, 3, 4\}, \{1, 3, 5\}, \{2, 3, 4\}$.
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# Naive candidates are $\{2, 3, 4, 5\}, \{1, 3, 4, 5\}, \{1, 2, 4, 5\}, \{1, 2, 3, 5\}, \{1, 2, 3, 4\}$.
# Apriori-gen candidates are $\{1, 2, 3, 4\}, \{1, 3, 4, 5\}$. Generated efficiently by keeping the itemsets sorted.
# While the itemset $\{1, 2, 3, 4\}$ is kept, $\{1, 3, 4, 5\}$ is discarded since the subset $\{1, 4, 5\} \subset \{1, 3, 4, 5\}$ is not among the large itemsets of size 3.
\end{easylist}
\vfill
{\footnotesize The example above is from page 4 in the referenced paper.}
\end{frame}

\begin{frame}[fragile]{Phase 1: Pseudocode - Details on counting occurrences}
\hspace*{1em} for every transaction $t_i \in T$ do: \\
\hspace*{2em} count occurrences of every set in $C_{j+ 1}$ in $t_i$ \\
\vspace*{0.5em} \hrule \vspace*{0.5em}
\textbf{Example} Check if $A = \{1, 3, 7\}$ is a subset of $B = \{1, 2, 3, 5, 7, 9\}$.
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# A naive computation checks if every element of $A$ is found in $B$. This has computational complexity $\mathcal{O}(|A| |B|)$, where $|A|$ is the size of $A$.
# A better approach is to use binary search when $B$ is sorted. The computational complexity becomes $\mathcal{O}(|A| \log_2 |B|)$.
# Using hash tables (e.g. the built-in \texttt{set.issubset} in Python), the computational complexity is down to $\mathcal{O}(|A|)$.
\end{easylist}
For the given example, this resolves to approximately 18, 8 and 3 operations.
\end{frame}

\begin{frame}[fragile]{Phase 2: Building association rules (example)}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# In practice this step is much faster than Phase 1.
# The efficient algorithm exploits the downward closure property.
\end{easylist}
\begin{Example}
Consider rules made from $ABCD$. First the algorithm tries to move itemsets of size 1 to the right hand side, i.e. one of $\{\{ A\}, \{ B\}, \{ C\}, \{ D\} \}$.
\begin{align*}
BCD \Rightarrow A \quad & \quad ACD \Rightarrow B \\
ABD \Rightarrow C \quad & \quad ABC \Rightarrow D
\end{align*}
Assume that only $ABC \Rightarrow D$ and $ACD \Rightarrow B$ had high enough confidence.
Then the only rule created from $ABCD$ with a size 2 itemset on the right hand side worth considering is $AC \Rightarrow BD$. This is a direct result of the downward closure property.
\end{Example}
The rule generation is implemented as a recursive function, which is not very easy to explain in detail.
\end{frame}

\begin{frame}[fragile]{The Apriori algorithm on real data}
Consider the following data set, with 32,561 rows.
\vfill
{\footnotesize
\begin{tabular}{lllllll}
\toprule
Education & Marital-status & Relationship & Race & Sex & Income & Age \\
\midrule
Bachelors & Never-married & Not-in-family & White & Male & $\leq$50K & middle-aged \\
Bachelors & Married-civ-spouse & Husband & White & Male & $\leq$50K & old \\
HS-grad & Divorced & Not-in-family & White & Male & $\leq$50K & middle-aged \\
11th & Married-civ-spouse & Husband & Black & Male & $\leq$50K & old \\
Bachelors & Married-civ-spouse & Wife & Black & Female & $\leq$50K & young \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
Masters & Married-civ-spouse & Wife & White & Female & $\leq$50K & middle-aged \\
9th & Married-spouse-absent & Not-in-family & Black & Female & $\leq$50K & middle-aged \\
HS-grad & Married-civ-spouse & Husband & White & Male & $>$50K & old \\
Masters & Never-married & Not-in-family & White & Female & $>$50K & middle-aged \\
\bottomrule
\end{tabular}
}
\vfill
{\footnotesize The data may be found at \url{https://archive.ics.uci.edu/ml/datasets/adult}.}
\end{frame}

\begin{frame}[fragile]{The Apriori algorithm on real data}
Some rules are obvious in retrospect:
\begin{align*}
\{ \text{Husband} \} &\Rightarrow \{ \text{Male} \} \\
\{ \leq \text{50K}, \text{Husband} \} &\Rightarrow \{ \text{Male} \} \\
\{ \text{Husband}, \text{middle-aged} \} &\Rightarrow \{ \text{Male}, \text{Married-civ-spouse} \}
\end{align*}
Some are more interesting:
\begin{align*}
\{ \text{HS-grad} \} &\Rightarrow \{ \leq \text{50K} \} \\
\{ \leq \text{50K}, \text{young} \} &\Rightarrow \{ \text{Never-married} \} \\
\{ \text{Husband} \} &\Rightarrow \{ \text{Male}, \text{Married-civ-spouse}, \text{middle-aged} \}
\end{align*}
The meaningfulness of a rule may be measured by \emph{confidence}, \emph{lift} and \emph{conviction}.
\end{frame}

% -------------------------------------------------------------------------
\section{A practical matter: writing a Python implementation}

\begin{frame}[fragile]{Overview of workflow}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
%# Keep everything under version control using git
# Write simple functions first, i.e. the building blocks (e.g. pruning)
# Add doctests and unit tests (e.g. examples from paper)
# Implement a naive, but correct algorithm
# Implement an asymptotically fast algorithm
# Test the preceding two implementations against each other
%# Integrate with GitHub and Travis CI
%# Enforce PEP8 compliance and testing on Travis CI
# Optimize implementation by profiling the code (find bottlenecks)
%# Distribute on PyPI
%# Advertise the implementation and get some traffic
%# Commit oneself to 1-2 evenings per semester to maintain
\end{easylist}
\vfill
Understand $\to$ Naive algorithm $\to$ Asymptotically fast $\to$ Further optimizations
\end{frame}

\begin{frame}[fragile]{Software testing}
\begin{easylist}[itemize]
\ListProperties(Space=\listSpace, Space*=\listSpace)
# Unit tests
## Test a simple function $f(x_i) = y_i$ for known cases $i=1,2,\dots$
## Doubles as documentation when writing \emph{doctests} in Python
# Property tests
## Fix a property, i.e.
$f(a, b) = f(b, a)$ for every $a,b$ ## Generate many random inputs $a,b$ to make sure the property holds # Testing against \verb|R|, Wikipedia, etc ## Generate some inputs and test against the \verb|arules| package \end{easylist} \end{frame} \begin{frame}[fragile]{Software structure} \begin{center} \includegraphics[width=12cm]{figs/apriori_software_functions.pdf} \end{center} {\footnotesize Software found at \url{https://github.com/tommyod/Efficient-Apriori}.} \end{frame} % ------------------------------------------------------------------------- \section{Summary and references} \begin{frame}[fragile]{Summary and references} The Apriori algorithm discovers frequent itemsets in phase 1, and meaningful association rules in phase 2. Both phases employ clever bottom-up algorithms. By application of the downward closure property of itemsets (support) and rules (confidence), candidates may be pruned prior to expensive computations. \vfill \begin{easylist}[itemize] \ListProperties(Space=\listSpace, Space*=\listSpace) # The Python implementation ## \href{https://github.com/tommyod/Efficient-Apriori/}{\texttt{github.com/tommyod/Efficient-Apriori}} # The original paper ## Agrawal et al, \emph{Fast Algorithms for Mining Association Rules}, 1994 \url{http://www.cse.msu.edu/~cse960/Papers/MiningAssoc-AgrawalAS-VLDB94.pdf} \end{easylist} \end{frame} \end{document}
{ "alphanum_fraction": 0.6397134523, "avg_line_length": 38.1538461538, "ext": "tex", "hexsha": "bf718d4d34d7e84c69898d18ce6af9fa3406fa05", "lang": "TeX", "max_forks_count": 53, "max_forks_repo_forks_event_max_datetime": "2022-03-21T07:40:55.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-08T02:15:45.000Z", "max_forks_repo_head_hexsha": "7f3c7ea51d6a36204f00acdc6950c3eb8ea03ef7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CRJFisher/Efficient-Apriori", "max_forks_repo_path": "docs/presentation/apriori.tex", "max_issues_count": 26, "max_issues_repo_head_hexsha": "7f3c7ea51d6a36204f00acdc6950c3eb8ea03ef7", "max_issues_repo_issues_event_max_datetime": "2022-03-14T12:04:02.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-15T06:16:18.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CRJFisher/Efficient-Apriori", "max_issues_repo_path": "docs/presentation/apriori.tex", "max_line_length": 217, "max_stars_count": 220, "max_stars_repo_head_hexsha": "7f3c7ea51d6a36204f00acdc6950c3eb8ea03ef7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CRJFisher/Efficient-Apriori", "max_stars_repo_path": "docs/presentation/apriori.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-27T20:29:36.000Z", "max_stars_repo_stars_event_min_datetime": "2018-10-17T08:37:08.000Z", "num_tokens": 7826, "size": 23312 }
\documentclass[9pt,twocolumn,twoside,lineno]{pnas-new} % Use the lineno option to display guide line numbers if required. % Note that the use of elements such as single-column equations % may affect the guide line number alignment. \templatetype{pnasresearcharticle} % Choose template % {pnasresearcharticle} = Template for a two-column research article % {pnasmathematics} = Template for a one-column mathematics article % {pnasinvited} = Template for a PNAS invited submission \usepackage{subcaption} \usepackage{caption} \title{Hack Weeks as a model for Data Science Education and Collaboration} % Use letters for affiliations, numbers to show equal authorship (if applicable) and to indicate the corresponding author \author[a,b,c,1]{Daniela Huppenkothen} \author[d,e]{Anthony Arendt} \author[b,a,f,g]{David W. Hogg} \author[h]{Karthik Ram} \author[e]{Jake VanderPlas} \author[e]{Ariel Rokem} \affil[a]{Center for Data Science, New York University, 65 5th Avenue, 7th Floor, New York, NY 10003, USA} \affil[b]{Center for Cosmology and Particle Physics, New York University, 726 Broadway, 10th Floor, New York, NY 10003, USA} \affil[c]{DIRAC Institute, Department of Astronomy, University of Washington, 3910 15th Ave NE, Seattle, WA 98195, USA} \affil[d]{Polar Science Center/Applied Physics Laboratory, University of Washington, 1013 NE 40th Street, Box 355640, Seattle, WA 98105-6698} \affil[e]{The University of Washington eScience Institute, The WRF Data Science Studio, Physics/Astronomy Tower, 6th Floor, 3910 15th Ave NE, Campus Box 351570, University of Washington, Seattle, WA 98105, USA} \affil[f]{Max-Planck-Institut f\"ur Astronomie, K\"onigstuhl 17, D-69117 Heidelberg} \affil[g]{Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA} \affil[h]{Berkeley Institute for Data Science \& Berkeley Initiative in Global Change Biology, University of California, Berkeley, Berkeley CA 94720} % Please give the surname of the lead author for the running footer \leadauthor{Huppenkothen} % Please add here a significance statement to explain the relevance of your work \significancestatement{As scientific disciplines grapple with more data sets of rapidly increasing complexity and size, new approaches are urgently required to introduce new statistical and computational tools into research communities and improve the cross-disciplinary exchange of ideas. In this paper, we introduce a new type of scientific workshop, called a \textit{hack week}, which allows for fast dissemination of new methodologies into scientific communities, and fosters exchange and collaboration within and between disciplines. We present implementations of this concept in astronomy, neuroscience and geoscience, and show that hack weeks produce positive learning outcomes, foster lasting collaborations, yield scientific results and improve attitudes toward open science and reproducibility.} % Please include corresponding author, author contribution and author declaration information \authorcontributions{The paper was conceptualized by Jake VanderPlas, David W. Hogg and Karthik Ram. Daniela Huppenkothen, Ariel Rokem and Anthony Arendt performed the studies and the data analysis. All authors contributed to the final manuscript.} \authordeclaration{The authors declare no conflicts of interest.} %\equalauthors{\textsuperscript{1}A.O.(Author One) and A.T. (Author Two) contributed equally to this work (remove if not applicable).} \correspondingauthor{\textsuperscript{1}To whom correspondence should be addressed. 
E-mail: [email protected]} % Keywords are not mandatory, but authors are strongly encouraged to provide them. If provided, please include two to five keywords, separated by the pipe symbol, e.g: \keywords{Data science $|$ Education $|$ Reproducibility $|$ Interdisciplinary Collaboration $|$ Astronomy $|$ Neuroscience $|$ Geosciences} %\begin{abstract} %Please provide an abstract of no more than 250 words in a single paragraph. Abstracts should explain to the general reader the major contributions of the article. References in the abstract must be cited in full within the abstract itself and cited in the text. %\end{abstract} \input{abstract} \dates{This manuscript was compiled on \today} \doi{\url{www.pnas.org/cgi/doi/10.1073/pnas.XXXXXXXXXX}} \begin{document} % Optional adjustment to line up main text (after abstract) of first page with line numbers, when using both lineno and twocolumn options. % You should only change this length when you've finalised the article contents. \verticaladjustment{-2pt} \maketitle \thispagestyle{firststyle} \ifthenelse{\boolean{shortarticle}}{\ifthenelse{\boolean{singlecolumn}}{\abscontentformatted}{\abscontent}}{} \input{intro} \input{what} \input{why} \input{participants} \input{themes} \input{design} \input{results} \input{conclusions} % If your first paragraph (i.e. with the \dropcap) contains a list environment (quote, quotation, theorem, definition, enumerate, itemize...), the line after the list may have some extra indentation. If this is the case, add \parshape=0 to the end of the list environment. %\dropcap{T}his PNAS journal template is provided to help you write your work in the correct journal format. Instructions for use are provided below. %Note: please start your introduction without including the word ``Introduction'' as a section heading (except for math articles in the Physical Sciences section); this heading is implied in the first paragraphs. %\section*{Guide to using this template on Overleaf} %Please note that whilst this template provides a preview of the typeset manuscript for submission, to help in this preparation, it will not necessarily be the final publication layout. For more detailed information please see the \href{http://www.pnas.org/site/authors/format.xhtml}{PNAS Information for Authors}. %If you have a question while using this template on Overleaf, please use the help menu (``?'') on the top bar to search for \href{https://www.overleaf.com/help}{help and tutorials}. You can also \href{https://www.overleaf.com/contact}{contact the Overleaf support team} at any time with specific questions about your manuscript or feedback on the template. %\subsection*{Author Affiliations} %Include department, institution, and complete address, with the ZIP/postal code, for each author. Use lower case letters to match authors with institutions, as shown in the example. Authors with an ORCID ID may supply this information at submission. %\subsection*{Submitting Manuscripts} %All authors must submit their articles at \href{http://www.pnascentral.org/cgi-bin/main.plex}{PNAScentral}. If you are using Overleaf to write your article, you can use the ``Submit to PNAS'' option in the top bar of the editor window. %\subsection*{Format} %Many authors find it useful to organize their manuscripts with the following order of sections; Title, Author Affiliation, Keywords, Abstract, Significance Statement, Results, Discussion, Materials and methods, Acknowledgments, and References. Other orders and headings are permitted. 
\matmethods{
We performed post-attendance surveys for AHW, GHW and NHW in 2016 and 2017.
All three surveys shared a common set of questions about attitudes towards the workshop, towards open science and reproducibility, and about the participants' own skills in statistical and computational methods.
The NHW and GHW surveys and the AHW 2017 survey were administered on site on the last day of the workshop; AHW participants in 2016 were e-mailed immediately after the workshop, with reminders after several weeks.
All responses were collected within four weeks of the end of the workshop.
The experimental procedures were approved by the Institutional Review Boards at UW, NYU and UC Berkeley.
All participants gave their informed consent.
Response rates for NHW (2016: 41 responses; 2017: 45 responses) and GHW (2016: 42 responses; 2017: 41 responses) were 100\% in both years; the response rate for AHW was 71\% (35 out of 49) in 2016 and 82\% (37 out of 45) in 2017.
The lower response rates for AHW can be explained by the generally lower rate of response to the time-delayed survey in 2016.
In 2017, a number of participants did not attend the full week, and thus several attendees were not present on the last day when the survey was administered.
For the AHW surveys, we checked whether the demographics of the respondents differed significantly from those of the attendees overall in three relevant categories (career stage, racial/ethnic background and gender identity) and found no significant difference in any of the categories except for career stage in 2017: graduate students were underrepresented (27\% of respondents compared to 41\% of attendees), while undergraduate students were overrepresented (25\% of respondents compared to 12\% of attendees).
Because both undergraduate and graduate students are at the beginning of their careers, and AHW is likely their first exposure to many of the concepts around data science and open science, we believe that this discrepancy has little impact on the overall results presented here.
Participants were asked to respond to statements regarding these topics using a six-point Likert-type scale.
All responses were recorded anonymously.
No responses were discarded, and no pre-processing was performed on the data.
We tested for correlations between categorical variables using a standard $\chi^2$ test and computed effect sizes via a bias-corrected version of Cram\'{e}r's V \citep{bergsma2013}.
Only $p$-values with $p < 0.0018$ (equivalent to $p < 0.05$ corrected for $N=27$ comparisons) were reported as potential candidates.
The full analysis procedure is available online\footnote{See the repository: \url{https://github.com/uwescience/HackWeek-Writeup}}.
}

\showmatmethods % Display the Materials and Methods section

\acknow{The authors would like to thank Laura Nor\'{e}n (NYU) for help on ethics and IRB, Stuart Geiger for helping to formulate the questionnaires that served as the basis for the results presented here, Brittany Fiore-Gartland, Laura Nor\'{e}n and Jason Yeatman for comments on the manuscript, and Tal Yarkoni for advice regarding automated selection procedures.
This work was partially supported by the Moore-Sloan Data Science Environments at UC Berkeley, New York University, and the University of Washington.
Neuro hack week is supported through a grant from the National Institute of Mental Health (1R25MH112480).
Daniela Huppenkothen is partially supported by the James Arthur Postdoctoral Fellowship at NYU, and acknowledges support from the DIRAC Institute in the Department of Astronomy at the University of Washington. The DIRAC Institute is supported through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences, and the Washington Research Foundation.} \showacknow % Display the acknowledgments section % \pnasbreak splits and balances the columns before the references. % If you see unexpected formatting errors, try commenting out this line % as it can run into problems with floats and footnotes on the final page. \vspace{-1cm} \pnasbreak % Bibliography \bibliography{paper} \end{document}
{ "alphanum_fraction": 0.7879334865, "avg_line_length": 80.1842105263, "ext": "tex", "hexsha": "afe1112ebcb8c2d69bf96d290a91324cd7b5909d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "450d5e11334a11a2502bb48dac7c4f77b932cf82", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "arokem/HackWeek-Writeup", "max_forks_repo_path": "paper-pnas.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "450d5e11334a11a2502bb48dac7c4f77b932cf82", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "arokem/HackWeek-Writeup", "max_issues_repo_path": "paper-pnas.tex", "max_line_length": 1324, "max_stars_count": null, "max_stars_repo_head_hexsha": "450d5e11334a11a2502bb48dac7c4f77b932cf82", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "arokem/HackWeek-Writeup", "max_stars_repo_path": "paper-pnas.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4277, "size": 18282 }
\section{Appendix A}
\label{sec:appendix-a}

\subsection{Git Internals}

\code{Git} is a simple key-value data store that manages three main types of objects, \code{blob}, \code{tree} and \code{commit}, plus one additional type, \code{tag}.
Git stores content in a manner similar to a UNIX filesystem, but a bit simplified.
All content is stored as tree and blob objects, with trees corresponding to UNIX directory entries and blobs corresponding more or less to inodes or file contents.
Conceptually, the data that Git stores looks something like this:\\
\begin{figure}[H]
  \begin{center}
  \begin{tikzpicture}
    \node[rectangle,rounded corners, draw=green, fill=green, scale=2] (tree) at(0,0){tree};
    \node[rectangle,rounded corners, draw=gray, fill=gray, scale=2] (Rakefile-blob) at(0,-3){blob};
    \node[rectangle,rounded corners, draw=gray, fill=gray, scale=2] (README-blob) at(-3,-3){blob};
    \node[rectangle,rounded corners, draw=green, fill=green, scale=2] (lib-tree) at(3,-3){tree};
    \node[rectangle,rounded corners, draw=gray, fill=gray, scale=2] (simplegit-blob) at(3,-6){blob};
    \node[rectangle,rounded corners, draw=white, fill=white] (Rakefile) at(0,-1.5){Rakefile};
    \node[rectangle,rounded corners, draw=white, fill=white] (README) at(-2,-1.5){README};
    \node[rectangle,rounded corners, draw=white, fill=white] (lib) at(1.5,-1.5){lib};
    \node[rectangle,rounded corners, draw=white, fill=white] (simplegit) at(3,-4.5){simplegit.rb};
    \draw (tree) -- (Rakefile);
    \draw [->] (Rakefile) -- (Rakefile-blob);
    \draw (tree) to (README);
    \draw [->] (README) to (README-blob);
    \draw (tree) -- (lib);
    \draw [->] (lib) -- (lib-tree);
    \draw (lib-tree) -- (simplegit);
    \draw [->] (simplegit) -- (simplegit-blob);
  \end{tikzpicture}
  \end{center}
  \caption{Simple version of the Git data model}
  \label{fig:git-tree-sample}
\end{figure}
Blobs store file content without file names; trees store file names and allow a group of files (and other trees) to be stored together.
Top-level trees represent the different snapshots of a project that you want to track.
Commit objects point to those top-level trees (snapshots) and store information about who saved a snapshot, when it was saved, and why.
To order snapshots, a commit object also points to its parent (previous) commit, if any.
Snapshots can therefore be reached and tracked via commit objects.
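To make the key-value idea concrete, the following minimal sketch (Python, illustrative only; the function name \code{loose\_object} is ours) shows how Git derives the key of an object from its content: a header with the type and size is prepended, the SHA-1 of the result becomes the object name, and the zlib-compressed bytes are what ends up on disk. This mirrors what \code{git hash-object} computes; the on-disk layout is described in the ``Git file storage internals'' subsection below.
\begin{lstlisting}[basicstyle=\ttfamily]
import hashlib
import zlib

def loose_object(obj_type: str, content: bytes):
    """Name and encode a Git loose object (illustrative sketch)."""
    # A loose object is "<type> <size>\0<content>",
    # e.g. "blob 10\0version 1\n".
    store = obj_type.encode() + b" " + str(len(content)).encode() + b"\0" + content
    sha1 = hashlib.sha1(store).hexdigest()               # the key (object name)
    path = ".git/objects/" + sha1[:2] + "/" + sha1[2:]   # fan-out directory layout
    return sha1, path, zlib.compress(store)              # bytes written to disk

sha1, path, data = loose_object("blob", b"version 1\n")
print(sha1, path)
\end{lstlisting}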
\begin{figure}[H]
  \begin{center}
  \begin{tikzpicture}
    \node[rectangle,rounded corners, draw=white, fill=orange, scale=1.5] (3-commit) at(0,0){third commit};
    \node[rectangle,rounded corners, draw=white, fill=orange, scale=1.5] (2-commit) at(0,-3){second commit};
    \node[rectangle,rounded corners, draw=white, fill=orange, scale=1.5] (1-commit) at(0,-6){first commit};
    \draw [->] (3-commit) -- (2-commit);
    \draw [->] (2-commit) -- (1-commit);
    \node[rectangle,rounded corners, draw=white, fill=green, scale=1.5] (3-tree) at(4,0){tree 3c4e};
    \draw [->] (3-commit) -- (3-tree);
    \node[rectangle,rounded corners, draw=white, fill=green, scale=1.5] (2-tree) at(4,-3){tree 0155};
    \draw [->] (2-commit) -- (2-tree);
    \node[rectangle,rounded corners, draw=white, fill=green, scale=1.5] (1-tree) at(4,-6){tree d832};
    \draw [->] (1-commit) -- (1-tree);
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (3-bak) at(7,1){bak};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (3-new) at(7,0){new.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (3-test) at(7,-1){test.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (2-test) at(7,-2.5){test.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (2-new) at(7,-3.5){new.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (1-test) at(7,-6){test.txt};
    \draw (3-tree) -- (3-bak);
    \draw (3-tree) -- (3-new);
    \draw (3-tree) -- (3-test);
    \draw (2-tree) -- (2-test);
    \draw (2-tree) -- (2-new);
    \draw (1-tree) -- (1-test);
    \node[rectangle,rounded corners, draw=white, fill=gray, scale=1.5] (version-2) at(10,-1.75){"version 2"};
    \node[rectangle,rounded corners, draw=white, fill=gray, scale=1.5] (new-file) at(10,-4){"new file"};
    \node[rectangle,rounded corners, draw=white, fill=gray, scale=1.5] (version-1) at(10,-6){"version 1"};
    \draw [->] (1-test) -- (version-1);
    \draw [->] (2-new) -- (new-file);
    \draw [->] (2-test) -- (version-2);
    \draw [->] (3-test) -- (version-2);
    \draw (3-new) to (11.7, 0);
    \draw [->] (11.7, 0) |- (new-file);
    \draw (3-bak) -| (12, -7);
    \draw (12, -7) -- (7, -7);
    \draw [->] (7, -7) -- (1-tree);
  \end{tikzpicture}
  \end{center}
  \caption{All the reachable objects in the Git directory}
  \label{fig:commit-objects}
\end{figure}

{\large The concepts listed below make our interaction with these objects easier.}

\begin{description}
\item[Reference] -- A file in which you can store the SHA-1 value of a commit under a simple name, so that you can use that name rather than the raw SHA-1 value.
\item[Branch] -- A simple pointer or reference to the head of a line of work (the latest commit), stored in .git/refs/heads. The branch name is mapped to the SHA-1 of the latest commit on that branch.
\begin{figure}[H]
  \begin{center}
  \adjustbox{scale=0.8,center}{%
  \begin{tikzpicture}
    \node[rectangle,rounded corners, draw=white, fill=red, scale=1.5] (master) at(-5,0){refs/heads/master};
    \node[rectangle,rounded corners, draw=white, fill=red, scale=1.5] (test) at(-5,-3){refs/heads/test};
    \node[rectangle,rounded corners, draw=white, fill=orange, scale=1.5] (3-commit) at(0,0){third commit};
    \node[rectangle,rounded corners, draw=white, fill=orange, scale=1.5] (2-commit) at(0,-3){second commit};
    \node[rectangle,rounded corners, draw=white, fill=orange, scale=1.5] (1-commit) at(0,-6){first commit};
    \draw [->] (master) -- (3-commit);
    \draw [->] (test) -- (2-commit);
    \draw [->] (3-commit) -- (2-commit);
    \draw [->] (2-commit) -- (1-commit);
    \node[rectangle,rounded corners, draw=white, fill=green, scale=1.5] (3-tree) at(4,0){tree 3c4e};
    \draw [->] (3-commit) -- (3-tree);
    \node[rectangle,rounded corners, draw=white, fill=green, scale=1.5] (2-tree) at(4,-3){tree 0155};
    \draw [->] (2-commit) -- (2-tree);
    \node[rectangle,rounded corners, draw=white, fill=green, scale=1.5] (1-tree) at(4,-6){tree d832};
    \draw [->] (1-commit) -- (1-tree);
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (3-bak) at(7,1){bak};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (3-new) at(7,0){new.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (3-test) at(7,-1){test.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (2-test) at(7,-2.5){test.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (2-new) at(7,-3.5){new.txt};
    \node[rectangle,rounded corners, draw=white, fill=white, scale=1.5] (1-test) at(7,-6){test.txt};
    \draw (3-tree) -- (3-bak);
    \draw (3-tree) -- (3-new);
    \draw (3-tree) -- (3-test);
    \draw (2-tree) -- (2-test);
    \draw (2-tree) -- (2-new);
    \draw (1-tree) -- (1-test);
    \node[rectangle,rounded corners, draw=white, fill=gray, scale=1.5] (version-2) at(10,-1.75){"version 2"};
    \node[rectangle,rounded corners, draw=white, fill=gray, scale=1.5] (new-file) at(10,-4){"new file"};
    \node[rectangle,rounded corners, draw=white, fill=gray, scale=1.5] (version-1) at(10,-6){"version 1"};
    \draw [->] (1-test) -- (version-1);
    \draw [->] (2-new) -- (new-file);
    \draw [->] (2-test) -- (version-2);
    \draw [->] (3-test) -- (version-2);
    \draw (3-new) to (11.7, 0);
    \draw [->] (11.7, 0) |- (new-file);
    \draw (3-bak) -| (12, -7);
    \draw (12, -7) -- (7, -7);
    \draw [->] (7, -7) -- (1-tree);
  \end{tikzpicture}
  }
  \end{center}
  \caption{Git directory objects with branch head references included}
  \label{fig:reference}
\end{figure}
When you run a command like \code{git branch <branch>}, Git basically runs the \code{update-ref} command to add the SHA-1 of the last commit of the branch you are on to whatever new reference you want to create.
Creating a branch therefore just creates a reference to some commit, and deleting a branch only deletes a reference.
After a branch is deleted, all of its objects are still available via their SHAs; however, objects that have become unreachable may later be deleted by other Git operations.
\item[HEAD] -- Usually the HEAD file is a symbolic reference to the branch you're currently on.
By symbolic reference, we mean that, unlike a normal reference, it contains a pointer to another reference.\\
If you look at the file, you'll normally see something like this:\\
\\
\code{\$ cat .git/HEAD}\\
\code{ref: refs/heads/master}\\
\\
If you run \code{git checkout test}, Git updates the file to look like this:\\
\\
\code{\$ cat .git/HEAD}\\
\code{ref: refs/heads/test}\\
When you run \code{git commit}, Git creates the commit object, specifying as its parent whatever SHA-1 value the reference in HEAD points to.
This commit also becomes the new head of the current branch: refs/heads/<branch-name> is updated to map to the SHA-1 of this new commit.
When you run \code{git reset HEAD~1}, Git likewise only updates refs/heads/<branch-name>, setting it to the SHA-1 of the parent of the current head commit.
\item[Tag (lightweight)] -- Like a branch reference, but it never moves: it always points to the same commit and simply gives it a friendlier name.\\
You can create, update and delete such a tag without affecting any objects; it is only a reference.
\item[Tag (annotated)] -- A tag object, which points to a commit (or any other object) and contains metadata, together with a reference to this tag object.
\end{description}

\subsubsection{Git file storage internals}
\textbullet\textbf{\large{ loose object storage}}\\
As we mentioned, a Git repository takes care of several types of objects: \code{blob}, \code{tree}, \code{commit} and \code{tag}. Let's take a look at the details of object storage.\\
By default, Git writes objects to the directory \code{GIT\_DIR\footnote{the .git directory}/objects}. Each object is named by a SHA-1 value, for example \emph{\code{b98191cf971f2418e42877410a6c40fc112a0a93}};\\
its storage path will be
\begin{center}
\emph{\code{GIT\_DIR/objects/b9/8191cf971f2418e42877410a6c40fc112a0a93}}
\end{center}
The directories under \code{GIT\_DIR/objects} are named by the first two hexadecimal digits (\code{SHA[0,2]}) of the objects' SHA-1 values. This avoids having too many files in a single directory, because some filesystems impose a maximum number of links per directory; ext3, for example, has the definition:
\begin{flushleft}
\code{include/linux/ext3\_fs.h:\#define EXT3\_LINK\_MAX 32000}
\end{flushleft}
An object file consists of three portions:
\begin{description}
\item[type] can be one of `blob', `tree', `commit', `tag'
\item[size] the number of bytes of the content
\item[content] the content of the object
\end{description}
and the object file is compressed using zlib:

\begin{tikzpicture}
\node at (0,0.25) {binary};
\node at (4,0.25) {\code{7370 6163 657d 0a5c ...}};
\draw (1,0) rectangle (7,0.5);
\node at (3.1,1.25) {compress};
\draw[-latex] (2,1.5) -- (2,1);
\node at (0,2.25) {buffer};
\node at (1.5,2.25) {type};
\node at (3.5,2.25) {size};
\node at (6,2.25) {content};
\draw (1,2) rectangle (2.95,2.5);
\draw (3,2) rectangle (4.95,2.5);
\draw (5,2) rectangle (10,2.5);
\end{tikzpicture}

The compression level can be set by `\textbf{\code{core.compression}}' or `\textbf{\code{core.loosecompression}}'.
\\
\textbullet\textbf{\large{ pack file storage}}\\
When we run \emph{\code{git gc}} or \emph{\code{git repack}}, pack files with the suffix \code{.pack} may be created under the directory \emph{\code{GIT\_DIR/objects/pack}}.
A pack is a collection of objects, individually compressed, with delta compression applied, stored in a single file, with an associated index file.
Packs are used to reduce the load on mirror systems, backup engines, disk storage, etc.
\begin{lstlisting}[basicstyle=\ttfamily]
$ tree .git/objects/pack
.git/objects/pack
|-- pack-24fcb83682e5a2848ef7bd00a9eda0bff1a372fc.idx
|-- pack-24fcb83682e5a2848ef7bd00a9eda0bff1a372fc.pack
|-- pack-915d8b4ae03b03179912c589cee932e5a990b7f0.idx
|-- pack-915d8b4ae03b03179912c589cee932e5a990b7f0.pack
|-- ...
\end{lstlisting}
Conceptually there are only four object types in a pack file: commit, tree, tag and blob. However, to save space, an object can be stored as a ``delta'' against another ``base'' object. These representations are assigned the additional types \code{ofs-delta} and \code{ref-delta}.
\begin{description}
\item[delta object] Both ofs-delta and ref-delta store the ``delta'' to be applied to another object (called the `base object') to reconstruct the object. The difference between them is that ref-delta directly encodes the 20-byte base object name, whereas ofs-delta, when the base object is in the same pack, encodes the offset of the base object in the pack instead.
\item[base object] The base object can itself be deltified if it is in the same pack. A ref-delta can also refer to an object outside the pack; when stored on disk, however, the pack should be self-contained to avoid cyclic dependencies.
\end{description}
A repository usually contains many versions of a single file; a new pack file stores the latest version's content as the base object, computes deltas for the earlier versions, and stores them as delta objects referring to that base:

\begin{tikzpicture}
% loose objects
\filldraw[fill=yellow, draw=white] (3.75,2.5) rectangle (8.5,5.5);
\node at(0,5){v3};
\node[rectangle,rounded corners, draw=gray,] (v3) at(2,5){README.md};
\node[rectangle,rounded corners, draw=gray,] (v3obj) at(6,5){.git/objects/77/652063...};
\node at(0,4){v2};
\node[rectangle,rounded corners, draw=gray,] (v2) at(2,4){README.md};
\node[rectangle,rounded corners, draw=gray,] (v2obj) at(6,4){.git/objects/6e/6f7420...};
\node at(0,3){v1};
\node[rectangle,rounded corners, draw=gray,] (v1) at(2,3){README.md};
\node[rectangle,rounded corners, draw=gray,] (v1obj) at(6,3){.git/objects/28/753a20...};
\draw (v3) -- (v3obj);
\draw (v2) -- (v2obj);
\draw (v1) -- (v1obj);
% arrow
\draw (8.5,2) -- (8.5,0.35);
\draw[-latex] (8.5,0.35) -- (6.5,0.35);
\node at(7,1.5){pack-objects};
% pack file
\filldraw[fill=yellow, draw=white] (0.8,0) rectangle (6.2,0.75);
\node at (0,0.35) {pack file};
\node (base) at (2.5,0.35) {base object};
\node (deltav2) at (4.5,0.35) {delta};
\node (deltav1) at (5.5,0.35) {delta};
\draw (1,0.1) rectangle (3.95,0.6);
\draw (4,0.1) rectangle (4.95,0.6);
\draw (5,0.1) rectangle (5.95,0.6);
\draw[dashed, ultra thick, -latex] (v3obj) -- (base);
\draw[dashed, very thick, -latex] (v2obj) -- (deltav2);
\draw[dashed, -latex] (v1obj) -- (deltav1);
\end{tikzpicture}

Objects contained in the pack are usually compressed using zlib; the compression level can be set by `\textbf{\code{core.compression}}' or `\textbf{\code{pack.compression}}'.
When objects in the pack are stored using delta compression, they are first internally sorted by type, size and optionally name, and compared against other objects to see whether delta compression saves space.
`\textbf{\code{pack.depth}}' limits the maximum delta depth; making it too deep affects performance on the unpacker side, because the delta data needs to be applied that many times to get to the necessary object.
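How much a given repository actually benefits from delta compression can be inspected with \code{git verify-pack -v}, whose verbose listing reports, for every object in a pack, its type, sizes and offset and, for deltified objects, the delta depth and base object. The following sketch (Python, illustrative only; the function name and the column-count heuristic are ours and assume the usual \code{verify-pack} output layout) counts base versus deltified objects in each pack:
\begin{lstlisting}[basicstyle=\ttfamily]
import subprocess
from pathlib import Path

def pack_statistics(repo="."):
    """Count base vs. deltified objects per pack (illustrative sketch)."""
    for idx in Path(repo, ".git/objects/pack").glob("*.idx"):
        out = subprocess.run(["git", "verify-pack", "-v", str(idx)],
                             capture_output=True, text=True, check=True).stdout
        base = delta = 0
        for line in out.splitlines():
            fields = line.split()
            # Data rows start with "<sha1> <type> ..."; deltified rows carry
            # two extra columns (delta depth and base object name).
            if len(fields) >= 5 and fields[1] in ("commit", "tree", "blob", "tag"):
                if len(fields) >= 7:
                    delta += 1
                else:
                    base += 1
        print(idx.name, "base:", base, "deltified:", delta)

pack_statistics()
\end{lstlisting}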
\\
To quickly find objects in the pack file, an associated index file is created with this \emph{\href{https://github.com/git/git/blob/master/Documentation/technical/pack-format.txt\#L161}{format}}, and `\emph{\code{git index-pack}}' can be used to regenerate the *.idx file from the *.pack file.
\\
\textbullet\textbf{\large{ when pack files are created}}\\
As noted above, packs are used to reduce the load on mirror systems, backup engines, disk storage, etc. Pack files are therefore created in several scenarios:
\begin{description}
\item[git gc] `\emph{\code{git gc}}' runs a number of housekeeping tasks within the repository, such as compressing file revisions (to reduce disk space and increase performance) and removing unreachable objects which may have been created by prior Git commands. We can run it manually, and it may create a new pack file.
\item[git repack] `\emph{\code{git repack}}' is used to combine all objects that do not currently reside in a pack file into a pack. It can also be used to re-organize existing packs into a single, more efficient pack.
\item[git push] `\emph{\code{git push}}' runs \code{send-pack}, which connects to the remote side and gets one file descriptor, either a socket (over the network) or a pipe (local). What is written to this file descriptor goes to `\emph{\code{git-receive-pack}}' to be unpacked. We can see the following pipeline flow in\\
\emph{\href{https://github.com/git/git/blob/master/Documentation/technical/send-pack-pipeline.txt}{Documentation/technical/send-pack-pipeline.txt}}:
\begin{lstlisting}[basicstyle=\ttfamily]
send-pack
   |
   pack_objects() --> fd --> receive-pack
          | ^ (pipe)
          v |
        (child)
\end{lstlisting}
So we can see that `\emph{\code{git push}}' creates a pack file and sends it to the remote side.
\item[git-receive-pack] As mentioned above, `\emph{\code{git-receive-pack}}' receives a pack file from `\emph{\code{git push}}'. `\emph{\code{git receive-pack}}' also forks a child `\emph{\code{git gc --auto --quiet}}' to check whether there are too many loose objects to pack. Usually the threshold is 6700 loose objects; it can be set by `\textbf{\code{gc.auto}}'.
\item[git-upload-pack] When a client runs `\emph{\code{git clone}}' or `\emph{\code{git fetch}}', it connects to the remote side and invokes `\emph{\code{git-upload-pack}}'. `\emph{\code{git-upload-pack}}' communicates with the client to compute which objects the client needs, then forks and execs a child `\emph{\code{git-pack-objects}}' to create a pack file with all the needed objects and pipes that pack file to the client.
\begin{lstlisting}[basicstyle=\ttfamily]
upload-pack
   |
   pack_objects() --> fd --> fetch-pack
          | ^ (pipe)
          v |
        (child)
\end{lstlisting}
We can also use `\textbf{\code{repack.writeBitmaps}}' to have Git write a bitmap index when packing all objects to disk (e.g., when \code{git repack -a} is run). This index can speed up the ``counting objects'' phase of subsequent packs created for clones and fetches, at the cost of some disk space and extra time spent on the initial repack.
\end{description}
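The auto-packing heuristic mentioned above can also be observed from the outside with \code{git count-objects -v}, which reports, among other things, the number of loose objects and the number of packs. The following sketch (Python, illustrative only; the function name and threshold handling are ours) models the decision conceptually rather than reproducing the actual implementation of \code{git gc --auto}:
\begin{lstlisting}[basicstyle=\ttfamily]
import subprocess

def should_auto_pack(threshold=6700):
    """Rough model of the gc.auto check (illustrative sketch)."""
    # `git count-objects -v` prints "key: value" lines, e.g. "count: 123"
    # (loose objects) and "packs: 4".
    out = subprocess.run(["git", "count-objects", "-v"],
                         capture_output=True, text=True, check=True).stdout
    stats = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
    loose = int(stats.get("count", 0))
    print("loose objects:", loose, "packs:", stats.get("packs", "0"))
    return loose > threshold

if should_auto_pack():
    print("too many loose objects; `git repack -d` or `git gc` would pack them")
\end{lstlisting}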
{ "alphanum_fraction": 0.6655580003, "avg_line_length": 47.5498783455, "ext": "tex", "hexsha": "aef67d5feed736722d048d0410bfc5bc14344ef2", "lang": "TeX", "max_forks_count": 9, "max_forks_repo_forks_event_max_datetime": "2021-10-10T15:21:01.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-14T19:48:19.000Z", "max_forks_repo_head_hexsha": "219c9cb7419e1999d10bc8c0345231e5b6ae3356", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "askerka/degitx", "max_forks_repo_path": "white-paper/appendix-a.tex", "max_issues_count": 170, "max_issues_repo_head_hexsha": "219c9cb7419e1999d10bc8c0345231e5b6ae3356", "max_issues_repo_issues_event_max_datetime": "2022-03-15T15:11:30.000Z", "max_issues_repo_issues_event_min_datetime": "2020-09-07T07:23:29.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "askerka/degitx", "max_issues_repo_path": "white-paper/appendix-a.tex", "max_line_length": 157, "max_stars_count": 26, "max_stars_repo_head_hexsha": "219c9cb7419e1999d10bc8c0345231e5b6ae3356", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "askerka/degitx", "max_stars_repo_path": "white-paper/appendix-a.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-11T02:49:45.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-07T07:34:07.000Z", "num_tokens": 6027, "size": 19543 }
\documentclass[11pt, twoside, a4paper]{article}
\usepackage[top=15mm, bottom=15mm, left=25mm, right=25mm]{geometry}
\usepackage{tikz}
\usepackage{amssymb,amsmath, amsthm}
\usepackage[labelsep=period]{caption}

\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[theorem]
\newtheorem{lemma}[theorem]{Lemma}
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]

\newcommand{\Fig}[1]{Figure~\ref{fig:#1}}
\newcommand{\Eq}[1]{Eq.~(\ref{eq:#1})}
\newcommand{\Eqs}[2]{Eqs.~(\ref{eq:#1})--(\ref{eq:#2})}
\newcommand{\set}[1]{\mathbb{#1}}
\newcommand{\Th}[1]{Theorem~\ref{th:#1}}
\newcommand{\Le}[1]{Lemma~\ref{le:#1}}
\newcommand{\xpm}{\ensuremath{x_\pm}}
\newcommand{\xmp}{\ensuremath{x_\mp}}
\newcommand{\ypm}{\ensuremath{y_\pm}}
\newcommand{\ymp}{\ensuremath{y_\mp}}
\newcommand{\inv}[1]{#1^{-1}}

\begin{document}

\title{Euler 94: Almost equilateral triangles}
\date{}
\author{Didier Pieroux}
\maketitle

%===============================================================================
\begin{abstract}
It is easily proved that no equilateral triangle exists with integral length sides and integral area. However, the almost equilateral triangle 5-5-6 has an area of 12 square units.

We shall define an almost equilateral triangle to be a triangle for which two sides are equal and the third differs by no more than one unit.

Find the sum of the perimeters of all almost equilateral triangles with integral side lengths and area and whose perimeters do not exceed one billion (1,000,000,000).
\end{abstract}

%===============================================================================
\section{Formulation}

We have to find all isosceles triangles whose base $b\in\set N$ and equal sides $c \in\set N$ differ by one. Let us call $h$ the height of such a triangle (\Fig{triangle1}).
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.5]
\draw (0, 0) -- (6, 0) node[midway, below] {b} -- (3, 4) -- (0, 0) node[midway, above left] {c} ;
\draw[dotted] (3, 0) -- (3, 4) node[pos=0.33, right] {h};
\end{tikzpicture}
\caption{Illustration of the labels used.}
\label{fig:triangle1}
\end{center}
\end{figure}

The problem is then described by the following equations, with $P$ the perimeter and $S$ the area:
\begin{align}
c^2 & = (b/2)^2 + h^2 \label{eq:pytha1} \\
c & = b \pm 1 \label{eq:ab} \\
P & = 2c+b \\
S & = h b/2 \text{ with } S \in\set N \label{eq:S1}
\end{align}

\begin{lemma}$b$ is even.\end{lemma}
\begin{proof}
Suppose $b$ odd. By \Eq{pytha1}, $(2h)^2 = 4c^2 - b^2$ is an integer, and it is odd because $b$ is odd. By \Eq{S1}, $h = 2S/b$ is rational; a rational number whose square is an integer is itself an integer, so $2h$ is an odd integer. But then $S = hb/2 = (2h)\,b/4$ is an odd integer divided by four, hence not an integer, contradicting $S\in\set N$.
\end{proof}

For convenience, let us write $b=2a$ with $a\in\set N$ (\Fig{triangle2}).
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.5]
\draw (0, 0) -- (3, 0) node[midway, below] {a};
\draw [dashed] (3, 0) -- (6, 0) -- (3, 4);
\draw (3, 4) -- (0, 0) node[midway, above left] {c} ;
\draw [dotted] (3, 0) -- (3, 4) node[pos=0.33, right] {h};
\end{tikzpicture}
\caption{Illustration of the labels used (reduced problem).}
\label{fig:triangle2}
\end{center}
\end{figure}

\Eqs{pytha1}{S1} now read:
\begin{eqnarray}
c^2 & = & a^2 + h^2 \label{eq:pytha2} \\
c & = & 2a \pm 1 \label{eq:ac} \\
P & = & 2(a+c) \\
S & = & ah \label{eq:S2}
\end{eqnarray}
The triple $(a, h, c)$ is thus Pythagorean\footnote{https://en.wikipedia.org/wiki/Pythagorean\_triple}.

\begin{lemma} $a$, $h$ and $c$ are pairwise coprime.
\end{lemma}
\begin{proof}
Suppose that $a$ and $c$ have a common factor $\lambda\in\set N$, $\lambda>1$: $a=\lambda a'$ and $c=\lambda c'$. From \Eq{ac}, it follows that $c' = 2 a' \pm 1/\lambda$, which would imply that $c'$ is not integral.

Suppose now that $c$ and $h$ have a common factor. \Eq{pytha2} implies that it must be a factor of $a$; therefore $c$ and $a$ would not be coprime. Similarly, if $a$ and $h$ have a common factor, \Eq{pytha2} implies that it must be a factor of $c$; therefore $c$ and $a$ would not be coprime.
\end{proof}

As a consequence, valid triples $(a, h, c)$ are primitive Pythagorean triples (PPTs).

\section{Tree of primitive Pythagorean triples}

Consider the infinite ternary tree whose root is the Pythagorean triple $p=(3, 4, 5)$ and whose other nodes are obtained by left-multiplying their parent node, interpreted as a column vector, by one of the following matrices:
\[ A = \left(\begin{matrix} 1 & -2 & 2 \\ 2 & -1 & 2 \\ 2 & -2 & 3 \end{matrix} \right)\!,\ B = \left(\begin{matrix} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 3 \end{matrix} \right)\!\text{ and } C = \left(\begin{matrix} -1 & 2 & 2 \\ -2 & 1 & 2 \\ -2 & 2 & 3 \end{matrix} \right)\!. \]
In other words, the nodes of the tree are, in breadth-first order:
\[p, Ap, Bp, Cp, A\!Ap, B\!Ap, C\!Ap, A\!Bp, B\!Bp, C\!Bp, \ldots\]

\begin{theorem}\label{th:tree}
It can be shown that all the PPTs, and only them, are generated without duplication by this tree\footnote{For a proof see https://en.wikipedia.org/wiki/Tree\_of\_primitive\_Pythagorean\_triples}.
\end{theorem}

Because of \Eq{ac}, we are only interested in PPTs of the form $\xpm\!=\!(x, y, 2x\!\pm\!1)$ and $\ypm\!=\!(x, y, 2y\!\pm\!1)$.

\begin{lemma}
The product of $A$ with a PPT \ypm\ produces a PPT \xmp. Similarly, the product of $C$ with a PPT \xpm\ produces a PPT \ymp.
\end{lemma}
\begin{proof}
By \Th{tree}, the product of a PPT by $A$ or $C$ is also a PPT. Calculating $A \ypm$ explicitly shows that the result has the form \xmp; the same computation for $C \xpm$ shows that it produces a \ymp.
\end{proof}

\begin{lemma}\label{le:inverse}
Given a PPT $\pi\!=\!(x, y, z)$,
\begin{itemize}
\item $\inv A\pi$ is a PPT iff $4x<3y$;
\item $\inv B\pi$ is a PPT iff $3x<4y \text{ and } 3y<4x$;
\item $\inv C\pi$ is a PPT iff $4y<3x$.
\end{itemize}
\end{lemma}
\begin{proof}
By carrying out the calculation, it is easily shown that
\begin{itemize}
\item the Pythagorean relation $x^2+y^2=z^2$ is preserved by multiplication by the inverse matrices,
\item the components of the result are positive iff the inequalities of the lemma are fulfilled.
\end{itemize}
We still have to show that the result is primitive. If it were not, the components of $\inv A\pi$ (and similarly of $\inv B\pi$ or $\inv C\pi$) would have a common factor. But then the components of $A\inv A\pi=\pi$ would also have that common factor, which is impossible as $\pi$ is primitive.
\end{proof}

\begin{lemma}
Assuming the inequalities of \Le{inverse}, the product of $\inv A$ with a PPT of the form \xpm\ gives a PPT of the form \ymp. Similarly, the product of $\inv C$ with a PPT of the form \ypm\ gives a PPT of the form \xmp.
\end{lemma}
\begin{proof}
From \Le{inverse}, the result is a PPT. The relations between the components are verified through explicit calculation.
\end{proof}

\begin{lemma}\label{le:predecessor1}
Except for $(3, 4, 5)$, which has no predecessor, all PPTs of the form \xpm\ have a predecessor of the form \ymp.
\end{lemma}
\begin{proof}
Since \xpm\ is a PPT different from $(3, 4, 5)$, \Th{tree} implies that it has a parent node.
If that parent were not of the form \ymp, then \xpm\ would be generated from it not by the matrix $A$ but by $B$ or $C$. \Le{inverse} then implies that $3y<4x$.

Since \xpm\ satisfies the Pythagorean equality, we have $x^2+y^2=z^2=4x^2\pm4x+1$, i.e.\ $y^2=3x^2\pm4x+1$. The condition $3y<4x$ then implies that $11x^2\pm36x+9<0$. This inequality can only hold for the $-$ sign and for $3/11<x<3$, which requires $x=1$ or $x=2$; there is no PPT with such $x$. This contradiction implies that the parent node must be of the form \ymp.
\end{proof}

\begin{lemma}\label{le:predecessor2}
All PPTs of the form \ypm\ have a predecessor of the form \xmp.
\end{lemma}
\begin{proof}The proof is similar to the previous one.\end{proof}

\begin{theorem}\label{th:PPT_branch}
Starting from $(3,4,5)$ and multiplying it by $C$, then the result by $A$, then that result again by $C$, and so on, generates all the PPTs satisfying \Eq{ac}.
\end{theorem}
\begin{proof}
Except for $(3, 4, 5)$, which has no predecessor, Lemmas~\ref{le:predecessor1} and \ref{le:predecessor2} show that any other PPT satisfying \Eq{ac} must have a predecessor also satisfying \Eq{ac}. Therefore, starting from any such PPT and going from predecessor to predecessor, we move towards the root of the PPT tree until $(3, 4, 5)$ is reached.
\end{proof}

\section{Iterative process}

\subsection{Reduction to a unique transformation}

As seen above, multiplying $C$ with a PPT of the form $(a, h, 2a\pm1)$ gives a solution of the form $(h', a', 2a'\mp1)$. Multiplying that result with $A$ produces a result of the form $(a'', h'', 2a''\pm1)$. The $a$ and $h$ components are thus swapped at each iteration. Swapping the first and second rows of $C$ (so that its output is again given in $(a, h)$ order) and, correspondingly, the first and second columns of $A$ (so that it accepts its input in $(a, h)$ order) avoids this. Interestingly, both modifications lead to the same single matrix $D$:
\[D = \left(\begin{matrix} -2 & 1 & 2 \\ -1 & 2 & 2 \\ -2 & 2 & 3 \end{matrix} \right)\]
\Th{PPT_branch} implies that iteratively multiplying $(3, 4, 5)$ by $D$ produces the sequence of all the PPTs satisfying \Eq{ac}. For instance, $D\,(3,4,5)=(8,15,17)$ and $D\,(8,15,17)=(33,56,65)$, corresponding to the almost equilateral triangles $17$-$17$-$16$ (perimeter 50, area $8\times15=120$) and $65$-$65$-$66$ (perimeter 196, area $33\times56=1848$).

\subsection{Triangles of increasing size}

Given an element $(a, h, c)$ of the sequence defined above and its successor $(a', h', c')$, it is easy to show that $a'+c'>2(a+c)$. In other words, the perimeter of the triangles more than doubles with each multiplication by $D$, so fewer than thirty iterations suffice to exceed one billion.

\section{Solution}

The solution to the original problem is now straightforward to state. Starting from $p=(3, 4, 5)$, we generate the sequence $(D^np)_{n\in\set N}$. We compute the corresponding perimeters, i.e.\ twice the sum of the first and third components, and we sum those perimeters which do not exceed one billion. Expressed in Haskell, this leads to the following code:

\begin{verbatim}
nextTriangle :: (Int, Int, Int) -> (Int, Int, Int)
nextTriangle (x, y, z) = (-2*x+y+2*z, -x+2*y+2*z, -2*x+2*y+3*z)

main = do
    putStrLn $ show $ sum
             $ takeWhile (<=10^9)
             $ map (\(a, h, c) -> 2*(a+c))
             $ iterate nextTriangle (3, 4, 5)
\end{verbatim}

\end{document}
{ "alphanum_fraction": 0.6612824754, "avg_line_length": 51.904040404, "ext": "tex", "hexsha": "5cfd337ced03b8eff5bae7df962b1d7a0112a28a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9e7d39d93e588402cc13efcf2eddb0371540160", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dpieroux/euler", "max_forks_repo_path": "doc/euler_0094.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d9e7d39d93e588402cc13efcf2eddb0371540160", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dpieroux/euler", "max_issues_repo_path": "doc/euler_0094.tex", "max_line_length": 612, "max_stars_count": null, "max_stars_repo_head_hexsha": "d9e7d39d93e588402cc13efcf2eddb0371540160", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dpieroux/euler", "max_stars_repo_path": "doc/euler_0094.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3450, "size": 10277 }
\documentclass{article} \PassOptionsToPackage{no-math}{fontspec} \usepackage{l3draw,array} \usepackage{fontspec,geometry,multicol,xcolor,hyperref} % \usepackage{l3benchmark} \setmainfont{FreeSans}[ Extension = .otf, UprightFont = *, BoldFont = *Bold, ItalicFont = *Oblique, BoldItalicFont = *BoldOblique, Ligatures = CommonOff, ] \setmonofont{FreeMono}[ Extension = .otf, UprightFont = *, BoldFont = *Bold, ItalicFont = *Oblique, BoldItalicFont = *BoldOblique, ] \geometry{paperwidth=24cm, hmargin=1cm, vmargin=1.5cm} \hypersetup{colorlinks} \makeatletter \ExplSyntaxOn % Internal functions \cs_new:Npn \@@_char:n #1 { \tex_Uchar:D #1 ~ } \prg_new_conditional:Npnn \@@_if_char_exist:n #1 { T, F, TF } { \tex_iffontchar:D \tex_font:D #1 ~ \prg_return_true: \else: \prg_return_false: \fi: } \cs_new_protected:Npn \@@_font:n #1 { \use:c { font @ #1 } } % Internal variables \seq_new:N \l_@@_font_seq \seq_new:N \l_@@_control_char_seq \seq_new:N \l_@@_reserved_char_seq % Constants \dim_const:Nn \c_@@_cell_width_dim { 36pt } \dim_const:Nn \c_@@_cell_height_dim { 34pt } \dim_const:Nn \c_@@_cell_depth_dim { 2pt } \dim_const:Nn \c_@@_total_height_dim { \c_@@_cell_height_dim + \c_@@_cell_depth_dim } \dim_const:Nn \c_@@_table_frame_dim { 2pt } \dim_const:Nn \c_@@_table_rule_dim { .4pt } \int_const:Nn \c_@@_column_int { 16 } \int_const:Nn \c_@@_table_int { 256 } \cs_new:Npn \@@_symbol_style: { \LARGE } \cs_new:Npn \@@_encoding_style: { \tiny \ttfamily } %%\file_input:n { specials.tex } \file_input:n { fonts.tex } % #1 = name, #2 = begin, #3 = end, #4 = font list \NewDocumentCommand \BLOCK { m m m m } { \@@_block:nnnn {#1} {#2} {#3} {#4} } \cs_new_protected:Npn \@@_block:nnnn #1#2#3#4 { \iow_log:n {Begin ~ block ~ "#1"} \exp_args:Ne \subsection { #1 ~ ( U + \@@_encoding:n {#2} -- U + \@@_encoding:n {#3} ) } \@@_set_font:n {#4} \@@_multiple_table:nn {#2} {#3} } \cs_new_protected:Npn \@@_set_font:n #1 { \seq_set_from_clist:Nn \l_@@_font_seq {#1} } \cs_new_protected:Npn \@@_multiple_table:nn #1#2 { \@@_prepare_data:nn {#1} {#2} \int_set:Nn \l_tmpa_int { #2 - #1 } \int_set:Nn \l_tmpb_int { \int_div_truncate:nn { \l_tmpa_int + 1 } { \c_@@_table_int } } \int_set:Nn \l_@@_column_int { \int_compare:nNnTF \l_tmpb_int > \c_zero_int { \c_@@_column_int - 1 } { \int_min:nn { \c_@@_column_int - 1 } { \l_tmpa_int } } } \int_step_inline:nnn { 0 } { \l_tmpb_int - 1 } { \@@_table:n { #1 + ##1 * \c_@@_table_int } } \int_set:Nn \l_tmpa_int { #1 + \l_tmpb_int * \c_@@_table_int } \int_compare:nNnF \l_tmpa_int > {#2} { \@@_table:nn { \l_tmpa_int } {#2} } } \int_new:N \l_@@_column_int \cs_new_protected:Npn \@@_table:n #1 { \exp_args:Ne \@@_table_aux:n { \int_eval:n {#1} } } \cs_new_protected:Npn \@@_table_aux:n #1 { \@@_table:nn {#1} { #1 + \c_@@_table_int - 1 } } \cs_new_protected:Npn \@@_table:nn #1#2 { \use:e { \@@_table_aux:nn { \int_eval:n {#1} } { \int_eval:n {#2} } } } \cs_new_protected:Npn \@@_table_aux:nn #1#2 { \begin{center} % See https://tex.stackexchange.com/a/198934 \dim_zero:N \tabcolsep \tl_clear:N \@arstrut \setlength \arrayrulewidth { \c_@@_table_rule_dim } \int_set:Nn \l_@@_row_int { \int_div_truncate:nn { #2 - #1 + 1 } { \c_@@_column_int } } \begin{tabular} { V * { \l_@@_column_int } {C|} C V } \@@_frame_hline: \@@_table_body:nn {#1} {#2} \\ \@@_frame_hline: \end{tabular} \end{center} } \int_new:N \l_@@_row_int \cs_new:Npn \@@_frame_hline: { \tex_noalign:D { \tex_hrule:D height ~ \c_@@_table_frame_dim \scan_stop: } } \newcolumntype { C } { @{} c @{} } \newcolumntype { V } { @{ \tex_vrule:D width \c_@@_table_frame_dim 
\scan_stop: } } \cs_new_protected:Npn \@@_table_body:nn { \int_compare:nNnTF { \l_@@_column_int + 1 } = \c_@@_column_int { \int_compare:nNnTF { \l_@@_row_int } > \c_zero_int { \@@_multiple_row:nn } { \@@_single_row:nn } } { \@@_row:nn } } \cs_new_protected:Npn \@@_multiple_row:nn #1 { \@@_row:n {#1} \int_step_inline:nnn { 1 } { \l_@@_row_int - 1 } { \\ \hline \@@_row:n { #1 + ##1 * \c_@@_column_int } } \exp_args:Ne \@@_remaining_row:nn { \int_eval:n { #1 + \l_@@_row_int * \c_@@_column_int } } } \cs_new_protected:Npn \@@_row:n #1 { \exp_args:Ne \@@_row_aux:n { \int_eval:n {#1} } } \cs_new_protected:Npn \@@_row_aux:n #1 { \@@_row:nn {#1} { #1 + \c_@@_column_int - 1 } } \cs_new_protected:Npn \@@_single_row:nn #1#2 { \@@_row:nn {#1} {#2} & \exp_args:Ne \@@_single_row_aux:n { \int_eval:n { \c_@@_column_int - 1 - #2 + #1 } } } \cs_new:Npn \@@_single_row_aux:n #1 { \multicolumn {#1} { CV } { \dim_set:Nn \l_tmpa_dim { #1 \c_@@_cell_width_dim + #1 \arrayrulewidth - 2 \arrayrulewidth } \tex_kern:D \l_tmpa_dim } } \cs_new_protected:Npn \@@_remaining_row:nn #1#2 { \int_compare:nNnF {#1} > {#2} { \\ \hline \@@_row:nn {#1} {#2} & \multicolumn { \int_eval:n { \l_@@_column_int - #2 + #1 } } { CV } { } } } \cs_new_protected:Npn \@@_row:nn #1#2 { \group_align_safe_begin: \exp_after:wN \use_i:nn \exp_after:wN \group_align_safe_end: \tex_expanded:D { \int_step_function:nnN {#1} {#2} \@@_cell_ampersand:n } \prg_do_nothing: } \cs_new:Npn \@@_cell_ampersand:n #1 { & \@@_cell:n {#1} } \cs_new_protected:Npn \@@_cell:n #1 { \@@_unicode_data:n {#1} \@@_cell_box:nnn {#1} { \@@_symbol_style: \@@_symbol:n {#1} } { \@@_encoding_style: \l_@@_code_point_str } } \str_new:N \l_@@_code_point_str \cs_new_protected:Npn \@@_cell_box_symbol:nnn #1#2#3 { \vbox_set_to_ht:Nnn \l_tmpa_box { \c_@@_cell_height_dim } { \tex_vss:D \hbox_to_wd:nn { \c_@@_cell_width_dim } { \tex_hss:D #2 \tex_hss:D } \tex_vss:D \nointerlineskip \hbox_to_wd:nn { \c_@@_cell_width_dim } { \tex_hss:D #3 \tex_hss:D } } \box_set_dp:Nn \l_tmpa_box { \c_@@_cell_depth_dim } \box_use_drop:N \l_tmpa_box } \cs_new_eq:NN \@@_cell_box:nnn \@@_cell_box_symbol:nnn \cs_new_protected:Npn \@@_set_cell_box:n #1 { \cs_set_eq:Nc \@@_cell_box:nnn { @@_cell_box_#1:nnn } } \cs_new_protected:Npn \@@_symbol:n #1 { \seq_map_inline:Nn \l_@@_font_seq { \@@_font:n { ##1 } \@@_if_char_exist:nT {#1} { \@@_char:n {#1} \seq_map_break:n { \use_none:nn } } } \iow_log:x { U+ \l_@@_code_point_str \iow_char:N \ is~missing } } % #1 = specials type, #2 = l3draw \cs_new_protected:Npn \@@_symbol_specials:nn #1 { \exp_args:Ncc \@@_symbol_specials_aux:NNnn { g_@@_#1_cell_box } { @@_pdfsavebox_#1: } {#1} } \cs_new_protected:Npn \@@_symbol_specials_aux:NNnn #1#2#3#4 { \box_gclear_new:N #1 \cs_new_protected:Npn #2 { \@@_pdfsavebox:Nnn #1 {#3} {#4} \cs_gset_eq:NN #2 \prg_do_nothing: } \cs_new_protected:cpn { @@_cell_box_#3:nnn } { #2 \@@_special_cell_box:Nnnn #1 } } \cs_new_protected:Npn \@@_pdfsavebox:Nnn #1#2#3 { \@@_pdfxform:nnnnn {#2} { \c_@@_cell_width_dim } { \c_@@_cell_height_dim } { \c_@@_cell_depth_dim } { \box_move_down:nn { \c_@@_cell_depth_dim } { \hbox_overlap_right:n {#3} } } \hbox_gset:Nn #1 { \@@_pdfrefxform:n {#2} } \box_gset_wd:Nn #1 { \c_zero_dim } \box_gset_ht:Nn #1 { \c_@@_cell_height_dim } \box_gset_dp:Nn #1 { \c_@@_cell_depth_dim } } \cs_new_protected:Npn \@@_pdfxform:nnnnn #1#2#3#4#5 { \tex_special:D { pdf: bxobj ~ @codecharts@obj@ #1 ~ width ~ \dim_eval:n {#2} ~ height ~ \dim_eval:n {#3} ~ depth ~ \dim_eval:n {#4} } #5 \tex_special:D { pdf: exobj } } \cs_new_protected:Npn 
\@@_pdfrefxform:n #1 { \tex_special:D { pdf: uxobj ~ @codecharts@obj@ #1 } } \cs_new_protected:Npn \@@_special_cell_box:Nnnn #1#2#3#4 { \box_use:N #1 \hbox_to_wd:nn { \c_@@_cell_width_dim } { \tex_hss:D #4 \tex_hss:D } } \@@_symbol_specials:nn { control } { \draw_begin: \draw_path_rectangle:nn { 0pt , 0pt } { \c_@@_cell_width_dim , \c_@@_total_height_dim } \color_fill:n { black ! 25 } \draw_path_use_clear:n { fill } \draw_end: } \@@_symbol_specials:nn { reserved } { \draw_begin: \draw_path_rectangle:nn { \c_@@_table_rule_dim / 2 , \c_@@_table_rule_dim / 2 } { \c_@@_cell_width_dim , \c_@@_total_height_dim } \draw_path_use_clear:n { clip } \fp_set:Nn \l_@@_cell_x_fp { \dim_to_fp:n { \c_@@_table_rule_dim + \c_@@_cell_width_dim } } \fp_set:Nn \l_@@_cell_y_fp { \dim_to_fp:n { \c_@@_table_rule_dim + \c_@@_total_height_dim } } \draw_transform_shift_absolute:n { - \l_@@_cell_x_fp / 2 , - \l_@@_cell_y_fp / 2 } \fp_set:Nn \l_tmpa_fp { \l_@@_cell_x_fp / 20 } \fp_set:Nn \l_tmpb_fp { \l_@@_cell_y_fp / 20 } \prg_replicate:nn { 19 } { \draw_transform_shift:n { \l_tmpa_fp , \l_tmpb_fp } \draw_path_moveto:n { 0 , \l_@@_cell_y_fp } \draw_path_lineto:n { \l_@@_cell_x_fp , 0 } } \color_stroke:n { black ! 50 } \draw_path_use_clear:n { stroke } \draw_end: } \fp_new:N \l_@@_cell_x_fp \fp_new:N \l_@@_cell_y_fp \cs_new:Npn \@@_encoding:n #1 { \int_compare:nNnT {#1} < { "1000 } { \int_compare:nNnTF {#1} < { "100 } { \int_compare:nNnTF {#1} < { "10 } { 000 } { 00 } } { 0 } } \int_to_Hex:n {#1} } \cs_new_eq:NN \@@_tableofcontents: \tableofcontents \cs_set:Npn \tableofcontents { \begin{multicols}{2} \small \@@_tableofcontents: \end{multicols} } \cs_set:Npn \l@subsection { \@dottedtocline {2} {3.8em} {3.4em} } \cs_new_protected:Npn \@@_ann_tooltip:nn #1#2 { \tex_special:D { pdf: ann ~ width ~ \dim_use:N \c_@@_cell_width_dim \c_space_tl height ~ \dim_use:N \c_@@_cell_height_dim \c_space_tl depth ~ \dim_use:N \c_@@_cell_depth_dim \c_space_tl << /Type /Annot ~ /Subtype /Widget ~ /FT /Btn ~ /H /N ~ /TU ~ (#1) ~ /T ~ (tooltip ~ #2) ~ /C ~ [ ] ~ /F ~ 768 ~ /Ff ~ 65536 ~ /BS ~ << /W ~ 0 >> >> } } \cs_new_protected:Npn \@@_prepare_data:nn #1 { \int_compare:nNnF {#1} > \g_@@_last_code_point_int { \ior_open:NnF \g_@@_data_ior { UnicodeData.txt } { \UNICODEDATAERROR } } \int_gset:Nn \g_@@_last_code_point_int } \ior_new:N \g_@@_data_ior \int_new:N \g_@@_last_code_point_int \int_gset_eq:NN \g_@@_last_code_point_int \c_max_int \AtEndDocument { \ior_close:N \g_@@_data_ior } \cs_new_protected:Npn \@@_unicode_data:n #1 { \int_compare:nNnTF {#1} > \g_@@_range_int { \@@_read_data:n } { \@@_range:n } {#1} } \cs_new_protected:Npn \@@_read_data:n { \str_if_empty:NTF \g_@@_data_str { \@@_read_data_auxi:n } { \@@_read_data_auxii:n } } \str_new:N \g_@@_data_str \cs_new_protected:Npn \@@_read_data_auxi:n { \ior_str_get:NNF \g_@@_data_ior \l_@@_data_str { \str_clear:N \l_@@_data_str } \str_if_empty:NTF \l_@@_data_str { \@@_set_reserved:n } { \exp_after:wN \@@_read_data:w \l_@@_data_str \q_stop } } \str_new:N \l_@@_data_str \cs_new_protected:Npn \@@_read_data_auxii:n { \str_set_eq:NN \l_@@_data_str \g_@@_data_str \str_gclear:N \g_@@_data_str \exp_after:wN \@@_read_data:w \l_@@_data_str \q_stop } \cs_new_protected:Npn \@@_range:n #1 { \@@_range_char:n {#1} \int_compare:nNnF {#1} < \g_@@_range_int { \@@_gset_range:nn { } { -1 } } } \cs_new_protected:Npn \@@_range_char:n #1 { \str_set:Nx \l_@@_code_point_str { \@@_encoding:n {#1} } \@@_ann_tooltip:nn { \g_@@_range_str } {#1} } \cs_new_protected:Npn \@@_read_data:w #1 ; #2#3 ; #4 \q_stop #5 { 
\str_set:Nn \l_@@_code_point_str {#1} \int_set:Nn \l_@@_code_point_int { "#1 } \int_compare:nNnTF \l_@@_code_point_int = {#5} { \@@_extract_data:nnnn } { \@@_seek_data:nnnn } {#2} {#3} {#4} {#5} } \int_new:N \l_@@_code_point_int \cs_new_protected:Npn \@@_extract_data:nnnn #1#2#3 { \token_if_eq_charcode:NNTF #1 < { \@@_special_char:w #2 ; #3 , ~ > ; \q_stop } { \@@_ann_tooltip:nn { #1 #2 } } } \cs_new_protected:Npn \@@_special_char:w #1 , ~ #2 > ; #3 \q_stop { \tl_if_empty:nTF {#2} { \@@_control_char:w #1 \q_stop } { \use:c { @@_range_#2:nn } {#1} } } \exp_last_unbraced:Nno \use:n { \cs_new_protected:Npn \@@_control_char:w #1 ; } { \token_to_str:N N } ; #2 ; #3 \q_stop { \@@_set_cell_box:n { control } \@@_ann_tooltip:nn {#2} } \cs_new_protected:Npn \@@_range_First:nn #1 { \ior_str_get:NN \g_@@_data_ior \l_@@_data_str \@@_gset_range:nn {#1} { \exp_after:wN \@@_code_point:w \l_@@_data_str \q_stop } \@@_ann_tooltip:nn {#1} } \cs_new_eq:NN \@@_range_Last:nn \@@_ann_tooltip:nn \cs_new_protected:Npn \@@_gset_range:nn #1#2 { \str_gset:Nn \g_@@_range_str {#1} \int_gset:Nn \g_@@_range_int {#2} } \cs_new:Npn \@@_code_point:w #1 ; #2 \q_stop { "#1 } \str_new:N \g_@@_range_str \int_new:N \g_@@_range_int \int_gdecr:N \g_@@_range_int \cs_new_protected:Npn \@@_seek_data:nnnn #1#2#3#4 { \int_compare:nNnTF \l_@@_code_point_int > {#4} { \@@_set_reserved:n } { \@@_seek_data_forward:n } {#4} } \cs_new_protected:Npn \@@_set_reserved:n #1 { \str_set:Nx \l_@@_code_point_str { \@@_encoding:n {#1} } \@@_set_cell_box:n { reserved } \str_gset_eq:NN \g_@@_data_str \l_@@_data_str } \cs_new_protected:Npn \@@_seek_data_forward:n #1 { \ior_str_map_variable:NNn \g_@@_data_ior \l_@@_data_str { \exp_args:No \@@_seek_data_forward:nn { \l_@@_data_str } {#1} } \str_if_empty:NT \l_@@_data_str { \@@_set_reserved:n {#1} } } \cs_new_protected:Npn \@@_seek_data_forward:nn #1#2 { \int_compare:nNnF { \@@_code_point:w #1 \q_stop } < {#2} { \ior_map_break:n { \@@_seek_data:nn {#1} {#2} } } } \cs_new_protected:Npn \@@_seek_data:nn #1 { \cs_set_eq:NN \@@_seek_data:nnnn \@@_seek_range:nnnn \@@_read_data:w #1 \q_stop } \cs_new_protected:Npn \@@_seek_range:nnnn #1#2#3 { \token_if_eq_charcode:NNTF #1 < { \@@_seek_range:w #2 ; #3 , ~ > ; \q_stop } { \@@_set_reserved:n } } \cs_new_protected:Npn \@@_seek_range:w #1 , ~ #2 > ; #3 \q_stop { \tl_if_empty:nTF {#2} { \@@_set_reserved:n } { \@@_seek_range:nnn {#1} {#2} } } \cs_new_protected:Npn \@@_seek_range:nnn #1#2 { \str_if_eq:nnTF {#2} { Last } { \@@_gset_range:nn {#1} { \l_@@_code_point_int } \@@_range_char:n } { \@@_set_reserved:n } } \ExplSyntaxOff \makeatother \title{The Unicode Standard, Version 14.0\\ Archived Code Charts\\ (Unofficial Version)} \author{The Unicode Consortium} \hypersetup{% pdftitle = {The Unicode Standard, Version 14.0}, pdfauthor = {The Unicode Consortium}, pdfsubject = {Character Code Charts (Unofficial Version)}, } \begin{document} \maketitle \tableofcontents \section{Plane 0 --- Basic Multilingual Plane} \BLOCK {C0 Controls and Basic Latin} { "0000} { "007F} {noto} \BLOCK {C1 Controls and Latin-1 Supplement} { "0080} { "00FF} {noto} \BLOCK {Latin Extended-A} { "0100} { "017F} {noto} \BLOCK {Latin Extended-B} { "0180} { "024F} {noto} \BLOCK {IPA Extensions} { "0250} { "02AF} {noto} \BLOCK {Spacing Modifier Letters} { "02B0} { "02FF} {noto} \BLOCK {Combining Diacritical Marks} { "0300} { "036F} {noto} \BLOCK {Greek and Coptic} { "0370} { "03FF} {noto,noto-coptic} \BLOCK {Cyrillic} { "0400} { "04FF} {noto} \BLOCK {Cyrillic Supplement} { "0500} { "052F} {noto} \BLOCK 
{Armenian} { "0530} { "058F} {noto-armenian} \BLOCK {Hebrew} { "0590} { "05FF} {noto-hebrew,unifont} \BLOCK {Arabic} { "0600} { "06FF} {scheherazade-new} \BLOCK {Syriac} { "0700} { "074F} {noto-syriac} \BLOCK {Arabic Supplement} { "0750} { "077F} {scheherazade-new} \BLOCK {Thaana} { "0780} { "07BF} {noto-thaana} \BLOCK {NKo} { "07C0} { "07FF} {noto-nko} \BLOCK {Samaritan} { "0800} { "083F} {noto-samaritan} \BLOCK {Mandaic} { "0840} { "085F} {noto-mandaic} \BLOCK {Syriac Supplement} { "0860} { "086F} {unifont} \BLOCK {Arabic Extended-B} { "0870} { "089F} {scheherazade-new} \BLOCK {Arabic Extended-A} { "08A0} { "08FF} {scheherazade-new} \BLOCK {Devanagari} { "0900} { "097F} {noto-devanagari} \BLOCK {Bengali} { "0980} { "09FF} {noto-bengali} \BLOCK {Gurmukhi} { "0A00} { "0A7F} {noto-gurmukhi} \BLOCK {Gujarati} { "0A80} { "0AFF} {noto-gujarati} \BLOCK {Oriya} { "0B00} { "0B7F} {noto-oriya,unifont} \BLOCK {Tamil} { "0B80} { "0BFF} {noto-tamil} \BLOCK {Telugu} { "0C00} { "0C7F} {noto-telugu,unifont} \BLOCK {Kannada} { "0C80} { "0CFF} {noto-kannada,unifont} \BLOCK {Malayalam} { "0D00} { "0D7F} {noto-malayalam} \BLOCK {Sinhala} { "0D80} { "0DFF} {noto-sinhala} \BLOCK {Thai} { "0E00} { "0E7F} {noto-thai} \BLOCK {Lao} { "0E80} { "0EFF} {noto-lao,unifont} \BLOCK {Tibetan} { "0F00} { "0FFF} {noto-tibetan} \BLOCK {Myanmar} { "1000} { "109F} {noto-myanmar} \BLOCK {Georgian} { "10A0} { "10FF} {noto-georgian} \BLOCK {Hangul Jamo} { "1100} { "11FF} {shserif} \BLOCK {Ethiopic} { "1200} { "137F} {noto-ethiopic} \BLOCK {Ethiopic Supplement} { "1380} { "139F} {noto-ethiopic} \BLOCK {Cherokee} { "13A0} { "13FF} {noto-cherokee} \BLOCK {Unified Canadian Aboriginal Syllabics} { "1400} { "167F} {noto-canadianaboriginal} \BLOCK {Ogham} { "1680} { "169F} {noto-ogham} \BLOCK {Runic} { "16A0} { "16FF} {noto-runic} \BLOCK {Tagalog} { "1700} { "171F} {noto-tagalog,unifont} \BLOCK {Hanunoo} { "1720} { "173F} {noto-hanunoo} \BLOCK {Buhid} { "1740} { "175F} {noto-buhid} \BLOCK {Tagbanwa} { "1760} { "177F} {noto-tagbanwa} \BLOCK {Khmer} { "1780} { "17FF} {noto-khmer} \BLOCK {Mongolian} { "1800} { "18AF} {noto-mongolian,unifont} \BLOCK {Unified Canadian Aboriginal Syllabics Extended} { "18B0} { "18FF} {noto-canadianaboriginal} \BLOCK {Limbu} { "1900} { "194F} {noto-limbu} \BLOCK {Tai Le} { "1950} { "197F} {noto-taile} \BLOCK {New Tai Lue} { "1980} { "19DF} {noto-newtailue} \BLOCK {Khmer Symbols} { "19E0} { "19FF} {noto-khmer} \BLOCK {Buginese} { "1A00} { "1A1F} {noto-buginese} \BLOCK {Tai Tham} { "1A20} { "1AAF} {noto-taitham} \BLOCK {Combining Diacritical Marks Extended} { "1AB0} { "1AFF} {noto,unifont} \BLOCK {Balinese} { "1B00} { "1B7F} {noto-balinese,unifont} \BLOCK {Sundanese} { "1B80} { "1BBF} {noto-sundanese} \BLOCK {Batak} { "1BC0} { "1BFF} {noto-batak} \BLOCK {Lepcha} { "1C00} { "1C4F} {noto-lepcha} \BLOCK {Ol Chiki} { "1C50} { "1C7F} {noto-olchiki} \BLOCK {Cyrillic Extended-C} { "1C80} { "1C8F} {noto} \BLOCK {Georgian Extended} { "1C90} { "1CBF} {noto-georgian} \BLOCK {Sundanese Supplement} { "1CC0} { "1CCF} {noto-sundanese} \BLOCK {Vedic Extensions} { "1CD0} { "1CFF} {noto-devanagari,unifont} \BLOCK {Phonetic Extensions} { "1D00} { "1D7F} {noto} \BLOCK {Phonetic Extensions Supplement} { "1D80} { "1DBF} {noto} \BLOCK {Combining Diacritical Marks Supplement} { "1DC0} { "1DFF} {noto,unifont} \BLOCK {Latin Extended Additional} { "1E00} { "1EFF} {noto} \BLOCK {Greek Extended} { "1F00} { "1FFF} {noto} \BLOCK {General Punctuation} { "2000} { "206F} {noto} \BLOCK {Superscripts and Subscripts} { "2070} { "209F} {noto} 
\BLOCK {Currency Symbols} { "20A0} { "20CF} {noto,unifont} \BLOCK {Combining Diacritical Marks for Symbols} { "20D0} { "20FF} {noto-math,noto-symbols,symbola,unifont} \BLOCK {Letterlike Symbols} { "2100} { "214F} {noto} \BLOCK {Number Forms} { "2150} { "218F} {freeserif,noto-symbols} \BLOCK {Arrows} { "2190} { "21FF} {noto-math,noto-symbols2} \BLOCK {Mathematical Operators} { "2200} { "22FF} {noto-math} \BLOCK {Miscellaneous Technical} { "2300} { "23FF} {noto-math,noto-symbols,noto-symbols2,symbola,unifont} \BLOCK {Control Pictures} { "2400} { "243F} {noto-symbols2} \BLOCK {Optical Character Recognition} { "2440} { "245F} {noto-symbols2} \BLOCK {Enclosed Alphanumerics} { "2460} { "24FF} {noto-symbols} \BLOCK {Box Drawing} { "2500} { "257F} {noto-mono} \BLOCK {Block Elements} { "2580} { "259F} {noto-mono} \BLOCK {Geometric Shapes} { "25A0} { "25FF} {noto-symbols2} \BLOCK {Miscellaneous Symbols} { "2600} { "26FF} {noto-symbols,noto-symbols2} \BLOCK {Dingbats} { "2700} { "27BF} {noto-symbols,noto-symbols2,symbola} \BLOCK {Miscellaneous Mathematical Symbols-A} { "27C0} { "27EF} {noto-math} \BLOCK {Supplemental Arrows-A} { "27F0} { "27FF} {noto-math} \BLOCK {Braille Patterns} { "2800} { "28FF} {noto-symbols2} \BLOCK {Supplemental Arrows-B} { "2900} { "297F} {noto-math} \BLOCK {Miscellaneous Mathematical Symbols-B} { "2980} { "29FF} {noto-math} \BLOCK {Supplemental Mathematical Operators} { "2A00} { "2AFF} {noto-math} \BLOCK {Miscellaneous Symbols and Arrows} { "2B00} { "2BFF} {noto-math,noto-symbols2} \BLOCK {Glagolitic} { "2C00} { "2C5F} {noto-glagolitic} \BLOCK {Latin Extended-C} { "2C60} { "2C7F} {noto} \BLOCK {Coptic} { "2C80} { "2CFF} {noto-coptic} \BLOCK {Georgian Supplement} { "2D00} { "2D2F} {noto-georgian} \BLOCK {Tifinagh} { "2D30} { "2D7F} {noto-tifinagh} \BLOCK {Ethiopic Extended} { "2D80} { "2DDF} {noto-ethiopic} \BLOCK {Cyrillic Extended-A} { "2DE0} { "2DFF} {noto} \BLOCK {Supplemental Punctuation} { "2E00} { "2E7F} {noto,symbola,unifont} \BLOCK {CJK Radicals Supplement} { "2E80} { "2EFF} {shserif} \BLOCK {Kangxi Radicals} { "2F00} { "2FDF} {shserif} \BLOCK {Ideographic Description Characters} { "2FF0} { "2FFF} {shserif} \BLOCK {CJK Symbols and Punctuation} { "3000} { "303F} {shserif} \BLOCK {Hiragana} { "3040} { "309F} {shserif} \BLOCK {Katakana} { "30A0} { "30FF} {shserif} \BLOCK {Bopomofo} { "3100} { "312F} {shsans} \BLOCK {Hangul Compatibility Jamo} { "3130} { "318F} {shserif} \BLOCK {Kanbun} { "3190} { "319F} {shserif} \BLOCK {Bopomofo Extended} { "31A0} { "31BF} {shsans} \BLOCK {CJK Strokes} { "31C0} { "31EF} {shserif} \BLOCK {Katakana Phonetic Extensions} { "31F0} { "31FF} {shserif} \BLOCK {Enclosed CJK Letters and Months} { "3200} { "32FF} {shserif,shsans} \BLOCK {CJK Compatibility} { "3300} { "33FF} {shserif,hanamina} \BLOCK {CJK Unified Ideographs Extension A} { "3400} { "4DBF} {shserif} \BLOCK {Yijing Hexagram Symbols} { "4DC0} { "4DFF} {noto-symbols2} \BLOCK {CJK Unified Ideographs} { "4E00} { "9FFF} {shserif,shsans} \BLOCK {Yi Syllables} { "A000} { "A48F} {noto-yi} \BLOCK {Yi Radicals} { "A490} { "A4CF} {noto-yi} \BLOCK {Lisu} { "A4D0} { "A4FF} {noto-lisu} \BLOCK {Vai} { "A500} { "A63F} {noto-vai} \BLOCK {Cyrillic Extended-B} { "A640} { "A69F} {noto} \BLOCK {Bamum} { "A6A0} { "A6FF} {noto-bamum} \BLOCK {Modifier Tone Letters} { "A700} { "A71F} {noto} \BLOCK {Latin Extended-D} { "A720} { "A7FF} {noto,unifont} \BLOCK {Syloti Nagri} { "A800} { "A82F} {noto-sylotinagri} \BLOCK {Common Indic Number Forms} { "A830} { "A83F} {noto-devanagari} \BLOCK {Phags-pa} { "A840} { 
"A87F} {noto-phagspa} \BLOCK {Saurashtra} { "A880} { "A8DF} {noto-saurashtra} \BLOCK {Devanagari Extended} { "A8E0} { "A8FF} {noto-devanagari} \BLOCK {Kayah Li} { "A900} { "A92F} {noto-kayahli} \BLOCK {Rejang} { "A930} { "A95F} {noto-rejang} \BLOCK {Hangul Jamo Extended-A} { "A960} { "A97F} {shserif} \BLOCK {Javanese} { "A980} { "A9DF} {noto-javanese} \BLOCK {Myanmar Extended-B} { "A9E0} { "A9FF} {noto-myanmar} \BLOCK {Cham} { "AA00} { "AA5F} {noto-cham} \BLOCK {Myanmar Extended-A} { "AA60} { "AA7F} {noto-myanmar} \BLOCK {Tai Viet} { "AA80} { "AADF} {noto-taiviet} \BLOCK {Meetei Mayek Extensions} { "AAE0} { "AAFF} {noto-meeteimayek} \BLOCK {Ethiopic Extended-A} { "AB00} { "AB2F} {noto-ethiopic} \BLOCK {Latin Extended-E} { "AB30} { "AB6F} {noto,unifont} \BLOCK {Cherokee Supplement} { "AB70} { "ABBF} {noto-cherokee} \BLOCK {Meetei Mayek} { "ABC0} { "ABFF} {noto-meeteimayek} \BLOCK {Hangul Syllables} { "AC00} { "D7AF} {shserif} \BLOCK {Hangul Jamo Extended-B} { "D7B0} { "D7FF} {shserif} %! \BLOCK {High Surrogates} { "D800} { "DB7F} {} %! \BLOCK {High Private Use Surrogates} { "DB80} { "DBFF} {} %! \BLOCK {Low Surrogates} { "DC00} { "DFFF} {} \BLOCK {Private Use Area} { "E000} { "F8FF} {lastresort} \BLOCK {CJK Compatibility Ideographs} { "F900} { "FAFF} {shserif,bs-han} \BLOCK {Alphabetic Presentation Forms} { "FB00} { "FB4F} {noto,noto-armenian,noto-hebrew} \BLOCK {Arabic Presentation Forms-A} { "FB50} { "FDFF} {scheherazade-new,noto-arabic,unifont} \BLOCK {Variation Selectors} { "FE00} { "FE0F} {unifont} \BLOCK {Vertical Forms} { "FE10} { "FE1F} {shserif} \BLOCK {Combining Half Marks} { "FE20} { "FE2F} {noto} \BLOCK {CJK Compatibility Forms} { "FE30} { "FE4F} {shserif} \BLOCK {Small Form Variants} { "FE50} { "FE6F} {shserif} \BLOCK {Arabic Presentation Forms-B} { "FE70} { "FEFF} {noto-arabic} \BLOCK {Halfwidth and Fullwidth Forms} { "FF00} { "FFEF} {shserif} \BLOCK {Specials} { "FFF0} { "FFFF} {noto,bs-han} \section{Plane 1 --- Supplementary Multilingual Plane} \BLOCK {Linear B Syllabary} { "10000} { "1007F} {noto-linearb} \BLOCK {Linear B Ideograms} { "10080} { "100FF} {noto-linearb} \BLOCK {Aegean Numbers} { "10100} { "1013F} {noto-linearb} \BLOCK {Ancient Greek Numbers} { "10140} { "1018F} {noto-symbols2} \BLOCK {Ancient Symbols} { "10190} { "101CF} {noto-symbols2} \BLOCK {Phaistos Disc} { "101D0} { "101FF} {noto-symbols2} \BLOCK {Lycian} { "10280} { "1029F} {noto-lycian} \BLOCK {Carian} { "102A0} { "102DF} {noto-carian} \BLOCK {Coptic Epact Numbers} { "102E0} { "102FF} {noto-symbols2} \BLOCK {Old Italic} { "10300} { "1032F} {noto-olditalic} \BLOCK {Gothic} { "10330} { "1034F} {noto-gothic} \BLOCK {Old Permic} { "10350} { "1037F} {noto-oldpermic} \BLOCK {Ugaritic} { "10380} { "1039F} {noto-ugaritic} \BLOCK {Old Persian} { "103A0} { "103DF} {noto-oldpersian} \BLOCK {Deseret} { "10400} { "1044F} {noto-deseret} \BLOCK {Shavian} { "10450} { "1047F} {noto-shavian} \BLOCK {Osmanya} { "10480} { "104AF} {noto-osmanya} \BLOCK {Osage} { "104B0} { "104FF} {noto-osage} \BLOCK {Elbasan} { "10500} { "1052F} {noto-elbasan} \BLOCK {Caucasian Albanian} { "10530} { "1056F} {noto-caucasianalbanian} \BLOCK {Vithkuqi} { "10570} { "105BF} {unifont2} \BLOCK {Linear A} { "10600} { "1077F} {noto-lineara} \BLOCK {Latin Extended-F} { "10780} { "107BF} {unifont2} \BLOCK {Cypriot Syllabary} { "10800} { "1083F} {noto-cypriot} \BLOCK {Imperial Aramaic} { "10840} { "1085F} {noto-imperialaramaic} \BLOCK {Palmyrene} { "10860} { "1087F} {noto-palmyrene} \BLOCK {Nabataean} { "10880} { "108AF} {noto-nabataean} \BLOCK 
{Hatran} { "108E0} { "108FF} {noto-hatran} \BLOCK {Phoenician} { "10900} { "1091F} {noto-phoenician} \BLOCK {Lydian} { "10920} { "1093F} {noto-lydian} \BLOCK {Meroitic Hieroglyphs} { "10980} { "1099F} {noto-meroitic} \BLOCK {Meroitic Cursive} { "109A0} { "109FF} {noto-meroitic} \BLOCK {Kharoshthi} { "10A00} { "10A5F} {noto-kharoshthi} \BLOCK {Old South Arabian} { "10A60} { "10A7F} {noto-oldsoutharabian} \BLOCK {Old North Arabian} { "10A80} { "10A9F} {noto-oldnortharabian} \BLOCK {Manichaean} { "10AC0} { "10AFF} {noto-manichaean} \BLOCK {Avestan} { "10B00} { "10B3F} {noto-avestan} \BLOCK {Inscriptional Parthian} { "10B40} { "10B5F} {noto-inscriptionalparthian} \BLOCK {Inscriptional Pahlavi} { "10B60} { "10B7F} {noto-inscriptionalpahlavi} \BLOCK {Psalter Pahlavi} { "10B80} { "10BAF} {noto-psalterpahlavi} \BLOCK {Old Turkic} { "10C00} { "10C4F} {noto-oldturkic} \BLOCK {Old Hungarian} { "10C80} { "10CFF} {noto-oldhungarian} \BLOCK {Hanifi Rohingya} { "10D00} { "10D3F} {noto-hanifirohingya} \BLOCK {Rumi Numeral Symbols} { "10E60} { "10E7F} {noto-symbols2} \BLOCK {Yezidi} { "10E80} { "10EBF} {noto-yezidi} \BLOCK {Old Sogdian} { "10F00} { "10F2F} {noto-oldsogdian} \BLOCK {Sogdian} { "10F30} { "10F6F} {noto-sogdian} \BLOCK {Old Uyghur} { "10F70} { "10FAF} {unifont2} \BLOCK {Chorasmian} { "10FB0} { "10FDF} {unifont2} \BLOCK {Elymaic} { "10FE0} { "10FFF} {noto-elymaic} \BLOCK {Brahmi} { "11000} { "1107F} {noto-brahmi,unifont2} \BLOCK {Kaithi} { "11080} { "110CF} {noto-kaithi,unifont2} \BLOCK {Sora Sompeng} { "110D0} { "110FF} {noto-sorasompeng} \BLOCK {Chakma} { "11100} { "1114F} {noto-chakma} \BLOCK {Mahajani} { "11150} { "1117F} {noto-mahajani} \BLOCK {Sharada} { "11180} { "111DF} {noto-sharada} \BLOCK {Sinhala Archaic Numbers} { "111E0} { "111FF} {noto-sinhala} \BLOCK {Khojki} { "11200} { "1124F} {noto-khojki} \BLOCK {Multani} { "11280} { "112AF} {noto-multani} \BLOCK {Khudawadi} { "112B0} { "112FF} {noto-khudawadi} \BLOCK {Grantha} { "11300} { "1137F} {noto-grantha} \BLOCK {Newa} { "11400} { "1147F} {noto-newa} \BLOCK {Tirhuta} { "11480} { "114DF} {noto-tirhuta} \BLOCK {Siddham} { "11580} { "115FF} {noto-siddham} \BLOCK {Modi} { "11600} { "1165F} {noto-modi} \BLOCK {Mongolian Supplement} { "11660} { "1167F} {noto-mongolian} \BLOCK {Takri} { "11680} { "116CF} {noto-takri,unifont2} \BLOCK {Ahom} { "11700} { "1174F} {noto-ahom} \BLOCK {Dogra} { "11800} { "1184F} {noto-dogra} \BLOCK {Warang Citi} { "118A0} { "118FF} {noto-warangciti} \BLOCK {Dives Akuru} { "11900} { "1195F} {unifont2} \BLOCK {Nandinagari} { "119A0} { "119FF} {noto-nandinagari} \BLOCK {Zanabazar Square} { "11A00} { "11A4F} {noto-zanabazarsquare} \BLOCK {Soyombo} { "11A50} { "11AAF} {noto-soyombo} \BLOCK {Unified Canadian Aboriginal Syllabics Extended-A} { "11AB0} { "11ABF} {unifont2} \BLOCK {Pau Cin Hau} { "11AC0} { "11AFF} {noto-paucinhau} \BLOCK {Bhaiksuki} { "11C00} { "11C6F} {noto-bhaiksuki} \BLOCK {Marchen} { "11C70} { "11CBF} {noto-marchen} \BLOCK {Masaram Gondi} { "11D00} { "11D5F} {noto-masaramgondi} \BLOCK {Gunjala Gondi} { "11D60} { "11DAF} {noto-gunjalagondi} \BLOCK {Makasar} { "11EE0} { "11EFF} {unifont2} \BLOCK {Lisu Supplement} { "11FB0} { "11FBF} {noto-lisu} \BLOCK {Tamil Supplement} { "11FC0} { "11FFF} {noto-tamilsupplement} \BLOCK {Cuneiform} { "12000} { "123FF} {noto-cuneiform} \BLOCK {Cuneiform Numbers and Punctuation} { "12400} { "1247F} {noto-cuneiform} \BLOCK {Early Dynastic Cuneiform} { "12480} { "1254F} {noto-cuneiform} \BLOCK {Cypro-Minoan} { "12F90} { "12FFF} {unifont2} \BLOCK {Egyptian Hieroglyphs} { 
"13000} { "1342F} {noto-egyptianhieroglyphs} \BLOCK {Egyptian Hieroglyph Format Controls} { "13430} { "1343F} {noto-egyptianhieroglyphs,unifont2} \BLOCK {Anatolian Hieroglyphs} { "14400} { "1467F} {noto-anatolianhieroglyphs} \BLOCK {Bamum Supplement} { "16800} { "16A3F} {noto-bamum} \BLOCK {Mro} { "16A40} { "16A6F} {noto-mro} \BLOCK {Tangsa} { "16A70} { "16ACF} {unifont2} \BLOCK {Bassa Vah} { "16AD0} { "16AFF} {noto-bassavah} \BLOCK {Pahawh Hmong} { "16B00} { "16B8F} {noto-pahawhhmong} \BLOCK {Medefaidrin} { "16E40} { "16E9F} {noto-medefaidrin} \BLOCK {Miao} { "16F00} { "16F9F} {noto-miao} \BLOCK {Ideographic Symbols and Punctuation} { "16FE0} { "16FFF} {bs-han,bs-tangut,noto-nushu,unifont2} \BLOCK {Tangut} { "17000} { "187FF} {bs-tangut} \BLOCK {Tangut Components} { "18800} { "18AFF} {bs-tangut} \BLOCK {Khitan Small Script} { "18B00} { "18CFF} {bs-khitan} \BLOCK {Tangut Supplement} { "18D00} { "18D7F} {bs-tangut} \BLOCK {Kana Extended-B} { "1AFF0} { "1AFFF} {unifont2} \BLOCK {Kana Supplement} { "1B000} { "1B0FF} {bs-han} \BLOCK {Kana Extended-A} { "1B100} { "1B12F} {bs-han,unifont2} \BLOCK {Small Kana Extension} { "1B130} { "1B16F} {bs-han} \BLOCK {Nushu} { "1B170} { "1B2FF} {noto-nushu} \BLOCK {Duployan} { "1BC00} { "1BC9F} {noto-duployan} \BLOCK {Shorthand Format Controls} { "1BCA0} { "1BCAF} {unifont2} \BLOCK {Znamenny Musical Notation} { "1CF00} { "1CFCF} {unifont2} \BLOCK {Byzantine Musical Symbols} { "1D000} { "1D0FF} {noto-music} \BLOCK {Musical Symbols} { "1D100} { "1D1FF} {noto-music,unifont2} \BLOCK {Ancient Greek Musical Notation} { "1D200} { "1D24F} {noto-music} \BLOCK {Mayan Numerals} { "1D2E0} { "1D2FF} {noto-mayannumerals} \BLOCK {Tai Xuan Jing Symbols} { "1D300} { "1D35F} {noto-symbols2} \BLOCK {Counting Rod Numerals} { "1D360} { "1D37F} {noto-symbols2} \BLOCK {Mathematical Alphanumeric Symbols} { "1D400} { "1D7FF} {noto-math,xits} \BLOCK {Sutton SignWriting} { "1D800} { "1DAAF} {unifont2} \BLOCK {Latin Extended-G} { "1DF00} { "1DFFF} {unifont2} \BLOCK {Glagolitic Supplement} { "1E000} { "1E02F} {noto-glagolitic} \BLOCK {Nyiakeng Puachue Hmong} { "1E100} { "1E14F} {noto-nyiakengpuachuehmong} \BLOCK {Toto} { "1E290} { "1E2BF} {unifont2} \BLOCK {Wancho} { "1E2C0} { "1E2FF} {noto-wancho} \BLOCK {Ethiopic Extended-B} { "1E7E0} { "1E7FF} {unifont2} \BLOCK {Mende Kikakui} { "1E800} { "1E8DF} {noto-mendekikakui} \BLOCK {Adlam} { "1E900} { "1E95F} {noto-adlam} \BLOCK {Indic Siyaq Numbers} { "1EC70} { "1ECBF} {noto-indicsiyaqnumbers,unifont2} \BLOCK {Ottoman Siyaq Numbers} { "1ED00} { "1ED4F} {unifont2} \BLOCK {Arabic Mathematical Alphabetic Symbols} { "1EE00} { "1EEFF} {noto-math,xits} \BLOCK {Mahjong Tiles} { "1F000} { "1F02F} {noto-symbols2} \BLOCK {Domino Tiles} { "1F030} { "1F09F} {noto-symbols2} \BLOCK {Playing Cards} { "1F0A0} { "1F0FF} {noto-symbols2} \BLOCK {Enclosed Alphanumeric Supplement} { "1F100} { "1F1FF} {shserif,noto-symbols,bs-han} \BLOCK {Enclosed Ideographic Supplement} { "1F200} { "1F2FF} {shserif,bs-han} \BLOCK {Miscellaneous Symbols and Pictographs} { "1F300} { "1F5FF} {noto-symbols,noto-symbols2,symbola} \BLOCK {Emoticons} { "1F600} { "1F64F} {symbola} \BLOCK {Ornamental Dingbats} { "1F650} { "1F67F} {noto-symbols2} \BLOCK {Transport and Map Symbols} { "1F680} { "1F6FF} {noto-symbols2,symbola} \BLOCK {Alchemical Symbols} { "1F700} { "1F77F} {noto-symbols} \BLOCK {Geometric Shapes Extended} { "1F780} { "1F7FF} {noto-symbols2,unifont2} \BLOCK {Supplemental Arrows-C} { "1F800} { "1F8FF} {noto-symbols2} \BLOCK {Supplemental Symbols and Pictographs} { "1F900} { 
"1F9FF} {symbola,unifont2} \BLOCK {Chess Symbols} { "1FA00} { "1FA6F} {noto-symbols2} \BLOCK {Symbols and Pictographs Extended-A} { "1FA70} { "1FAFF} {noto-symbols2} \BLOCK {Symbols for Legacy Computing} { "1FB00} { "1FBFF} {noto-symbols2} \section{Plane 2 --- Supplementary Ideographic Plane} \BLOCK {CJK Unified Ideographs Extension B} { "20000} { "2A6DF} {shserif,hanamina,hanaminb} \BLOCK {CJK Unified Ideographs Extension C} { "2A700} { "2B73F} {shserif,hanamina,hanaminb} \BLOCK {CJK Unified Ideographs Extension D} { "2B740} { "2B81F} {shserif,hanamina,hanaminb} \BLOCK {CJK Unified Ideographs Extension E} { "2B820} { "2CEAF} {shserif,hanamina,hanaminb} \BLOCK {CJK Unified Ideographs Extension F} { "2CEB0} { "2EBEF} {shserif,hanamina,hanaminb} \BLOCK {CJK Compatibility Ideographs Supplement} { "2F800} { "2FA1F} {shserif,hanamina,hanaminb} \section{Plane 3 --- Tertiary Ideographic Plane} \BLOCK {CJK Unified Ideographs Extension G} { "30000} { "3134F} {glyphs-wiki} \section{Plane 14 --- Supplementary Special-purpose Plane} \BLOCK {Tags} { "E0000} { "E007F} {lastresort} \BLOCK {Variation Selectors Supplement} { "E0100} { "E01EF} {bs-han} \section{Plane 15 --- Supplementary Private Use Area-A} %! \BLOCK {Supplementary Private Use Area-A} { "F0000} { "FFFFF} {lastresort} \BLOCK {Supplementary Private Use Area-A} { "FFF80} { "FFFFF} {lastresort} \section{Plane 16 --- Supplementary Private Use Area-B} %! \BLOCK {Supplementary Private Use Area-B} {"100000} {"10FFFF} {lastresort} \BLOCK {Supplementary Private Use Area-B} {"10FF80} {"10FFFF} {lastresort} \end{document}
{ "alphanum_fraction": 0.4766527872, "avg_line_length": 50.724137931, "ext": "tex", "hexsha": "3ff412ee770c32655e55ff6b98ff0da40853516a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5fcf209fca2c9ffd5963dfb58c2cdd408f1ebb6b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Stone-Zeng/latex-showcase", "max_forks_repo_path": "codecharts/codecharts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5fcf209fca2c9ffd5963dfb58c2cdd408f1ebb6b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Stone-Zeng/latex-showcase", "max_issues_repo_path": "codecharts/codecharts.tex", "max_line_length": 134, "max_stars_count": null, "max_stars_repo_head_hexsha": "5fcf209fca2c9ffd5963dfb58c2cdd408f1ebb6b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Stone-Zeng/latex-showcase", "max_stars_repo_path": "codecharts/codecharts.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 15502, "size": 47072 }
\documentclass{article}
\usepackage{fontspec}
\usepackage{fullpage}
\usepackage{multicol}
\usepackage{multirow}
\usepackage{tikz}
\begin{document}
\newfontfamily\swfill{SuttonSignWritingFill.ttf}
\newfontfamily\swline{SuttonSignWritingLine.ttf}
\newcommand{\bul}{\hfil$\bullet$&}
\newenvironment{glossary}{\begin{multicols}{5}\begin{center}}{\end{center}\end{multicols}}
\setcounter{secnumdepth}{0}
\setlength{\columnseprule}{1pt}
\section{Supplement For Lesson 4}
\begin{center}
\it Objectives inspired by, vocabulary transcribed from, and sentences and story by Bill Vicars. Handshape photos by Adam Frost. No endorsement implied nor given by either.
\end{center}
\subsection{Objectives}
\begin{tabular}{p{1cm}p{14cm}}
\bul I have completed the objectives for this lesson.\\
\bul I know how SignWriting handles non-manual markers.\\
\bul I am able to demonstrate the iconicity of SignWriting.\\
\bul I understand how directionality affects SignWriting.\\
\bul I am able to read the numbers 21--30.\\
\bul I am able to show the meaning and form of the symbol group in the timing category.\\
\bul I understand which types of handshapes are in Symbol Groups six and seven.\\
\bul I am able to draw the side palmshape in all forms.\\
\bul I am able to draw and demonstrate what fill three means.\\
\bul I am able to read, write, and sign half of the ASL handshapes in symbol group three.\\
\bul I am able to recognize the vocabulary for this lesson.\\
\bul I am able to read the practice sentences for this lesson.\\
\bul I am able to read the practice story for this lesson.\\
\end{tabular}
\subsection{Non-Manual Markers}
You may well think I'm being silly for bothering to mention this by now, but here goes. A non-manual marker is anything not (non-) on the hands (manual), which we have seen quite a bit already. Things like B518x518S30a00482x483 meaning a yes or no question versus B518x518S30c00482x483 meaning a question to answer.
\subsection{The Iconicity of SignWriting}
When selecting a symbol for index finger extended, you could use a symbol like \tikz{\draw(0,0)circle(5pt);\filldraw(0,0)circle(1pt);} and just learn that it means index finger extended. Given enough arbitrary symbols, this could be just as effective in recording ASL even if it took a little more memory at first. There is nothing wrong with choosing to do this. Instead, most of SignWriting has been designed to remind the reader and writer of what the item is. All of the handshapes, movement, and faces have been designed to look like the actual hands, movement, and faces involved. The handshapes even move where the fingers are drawn to match. When you see B508x515S10000493x485 and B508x515S10010493x485, they both look very much like your hand: in one case with the closed fingers to the left of your index and in the other with the fist to the left of your index. That is why the finger moves on B508x515S10020493x485: it now shows where your actual finger is. When looking at movement, knowing that B507x508S22a00494x493 is up while B507x508S26500493x493 is forward is an arbitrary fact about the stems that you simply remember, but both B507x508S22a02494x493 and B507x508S26502493x493 are to the left and look like it. And as for faces, you don't even really need to be told that B518x518S30a00482x483 means eyebrows up, it just looks like it. This has the additional side-effect that ASL words which are iconic are also iconic in SignWriting.
B525x539S15a11502x476S15a19476x476S20500495x462S23904505x501S2391c477x501S2fb04494x533 looks like it might mean house, B531x538S14c10487x463S15a56470x526S37900497x492S2e208511x478 looks like a tree, and B535x539S14c00507x476S23108507x512S14c08470x488S23118468x522S22520503x461S22520465x474 looks like you are showing a fire.
\subsection{Directionality in SignWriting}
Most languages do something called conjugation --- that is, they change their form depending on who is performing the action (the actor), who the action is being performed on (the patient), or both. English is of the first type --- I \emph{speak} to myself, she \emph{speaks} to me, I \emph{speak} to her, and she \emph{speaks} to herself. If you know who the actor is then you can conjugate correctly regardless of the patient. ASL is of the third type: it conjugates based on both.
\begin{center}
\begin{tabular}{*{4}{c}}
B514x552S10008486x480S10020492x487S20500504x485S26614487x449S26600491x522&B514x534S10008486x497S10020492x504S20500504x502S26614487x466&B514x536S10008486x464S10020492x471S20500504x469S26600491x506&B553x519S10008490x482S10020496x489S20500508x487S26616448x500S26602523x501\\
Meet&You meet me&I meet you&They meet\\
\end{tabular}
\end{center}
\subsection{The Numbers 21 through 30}
\begin{center}
\begin{tabular}{*{5}{c}}
\textbf{21}&\textbf{22}&\textbf{23}&\textbf{24}&\textbf{25}\\
B522x515S1e020499x485S21800479x503&
B518x524S10e50482x476S2890a489x509&
B519x517S12420481x487S22114493x484&
B525x516S1dc20476x486S14420503x485&
B521x514S1c510491x486S22124480x504\\
\textbf{26}&\textbf{27}&\textbf{28}&\textbf{29}&\textbf{30}\\
B523x516S1dc10477x485S18720505x487&
B525x516S1dc10476x485S1a520504x488&
B525x516S1dc10476x485S1bb20504x488&
B525x516S1dc10476x485S1ce20503x486&
B522x516S11e20478x485S17620506x500\\
\end{tabular}
\end{center}
\subsection{The Timing Category}
We informally call this category timing, though its official name is ``Dynamics \& Timing'', and it has one base symbol in it. The symbol groups can be thought of as three groups of ten, and here is where knowing the categories in order starts to help.
\begin{center}
\begin{tabular}{ccc}
\textbf{Symbol}\\
\textbf{Group}&\textbf{Name}&\textbf{Example}\\
\textbf{21}&Timing&B506x504S2f700494x497\\
\end{tabular}
\end{center}
With there being only one, and it sharing the category's informal name (and official name), you should be able to remember it fairly easily, but you technically don't have to yet.
\subsection{Symbol Groups Six and Seven}
The sixth Symbol Group we informally call six, though its official name is ``Baby Finger''. Symbol Group Baby Finger (Six) has all the handshapes where either the baby finger is extended or all fingers except the baby finger are extended.

The seventh Symbol Group we informally call seven, though its official name is ``Ring Finger''. Symbol Group Ring Finger (Seven) has all the handshapes where either the ring finger is extended or all fingers except the ring finger are extended.
Before you can consider this lesson complete, you need to be able to list off the symbol groups as: ``one, two, three, four, five, six, seven''.
\subsection{The Side Palmshape}
\begin{center}
\begin{tabular}{r*{6}{c}}
&\textbf{Fill 1}&\textbf{Fill 2}&\textbf{Fill 3}&\textbf{Fill 4}&\textbf{Fill 5}&\textbf{Fill 6}\\
\textbf{Right}&
B512x508S18200489x493&
B512x508S18210489x493&
B512x508S18220489x493&
B512x508S18230489x493&
B512x508S18240489x493&
B512x508S18250489x493\\
\textbf{Left}&
B512x508S18208489x493&
B512x508S18218489x493&
B512x508S18228489x493&
B512x508S18238489x493&
B512x508S18248489x493&
B512x508S18258489x493\\
\end{tabular}
\end{center}
\subsection{The Third Fill}
\subsubsection{Hand Symbols}
\begin{center}
B508x515S10020493x485 B508x515S10e20493x485 B512x515S11e20489x485
\end{center}
Any hand symbol drawn in the third fill means that the signer's palm is facing away from the signer. For all the hand symbols, the empty portion represents the signer's palm and the filled portion represents the back of the hand. So for fill three, even if the hand were open you would only see the back of your hand --- leaving the fill three symbol completely filled.
\subsubsection{Movement Symbols}
\begin{center}
B508x515S22b20492x485 B508x515S25620492x485 B508x515S26620492x485
\end{center}
Any movement symbol drawn in the third fill means that both hands are doing the movement together, so these all show both hands moving.
\subsubsection{Everything Else}
\begin{center}
B506x504S2f720494x497 B518x518S30a20482x483 B537x504S38720463x496
\end{center}
The fills for other categories tend to be a bit more variable. Here we have double-fast, left eyebrow raised, and slow comma.
\subsection{First ASL Handshapes From Symbol Group Three}
The twelve handshapes in Symbol Group Three used by ASL in order are: {\it Index Middle Thumb; Index Middle Bent, Thumb Straight; Index Middle Thumb Bent; Index Up, Middle Hinge, Thumb Side; Index Middle Thumb Cup; Index Middle Thumb Circle;} Index Middle Unit, Thumb Side; Index Middle Unit Hinge, Thumb Side; Index Middle Cross, Thumb Side; Middle Thumb Circle, Index Up; Index Middle Thumb, Angle; and Middle Thumb Angle Out, Index Up. In case you hadn't noticed before, the \emph{italic} handshapes are the ones we are covering in this lesson.
\subsubsection{The Index Middle Thumb Handshape} \begin{center} \begin{tabular}{r*{6}{c}} &\textbf{Fill 1}&\textbf{Fill 2}&\textbf{Fill 3}&\textbf{Fill 4}&\textbf{Fill 5}&\textbf{Fill 6}\\ \multirow{2}{*}{\textbf{Right}}& B512x511S11e00488x489& B512x511S11e10488x489& B512x511S11e20488x489& B512x511S11e30488x489& B512x511S11e40488x489& B512x511S11e50488x489\\ & \includegraphics[scale=0.1]{images/03-01-1.jpg}& \includegraphics[scale=0.1]{images/03-01-2.jpg}& \includegraphics[scale=0.1]{images/03-01-3.jpg}& \includegraphics[scale=0.1]{images/03-01-4.jpg}& \includegraphics[scale=0.1]{images/03-01-5.jpg}& \includegraphics[scale=0.1]{images/03-01-6.jpg}\\ \textbf{Left}& B512x511S11e08488x489& B512x511S11e18488x489& B512x511S11e28488x489& B512x511S11e38488x489& B512x511S11e48488x489& B512x511S11e58488x489\\ \end{tabular} \end{center} \subsubsection{The Index Middle Bent, Thumb Straight Handshape} \begin{center} \begin{tabular}{r*{6}{c}} &\textbf{Fill 1}&\textbf{Fill 2}&\textbf{Fill 3}&\textbf{Fill 4}&\textbf{Fill 5}&\textbf{Fill 6}\\ \multirow{2}{*}{\textbf{Right}}& B512x511S12100488x489& B512x511S12110488x489& B512x511S12120488x489& B512x511S12130488x489& B512x511S12140488x489& B512x511S12150488x489\\ & \includegraphics[scale=0.1]{images/03-02-1.jpg}& \includegraphics[scale=0.1]{images/03-02-2.jpg}& \includegraphics[scale=0.1]{images/03-02-3.jpg}& \includegraphics[scale=0.1]{images/03-02-4.jpg}& \includegraphics[scale=0.1]{images/03-02-5.jpg}& \includegraphics[scale=0.1]{images/03-02-6.jpg}\\ \textbf{Left}& B512x511S12108488x489& B512x511S12118488x489& B512x511S12128488x489& B512x511S12138488x489& B512x511S12148488x489& B512x511S12158488x489\\ \end{tabular} \end{center} \subsubsection{The Index Middle Thumb Bent Handshape} \begin{center} \begin{tabular}{r*{6}{c}} &\textbf{Fill 1}&\textbf{Fill 2}&\textbf{Fill 3}&\textbf{Fill 4}&\textbf{Fill 5}&\textbf{Fill 6}\\ \multirow{2}{*}{\textbf{Right}}& B512x511S12200488x489& B512x511S12210488x489& B512x511S12220488x489& B512x511S12230488x489& B512x511S12240488x489& B512x511S12250488x489\\ & \includegraphics[scale=0.1]{images/03-03-1.jpg}& \includegraphics[scale=0.1]{images/03-03-2.jpg}& \includegraphics[scale=0.1]{images/03-03-3.jpg}& \includegraphics[scale=0.1]{images/03-03-4.jpg}& \includegraphics[scale=0.1]{images/03-03-5.jpg}& \includegraphics[scale=0.1]{images/03-03-6.jpg}\\ \textbf{Left}& B512x511S12208488x489& B512x511S12218488x489& B512x511S12228488x489& B512x511S12238488x489& B512x511S12248488x489& B512x511S12258488x489\\ \end{tabular} \end{center} \subsubsection{The Index Up, Middle Hinge, Thumb Side Handshape} This symbol introduces a new concept in fills 1, 3, 4, and 6. 
\begin{center} \begin{tabular}{r*{6}{c}} &\textbf{Fill 1}&\textbf{Fill 2}&\textbf{Fill 3}&\textbf{Fill 4}&\textbf{Fill 5}&\textbf{Fill 6}\\ \multirow{2}{*}{\textbf{Right}}& B512x511S12400488x489& B512x511S12410488x489& B512x511S12420488x489& B512x511S12430488x489& B512x511S12440488x489& B512x511S12450488x489\\ & \includegraphics[scale=0.1]{images/03-04-1.jpg}& \includegraphics[scale=0.1]{images/03-04-2.jpg}& \includegraphics[scale=0.1]{images/03-04-3.jpg}& \includegraphics[scale=0.1]{images/03-04-4.jpg}& \includegraphics[scale=0.1]{images/03-04-5.jpg}& \includegraphics[scale=0.1]{images/03-04-6.jpg}\\ \textbf{Left}& B512x511S12408488x489& B512x511S12418488x489& B512x511S12428488x489& B512x511S12438488x489& B512x511S12448488x489& B512x511S12458488x489\\ \end{tabular} \end{center} When a finger (or thumb) is pointing at the signer in fill 1, the finger (or thumb) becomes a circle. This convention matches with the diagonal movement that you will learn about later --- the circle means it is heading your direction. Normally, something heading away from you would be a line, but for handshapes we don't do that for fills 3 and 6. Instead we still use the circle to both match fills 1 and 4 and so that we won't confuse fill 3 with fill 6 or have to make sure our lines in fill 6 are separated ``enough'' to show. %%%%% When writing you are free to use either the circle or to use something that looks more like fills 2 and 5 --- whichever is easier for you, just like you can use ``g'' or {\em ``g''} in your writing of English. Both are the same letter regardless of what you may see in print. \subsubsection{The Index Middle Thumb Cup Handshape} \begin{center} \begin{tabular}{r*{6}{c}} &\textbf{Fill 1}&\textbf{Fill 2}&\textbf{Fill 3}&\textbf{Fill 4}&\textbf{Fill 5}&\textbf{Fill 6}\\ \multirow{2}{*}{\textbf{Right}}& B512x511S12800488x489& B512x511S12810488x489& B512x511S12820488x489& B512x511S12830488x489& B512x511S12840488x489& B512x511S12850488x489\\ & \includegraphics[scale=0.1]{images/03-05-1.jpg}& \includegraphics[scale=0.1]{images/03-05-2.jpg}& \includegraphics[scale=0.1]{images/03-05-3.jpg}& \includegraphics[scale=0.1]{images/03-05-4.jpg}& \includegraphics[scale=0.1]{images/03-05-5.jpg}& \includegraphics[scale=0.1]{images/03-05-6.jpg}\\ \textbf{Left}& B512x511S12808488x489& B512x511S12818488x489& B512x511S12828488x489& B512x511S12838488x489& B512x511S12848488x489& B512x511S12858488x489\\ \end{tabular} \end{center} \subsubsection{The Index Middle Thumb Circle Handshape} \begin{center} \begin{tabular}{r*{6}{c}} &\textbf{Fill 1}&\textbf{Fill 2}&\textbf{Fill 3}&\textbf{Fill 4}&\textbf{Fill 5}&\textbf{Fill 6}\\ \multirow{2}{*}{\textbf{Right}}& B512x511S12900488x489& B512x511S12910488x489& B512x511S12920488x489& B512x511S12930488x489& B512x511S12940488x489& B512x511S12950488x489\\ & \includegraphics[scale=0.1]{images/03-06-1.jpg}& \includegraphics[scale=0.1]{images/03-06-2.jpg}& \includegraphics[scale=0.1]{images/03-06-3.jpg}& \includegraphics[scale=0.1]{images/03-06-4.jpg}& \includegraphics[scale=0.1]{images/03-06-5.jpg}& \includegraphics[scale=0.1]{images/03-06-6.jpg}\\ \textbf{Left}& B512x511S12908488x489& B512x511S12918488x489& B512x511S12928488x489& B512x511S12938488x489& B512x511S12948488x489& B512x511S12958488x489\\ \end{tabular} \end{center} \subsection{Vocabulary} \begin{glossary} \textbf{21}\\ AS1e020S21800M522x515S1e020499x485S21800479x503 \textbf{22}\\ AS10e50S2890aM518x524S10e50482x476S2890a489x509 \textbf{23}\\ AS12420S22114M519x517S12420481x487S22114493x484 \textbf{24}\\ 
AS1dc20S14420M525x516S1dc20476x486S14420503x485 \textbf{25}\\ AS1c510S22124M521x514S1c510491x486S22124480x504 \textbf{26}\\ AS1dc10S18720M523x516S1dc10477x485S18720505x487 \textbf{27}\\ AS1dc10S1a520M525x516S1dc10476x485S1a520504x488 \textbf{28}\\ AS1dc10S1bb20M525x516S1dc10476x485S1bb20504x488 \textbf{29}\\ AS1dc10S1ce20M525x516S1dc10476x485S1ce20503x486 \textbf{30}\\ AS11e20S17620M522x516S11e20478x485S17620506x500 \textbf{angry}\\ AS14e02S14e0aS28808S28810S2f700S2fb00M534x527S14e02503x501S14e0a466x502S28808511x478S28810474x478S2f700493x474S2fb00491x484 \textbf{apologize}\\ AS1f701S21100M511x520S1f701490x481S21100495x506 \textbf{aunt}\\ AS1f710S2e230S2ff00M541x562S1f710521x510S2ff00482x483S2e230519x530 \textbf{baby}\\ AS15a32S15a3aS20500S27126S37906S37906M550x520S27126535x481S37906466x505S15a3a503x499S20500450x497S37906482x491S15a32460x487 \textbf{bed}\\ AS15a17S15a19S20500S30105M541x541S15a17514x518S15a19506x510S30105482x477S20500531x504 \textbf{bedroom}\\ AS15a17S20500S15a40S15a48S22104S22104S18040S18048S2ff00M539x593S15a17509x507S2ff00482x483S20500520x492S15a40512x541S15a48482x539S22104527x544S22104466x544S18048464x576S18040507x578 \textbf{box}\\ AS15a40S15a48S22104S22104S18040S18048M538x527S15a40511x475S15a48481x473S22104526x478S22104465x478S18048463x510S18040506x512 \textbf{brush teeth}\\ AS10012S27102S34700M528x566S34700482x483S10012498x520S27102495x526 \textbf{cry}\\ AS10000S10008S20e00S22f24S10608S10600S31400M523x579S20e00494x525S31400482x483S10008477x494S22f24488x542S10000508x494S10600508x553S10608476x552 \textbf{daughter}\\ AS15d10S20500S22a03S15d32S2ff00M533x569S2ff00482x483S15d10514x508S15d32487x550S20500520x496S22a03501x532 \textbf{don't want}\\ AS14c30S14c38S21600S21600S2ad03S2ad12S14c50S14c58S2fb04M534x557S2ad03513x495S2ad12476x495S2fb04493x517S14c58467x524S14c50510x526S14c30508x458S14c38470x460S21600479x444S21600516x444 \textbf{excuse}\\ AS18011S15d39S26a07S20f00M529x534S15d39471x511S18011485x501S20f00501x466S26a07508x484 \textbf{feel}\\ AS1c501S22a00S20e00M513x531S1c501488x506S20e00491x489S22a00490x470 \textbf{forgive}\\ AS18011S15d39S26a07S20f00M529x534S15d39471x511S18011485x501S20f00501x466S26a07508x484 \textbf{friend}\\ AS10651S1063aS20800S10659S10632S20800M527x534S1063a474x478S10651492x466S20800498x483S10659473x509S10632501x519S20800489x524 \textbf{happy}\\ AS15a02S15a09S20e00S2ea28M526x529S2ea28492x504S20e00494x485S15a01503x472S15a09474x472 \textbf{help}\\ AS15a37S1f502S20500S22b20M512x550S15a37489x527S1f502493x490S20500496x513S22b20494x450 \textbf{hurt}\\ AS10052S1000aS22a03S22a15S10002S1005aS2fb04M560x549S1000a440x466S10052530x452S10002503x534S1005a466x521S22a15474x492S22a03512x492S2fb04492x512 \textbf{idea}\\ AS19200S26504S20500S2ff00M550x518S2ff00482x483S19200529x459S26504519x469S20500520x487 \textbf{if}\\ AS19200S26a04S20500S2ff00M563x518S2ff00482x483S19200542x478S26a04519x487S20500504x501 \textbf{injury}\\ AS10052S1000aS22a03S22a15S10002S1005aS2fb04M560x549S1000a440x466S10052530x452S10002503x534S1005a466x521S22a15474x492S22a03512x492S2fb04492x512 \textbf{lay off}\\ AS18011S15d39S26a07S20f00M529x534S15d39471x511S18011485x501S20f00501x466S26a07508x484 \textbf{love}\\ AS20305S20301S20500S20500S37600M534x520S37600489x497S20305506x481S20301475x481S20500524x497S20500467x496 \textbf{pain}\\ AS10052S1000aS22a03S22a15S10002S1005aS2fb04M560x549S1000a440x466S10052530x452S10002503x534S1005a466x521S22a15474x492S22a03512x492S2fb04492x512 \textbf{pardon}\\ AS18011S15d39S26a07S20f00M529x534S15d39471x511S18011485x501S20f00501x466S26a07508x484 \textbf{regret}\\ 
AS1f701S21100M511x520S1f701490x481S21100495x506 \textbf{room}\\ AS15a40S15a48S22104S22104S18040S18048M538x527S15a40511x475S15a48481x473S22104526x478S22104465x478S18048463x510S18040506x512 \textbf{sad}\\ AS14c08S14c00S22a04S22a14S2fb04S2ff00M523x556S2fb04492x550S14c08475x498S2ff00482x483S22a04507x534S22a14479x534S14c00500x498 \textbf{son}\\ AS15d11S20500S22b03S15d32S2ff00M541x560S2ff00482x483S15d11518x485S15d32488x541S20500510x475S22b03505x512 \textbf{sorry}\\ AS1f701S21100M511x520S1f701490x481S21100495x506 \textbf{stop}\\ AS15a41S15a36S20500S22a04M515x524S15a36488x512S20500505x497S22a04492x477S15a41486x498 \textbf{suppose}\\ AS19200S26a04S20500S2ff00M563x518S2ff00482x483S19200542x478S26a04519x487S20500504x501 \textbf{uncle}\\ AS11510S2e230S2ff00M541x540S2e230522x508S2ff00482x483S11510523x474 \textbf{wash}\\ AS1f721S1f70fS21200M528x517S1f70f473x496S1f721473x484S21200500x491 \end{glossary} \subsection{Practice Sheet 4.A} \begin{multicols}{5} \begin{center} M508x515S10000493x485 % 1 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M510x523S10040495x493S26500491x478 % you \\M513x531S1c501488x506S20e00491x489S22a00490x470 % feel \\M526x529S2ea28492x504S20e00494x485S15a01503x472S15a09474x472 % happy \\M518x518S30c00482x483 % \? \\M522x529S10012492x496S10018488x499S2e736479x472 % when \\M536x507S38900464x493 % ? \vfil \columnbreak M508x515S10e00493x485 % 2 \\M536x504S38800464x496 % . \\M541x540S2e230522x508S2ff00482x483S11510523x474 % uncle \\M537x504S38700463x496 % , \\M510x523S10040495x493S26500491x478 % you \\M518x518S30c00482x483 % \? \\M526x535S22a20494x501S14c08474x465S14c00503x465S20338478x520S20330508x520 % how many \\M536x507S38900464x493 % ? \vfil \columnbreak M512x515S11e00489x485 % 3 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M507x523S15a28494x496S26500493x477 % your \\M518x518S2ff00482x483S20500495x469S14c10468x453 % father \\M537x504S38700463x496 % , \\M518x518S30c00482x483 % \? \\M526x535S22a20494x501S14c08474x465S14c00503x465S20338478x520S20330508x520 % how many \\M541x560S2ff00482x483S15d11518x485S15d32488x541S20500510x475S22b03505x512 % son \\M536x507S38900464x493 % ? \vfil \columnbreak M511x516S14400489x485 % 4 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M510x523S10040495x493S26500491x478 % you \\M532x518S18049468x483S18041507x483S20500486x507S20500504x507 % have \\M550x520S27126535x481S37906466x505S15a3a503x499S20500450x497S37906482x491S15a32460x487 % baby \\M536x507S38900464x493 % ? \vfil \columnbreak M512x516S14c00489x485 % 5 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M507x523S15a28494x496S26500493x477 % your \\M539x593S15a17509x507S2ff00482x483S20500520x492S15a40512x541S15a48482x539S22104527x544S22104466x544S18048464x576S18040507x578 % bedroom \\M539x527S1e110506x473S1e118470x473S26606509x511S26612462x511 % big \\M536x507S38900464x493 % ? \vfil \end{center} \end{multicols} \subsection{Practice Sheet 4.B} \begin{multicols}{5} \begin{center} M509x515S18720491x486 % 6 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M518x518S10043488x483S20500482x507 % me \\M512x524S10620489x476S22e04495x506 % need \\M528x566S34700482x483S10012498x520S27102495x526 % brush teeth \\M536x507S38900464x493 % ? \vfil \columnbreak M511x514S1a520490x486 % 7 \\M536x504S38800464x496 % . 
\\M510x523S10040495x493S26500491x478 % you \\M513x531S1c501488x506S20e00491x489S22a00490x470 % feel \\M534x543S14c30507x457S14c38469x458S15030508x512S15038467x511S26524493x493 % want \\M523x579S20e00494x525S31400482x483S10008477x494S22f24488x542S10000508x494S10600508x553S10608476x552 % cry \\M518x518S30a00482x483 % y/n \\M510x523S10040495x493S26500491x478 % you \\M536x507S38900464x493 % ? \vfil \columnbreak M511x514S1bb20490x486 % 8 \\M536x504S38800464x496 % . \\M518x518S30c00482x483 % \? \\M507x523S15a28494x496S26500493x477 % your \\M545x522S26500524x507S18510520x489S22104527x476S2ff00482x483 % boy \\M525x525S38701475x475 % / \\M525x554S1f540510x509S22a03486x540S20e00497x531S2ff00482x483 % girl \\M527x534S1063a474x478S10651492x466S20800498x483S10659473x509S10632501x519S20800489x524 % friend \\M522x525S11541498x491S11549479x498S20600489x476 % name \\M536x507S38900464x493 % ? \vfil \columnbreak M511x515S1ce20489x485 % 9 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M507x523S15a28494x496S26500493x477 % your \\M546x575S2ff00482x483S18510521x490S18518452x493S26500525x468S26510458x469S15a40507x524S15a48481x525S22a24493x560 % teacher \\M532x518S18049468x483S18041507x483S20500486x507S20500504x507 % have \\M533x569S2ff00482x483S15d10514x508S15d32487x550S20500520x496S22a03501x532 % daughter \\M536x507S38900464x493 % ? \vfil \columnbreak M513x528S2a538494x472S1f540488x504 % 10 \\M536x504S38800464x496 % . \\M543x567S15a37482x526S14c51500x541S22c00520x503S20500512x467S18510518x482S2ff00482x483S20500488x533 % learn \\M523x535S2ea48483x510S10011502x466S2ea04508x500S10019477x475 % sign (as in ``signing'') \\M537x504S38700463x496 % , \\M518x518S30a00482x483 % y/n \\M512x524S10620489x476S22e04495x506 % need \\M512x550S15a37489x527S1f502493x490S20500496x513S22b20494x450 % help \\M510x523S10040495x493S26500491x478 % you \\M536x507S38900464x493 % ? \vfil \end{center} \end{multicols} \subsection{Practice Sheet 4.C} \begin{multicols}{5} \begin{center} M512x520S10000489x490S21d00494x480 % 11 \\M536x504S38800464x496 % . \\M518x518S30c00482x483 % \? \\M510x523S10040495x493S26500491x478 % you \\M560x549S1000a440x466S10052530x452S10002503x534S1005a466x521S22a15474x492S22a03512x492S2fb04492x512 % hurt \\M518x525S10020482x476S27106503x485 % where \\M536x507S38900464x493 % ? \vfil \columnbreak M509x521S10e00491x491S21d00491x480 % 12 \\M536x504S38800464x496 % . \\M563x518S2ff00482x483S19200542x478S26a04519x487S20500504x501 % suppose \\M546x575S2ff00482x483S18510521x490S18518452x493S26500525x468S26510458x469S15a40507x524S15a48481x525S22a24493x560 % teacher \\M529x523S14c50476x492S22520472x478S26606499x505 % fingerspell \\M523x524S26503497x511S21100490x497S15a57477x476S15a51500x481 % slow \\M537x504S38700463x496 % , \\M518x518S30a00482x483 % y/n \\M510x523S10040495x493S26500491x478 % you \\M536x518S2ff00482x483S10000520x471S21c00530x461 % understand \\M515x519S10047485x498S26507501x481 % 3rd person \\M536x507S38900464x493 % ? \vfil \columnbreak M513x519S22114487x481S12d00489x489 % 13 \\M536x504S38800464x496 % . \\M518x518S30c00482x483 % \? \\M510x523S10040495x493S26500491x478 % you \\M534x520S37600489x497S20305506x481S20301475x481S20500524x497S20500467x496 % love \\M518x540S34600482x483S1e111473x512S21800463x502 % who \\M532x535S15a30475x466S15a30512x466S2fb04494x529S2e510468x498S2e508509x498 % here \\M536x507S38900464x493 % ? \vfil \columnbreak M513x515S14700493x493S22114487x486 % 14 \\M536x504S38800464x496 % . \\M518x518S30c00482x483 % \? 
\\M510x523S10040495x493S26500491x478 % you \\M523x556S2fb04492x550S14c08475x498S2ff00482x483S22a04507x534S22a14479x534S14c00500x498 % sad \\M574x535S22a05540x506S15d11520x488S19a37551x508S2ff00482x483 % why \\M536x507S38900464x493 % ? \vfil \columnbreak M513x518S22114487x483S15d00494x491 % 15 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M511x520S1f701490x481S21100495x506 % sorry \\M534x532S10001513x469S10009467x469S2b734507x513S2b745481x513S2fb00496x500 % come \\M527x528S16d10509x508S16d18473x508S2df06507x479S2df1e473x480S2fb00493x473 % class \\M536x507S38900464x493 % ? \vfil \end{center} \end{multicols} \subsection{Practice Sheet 4.D} \begin{multicols}{5} \begin{center} M520x522S18700502x492S2e00e480x479 % 16 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M534x543S14c30507x457S14c38469x458S15030508x512S15038467x511S26524493x493 % want \\M515x524S15a36488x512S20500505x497S22a04492x477S15a41486x498 % stop \\M543x567S15a37482x526S14c51500x541S22c00520x503S20500512x467S18510518x482S2ff00482x483S20500488x533 % learn \\M523x535S2ea48483x510S10011502x466S2ea04508x500S10019477x475 % sign (as in ``signing'') \\M536x507S38900464x493 % ? \vfil \columnbreak M522x522S1a500501x494S2e00e478x478 % 17 \\M536x504S38800464x496 % . \\M509x515S18720491x486 % w \\M510x508S1f720490x493 % a (letter) \\M508x508S20320493x493 % s \\M515x508S11502485x493 % h \\M518x518S30c00482x483 % \? \\M523x535S2ea48483x510S10011502x466S2ea04508x500S10019477x475 % sign (as in ``signing'') \\M536x507S38900464x493 % ? \vfil \columnbreak M523x522S1bb00502x492S2e00e478x479 % 18 \\M536x504S38800464x496 % . \\M518x518S30c00482x483 % \? \\M507x523S15a28494x496S26500493x477 % your \\M529x534S15d39471x511S18011485x501S20f00501x466S26a07508x484 % excuse \\M553x518S2fb04492x512S26c0a538x483S26c12448x488S14c39468x483S14c31506x483 % what \\M536x507S38900464x493 % ? \vfil \columnbreak M524x522S1ce00502x490S2e00e477x479 % 19 \\M536x504S38800464x496 % . \\M541x562S1f710521x510S2ff00482x483S2e230519x530 % aunt \\M510x523S10040495x493S26500491x478 % you \\M537x504S38700463x496 % , \\M518x518S30c00482x483 % \? \\M526x535S22a20494x501S14c08474x465S14c00503x465S20338478x520S20330508x520 % how many \\M536x507S38900464x493 % ? \vfil \columnbreak M517x513S22114484x488S1f420488x498 % 20 \\M536x504S38800464x496 % . \\M518x518S30a00482x483 % y/n \\M510x523S10040495x493S26500491x478 % you \\M534x543S14c30507x457S14c38469x458S15030508x512S15038467x511S26524493x493 % want \\M550x520S27126535x481S37906466x505S15a3a503x499S20500450x497S37906482x491S15a32460x487 % baby \\M536x507S38900464x493 % ? \vfil \end{center} \end{multicols} \subsection{Story 4} \begin{multicols}{5} \begin{center} M513x514S15a01490x486S20500487x503 % my \\M541x562S1f710521x510S2ff00482x483S2e230519x530 % aunt \\M534x543S14c30507x457S14c38469x458S15030508x512S15038467x511S26524493x493 % want \\M535x530S10110506x470S10118482x470S2df08514x509S2df10466x509S20500498x503S2fb00495x521 % divorce (initialized) \\M536x504S38800464x496 % . M541x540S2e230522x508S2ff00482x483S11510523x474 % uncle \\M515x519S10047485x498S26507501x481 % 3rd person \\M534x520S37600489x497S20305506x481S20301475x481S20500524x497S20500467x496 % love \\M521x520S26501480x481S10041500x490 % 3rd person (left) \\M536x504S38800464x496 % . R534x557S2ad03513x495S2ad12476x495S2fb04493x517S14c58467x524S14c50510x526S14c30508x458S14c38470x460S21600479x444S21600516x444 % don't want \\R535x531S10140504x469S10148484x469S20500498x473S28905508x504S2891d466x504S2fb04494x525 % divorce \\M536x504S38800464x496 % . 
M515x519S10047485x498S26507501x481 % 3rd person \\R523x556S2fb04492x550S14c08475x498S2ff00482x483S22a04507x534S22a14479x534S14c00500x498 % sad \\M537x504S38700463x496 % , \\R534x527S14e02503x501S14e0a466x502S28808511x478S28810474x478S2f700493x474S2fb00491x484 % angry \\M537x504S38700463x496 % , \\R593x613S1000a490x547S10052563x537S10002545x598S1005a508x585S22a15516x556S22a03554x556S2fb04534x576S2ff00482x483 % hurt (over heart) \\M536x504S38800464x496 % . R525x526S10018476x477S10018497x496S2882a503x475 % go \\R539x593S15a17509x507S2ff00482x483S20500520x492S15a40512x541S15a48482x539S22104527x544S22104466x544S18048464x576S18040507x578 % bedroom \\R529x528S2c407494x500S10e01472x473 % toss and turn \\R523x579S20e00494x525S31400482x483S10008477x494S22f24488x542S10000508x494S10600508x553S10608476x552 % cry \\M537x504S38700463x496 % , \\R523x579S20e00494x525S31400482x483S10008477x494S22f24488x542S10000508x494S10600508x553S10608476x552 % cry \\M536x504S38800464x496 % . R541x515S10000459x485S26706478x499S10600526x489 % ask (right to far right) \\R527x534S1063a474x478S10651492x466S20800498x483S10659473x509S10632501x519S20800489x524 % friend \\R526x532S15a37503x509S1f502503x471S20500510x495S22b21474x469 % help (far right to right) \\M536x504S38800464x496 % . M527x534S1063a474x478S10651492x466S20800498x483S10659473x509S10632501x519S20800489x524 % friend \\M539x511S10047461x490S26706497x493 % 3rd person (far right) \\M550x518S2ff00482x483S19200529x459S26504519x469S20500520x487 % idea \\M537x504S38700463x496 % , \\M563x518S2ff00482x483S19200542x478S26a04519x487S20500504x501 % suppose \\M541x540S2e230522x508S2ff00482x483S11510523x474 % uncle \\M534x543S14c30507x457S14c38469x458S15030508x512S15038467x511S26524493x493 % want \\M526x529S2ea28492x504S20e00494x485S15a01503x472S15a09474x472 % happy \\M537x504S38700463x496 % , \\M534x543S14c30507x457S14c38469x458S15030508x512S15038467x511S26524493x493 % want \\M515x524S15a36488x512S20500505x497S22a04492x477S15a41486x498 % stop \\M513x531S1c501488x506S20e00491x489S22a00490x470 % feel \\M523x556S2fb04492x550S14c08475x498S2ff00482x483S22a04507x534S22a14479x534S14c00500x498 % sad \\M515x519S10047485x498S26507501x481 % 3rd person \\M512x524S10620489x476S22e04495x506 % need \\L531x519S10008470x482S10020476x489S20500488x487S26602501x496 % meet (right to left) \\M513x520S15a20487x493S26507500x481 % his / hers / its \\M528x560S22a03497x526S17107479x534S16d21473x540S17107512x511S2ff00482x483 % wife \\M511x520S1f701490x481S21100495x506 % apologize \\M513x531S15a10501x504S15a18488x504S2b724494x470 % ask (formal) \\M529x534S15d39471x511S18011485x501S20f00501x466S26a07508x484 % forgive \\M536x504S38800464x496 % . \end{center} \end{multicols} \end{document}
{ "alphanum_fraction": 0.8366966093, "avg_line_length": 37.2391048292, "ext": "tex", "hexsha": "c612c552143ff5a0ee8260e4856778c422881b4a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "59b6ba846857e388341625c9f90b1c43737bcc56", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "dazitzel/SignWritingASLUniversity", "max_forks_repo_path": "src/supplement04.sw.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "59b6ba846857e388341625c9f90b1c43737bcc56", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "dazitzel/SignWritingASLUniversity", "max_issues_repo_path": "src/supplement04.sw.tex", "max_line_length": 324, "max_stars_count": null, "max_stars_repo_head_hexsha": "59b6ba846857e388341625c9f90b1c43737bcc56", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "dazitzel/SignWritingASLUniversity", "max_stars_repo_path": "src/supplement04.sw.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13266, "size": 31616 }
\section{Subsets and powersets}
\section{Derived issues}
While the presented design covers the majority of cases, some scenarios may threaten the system's stability. However, depending on the chosen parent chain and the tuning of the inner protocol, their impact can be limited to an acceptable risk.

\subsection{Forks of the parent}
Whatever happens to the parent is likely to happen to the child as well. The child chain is entirely exposed to forks of the parent chain, and it is hard for the validators to agree on which branch to continue. Most likely, the hyperchain would fork as well. On the other hand, if the validators manage to settle on one branch, it would be technically possible to jump to the other branch if the chosen one becomes less attractive.

\subsection{Attacks on the parent}
The hyperchain can never be more secure than its parent. If somebody succeeds in a 51\% attack on the parent chain, they also take control of the child chain. Therefore, the parent should be chosen with regard to its security.

\subsection{Finalization time}
Since there is no single correct strategy for reacting to forks, the finalization time on a hyperchain cannot be shorter than on the parent chain. If a parent key block gets rolled back, so do all of the leader elections that depended on it.
\documentclass{article} \pdfminorversion=4 \usepackage{tikz} \usetikzlibrary{fit,patterns,decorations.pathreplacing} \usepackage{graphicx} \def\thindivider{\centerline{\tiny%% -- --- --- -- ~~~ --- --- --- ~~~ --- -- --- -- ~~~~~~ --- --- --- ~~~ -- --- -- ~~~~~~ --- --- -- ~~~ --- ~~~ -- -- --- -- ~~~ --- --- ---}} \begin{document} \title{This PDF is a Git Repository\\Containing its Own \LaTeX\ Source\\and a Copy of Itself} \author{Evan Sultanik} \date{April 11, 2017} \maketitle Have you ever heard of the \texttt{git bundle} command? I hadn't. It bundles a set of Git objects---potentially even an entire repository---into a single file. Git allows you to treat that file as if it were a standard Git database, so you can do things like clone a repo directly from it. Its purpose is to easily sneakernet pushes or even whole repositories across air gaps. \thindivider Neighbors, it's possible to create a PDF that is also a Git repository. \begin{center} \begingroup \setbox9=\hbox{\footnotesize\verb|Receiving objects: 100% (174/174), 103.48 KiB | 0 bytes/s, done.|} \begin{minipage}{\wd9} \footnotesize\begin{verbatim} $ git clone PDFGitPolyglot.pdf foo Cloning into 'foo'... Receiving objects: 100% (174/174), 103.48 KiB | 0 bytes/s, done. Resolving deltas: 100% (100/100), done. $ cd foo $ ls PDFGitPolyglot.pdf PDFGitPolyglot.tex \end{verbatim} \end{minipage} \endgroup \end{center} \section{The Git Bundle File Format} \label{sec:GitBundleFormat} The file format for Git bundles doesn't appear to be formally specified anywhere, however, inspecting \texttt{bundle.c} reveals that it's relatively straightforward: \def\returnkey{\hspace*{1pt}{\setbox9=\hbox{$\hookleftarrow$}\tikz\node[text width=\wd9,fill=lightgray,rounded corners=1pt] at (0,0){$\hookleftarrow$};}\hspace*{1pt}} \begin{center} \begin{tikzpicture} \node[text width=0.9\hsize] at (0,0) (sig) {\footnotesize\tt\# v2 git bundle{\tiny\returnkey}}; \node[inner sep=0mm,fit=(sig),draw,rounded corners,dashed,minimum width=0.9\hsize] (sigbox) {}; \node[draw,rounded corners,fill=white] at (sigbox.north) (siglabel) {\footnotesize Git Bundle Signature}; \node[text width=0.9\hsize,anchor=north west,yshift=-0.5mm] at (sig.south west) (b1) {\footnotesize\tt 3aa340a2e3d125ab6703e5c9bdfede2054a9c0c5 refs/heads/master{\tiny\returnkey}}; \node[text width=0.9\hsize,anchor=north west,yshift=-0.5mm] at (b1.south west) (b2) {\footnotesize\tt 3aa340a2e3d125ab6703e5c9bdfede2054a9c0c5 refs/remotes/origin/master{\tiny\returnkey}}; \node[text width=0.9\hsize,anchor=north west,yshift=-0.5mm] at (b2.south west) (b3) {\footnotesize\tt 4146cfe2fe9249fc14623f832587efe197ef5d2d refs/stash{\tiny\returnkey}}; \node[text width=0.9\hsize,anchor=north west,yshift=-0.5mm] at (b3.south west) (b4) {\footnotesize\tt babdda4735ef164b7023be3545860d8b0bae250a HEAD{\tiny\returnkey}}; \node[inner sep=0mm,fit=(b1) (b4),draw,rounded corners,dashed,minimum width=0.9\hsize] (digestbox) {}; \node[rotate=-90,draw,rounded corners,fill=white] at (digestbox.east) {\footnotesize Digest}; \node[text width=0.9\hsize,anchor=north west,yshift=-0.5mm] at (b4.south west) (empty) {{\tiny\returnkey}\footnotesize\ }; \node[text width=0.9\hsize,anchor=north west,yshift=-0.5mm] at (empty.south west) (pack) {{\footnotesize\tt PACK}$\ldots$}; \node[inner sep=0mm,fit=(pack),draw,rounded corners,dashed,minimum width=0.9\hsize] (packbox) {}; \node[draw,rounded corners,fill=white] at (pack.south) {\footnotesize Git Packfile}; \end{tikzpicture} \end{center} Git has another custom format called a \emph{Packfile} that 
it uses to compress the objects in its database, as well as to reduce network bandwidth when pushing and pulling. The packfile is therefore an obvious choice for storing objects inside bundles. This of course raises the question: What is the format for a Git Packfile? Git does have some internal documentation in \begin{center} \texttt{Documentation/technical/pack-format.txt} \end{center} however, it is rather sparse, and does not provide enough detail to fully parse the format. The documentation also has some ``observations'' that suggest it wasn't even written by the file format's creator and instead was written by a developer who was later trying to make sense of the code. Luckily, Aditya Mukerjee already had to reverse engineer the file format for his GitGo clean-room implementation of Git, and he wrote an excellent blog entry about it\footnote{\texttt{https://codewords.recurse.com/issues/three/unpacking-git-packfiles}}. \begin{center} {\small \def\char#1{\,`\texttt{#1}'\,} \setbox9=\hbox{\char{K}} \newdimen\bytewidth\bytewidth=\wd9 \def\byte#1{\hbox to \bytewidth{\hfil\texttt{#1}\hfil}} \def\desc#1#2{\hbox to #1\bytewidth{\hfil #2\hfil}} \def\underbrace#1{\draw [ thick, decoration={ brace, mirror, raise=2pt }, decorate ] ([xshift=1pt]#1.base west) -- ([xshift=-1pt]#1.base east) node (#1label) [pos=0.5,anchor=north,yshift=-2pt]} \begin{tikzpicture} \node[inner sep=0pt,minimum width=4\bytewidth] at (0,0) (pack) {\char{P}\char{A}\char{C}\char{K}}; \node[inner sep=0pt,anchor=base west] at (pack.base east) (version) {\byte{00}\byte{00}\byte{00}\byte{02}}; \node[inner sep=0pt,anchor=base west] at (version.base east) (numobj) {\desc{4}{\# objects}}; \underbrace{pack}{\tiny magic}; \underbrace{version}{\tiny version}; \underbrace{numobj}{\tiny big-endian 4 byte int}; \node[inner sep=0pt,anchor=north west] at (packlabel.south -| pack.west) (chunks) {one data chunk for each object}; \node[inner sep=0pt,anchor=north west] at ([yshift=-0.5\baselineskip]chunks.south west) (sha1) {20-byte SHA-1 of all the previous data in the pack}; \end{tikzpicture}} \end{center} Although not entirely required to understand the polyglot, I think it is useful to describe the git packfile format here, since it is not well documented elsewhere. If that doesn't interest you, it's safe to skip to the next section. But if you do proceed, I hope you like Soviet holes, dear neighbor, because chasing this rabbit might remind you of \raisebox{-0.5pt}{\includegraphics{kolskaya}}. \begin{center} \includegraphics[width=0.33\hsize]{RazvodityeKrolikov_small} \end{center} Right, the next step is to figure out the ``chunk'' format. The chunk header is variable length, and can be as small as one byte. It encodes the object's type and its \emph{uncompressed} size. If the object is a \textit{delta} (\textit{i.e.}, a diff, as opposed to a complete object), the header is followed by either the SHA-1 hash of the base object to which the delta should be applied, or a byte reference within the packfile for the start of the base object. The remainder of the chunk consists of the object data, zlib-compressed. 
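Before descending further, it may help to see those two headers handled in
code. The following is a rough Python sketch of a reader for the layout just
described; the function name and the total absence of error handling are
purely illustrative.

\begin{center}
\begingroup
\setbox9=\hbox{\footnotesize\verb|        magic, version, n_objects = struct.unpack(">4sII", f.read(12))|}
\begin{minipage}{\wd9}
\footnotesize\begin{verbatim}
import struct

def read_bundle_headers(path):
    with open(path, "rb") as f:
        # signature line
        assert f.readline().rstrip() == b"# v2 git bundle"
        # digest: one "<sha-1> <ref>" line per ref, then a blank line
        refs = [line.rstrip().split(b" ", 1)
                for line in iter(f.readline, b"\n")]
        # packfile header: magic, version, big-endian object count
        magic, version, n_objects = struct.unpack(">4sII", f.read(12))
        assert magic == b"PACK" and version == 2
        return refs, n_objects
\end{verbatim}
\end{minipage}
\endgroup
\end{center}

Everything after those twelve bytes is object data: one chunk per object,
followed by the pack's trailing 20-byte SHA-1.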
This is the format of the variable length chunk header: \begin{center} {\newcount\bitnum\bitnum=0 \begin{tikzpicture} \node[coordinate] at (0,0) (b0) {}; \foreach \i in {1,0,1,1,0,1,0,0,1,1,0,1,0,1,1,0,0,1,0,0,1,0,0,1} { {\newcount\prevbit\prevbit=\bitnum \global\advance\bitnum by 1 \node[anchor=west] at (b\the\prevbit.east) (b\the\bitnum) {\i};} \draw ([xshift=1pt]b\the\bitnum.south west) -- ([xshift=-1pt]b\the\bitnum.south east); } \draw [ thick, decoration={ brace, mirror, raise=-2pt }, decorate ] ([xshift=-1pt]b8.north east) -- ([xshift=1pt]b1.north west) node [pos=0.5,anchor=south,yshift=-2pt] {\footnotesize first byte}; \draw [ thick, decoration={ brace, mirror, raise=-2pt }, decorate ] ([xshift=-1pt]b16.north east) -- ([xshift=1pt]b9.north west) node [pos=0.5,anchor=south,yshift=-2pt] {\footnotesize second byte}; \draw [ thick, decoration={ brace, mirror, raise=-2pt }, decorate ] ([xshift=-1pt]b24.north east) -- ([xshift=1pt]b17.north west) node [pos=0.5,anchor=south,yshift=-2pt] {\footnotesize third byte}; \draw [ thick, decoration={ brace, mirror, raise=2pt }, decorate ] ([xshift=1pt]b2.south west) -- ([xshift=-1pt]b4.south east) node [pos=0.5,yshift=-2pt,anchor=north] (type) {\footnotesize object type}; \draw[<-] ([yshift=-2pt]b1.south) -- (type.south -| b1) node[anchor=north,align=center,font=\footnotesize] {if the MSB is one,\\then this is not\\the last byte}; \draw [ thick, decoration={ brace, mirror, raise=2pt }, decorate ] ([xshift=1pt]b5.south west) -- ([xshift=-1pt]b8.south east) node [pos=0.5,yshift=-2pt,anchor=north,align=center,font=\footnotesize] (length) {first four\\bits of\\the length\\(big-endian)}; \draw[<-] ([yshift=-2pt]b9.south) -- (length.south -| b9) node[anchor=north,align=center,font=\footnotesize] {MSB is one,\\so this is \emph{not} the last byte}; \draw [ thick, decoration={ brace, mirror, raise=2pt }, decorate ] ([xshift=1pt]b10.south west) -- ([xshift=-1pt]b16.south east) node [pos=0.5,yshift=-2pt,anchor=north,align=center,font=\footnotesize] (next) {the next seven\\bits of the length\\(big-endian)}; \draw[<-] ([yshift=-2pt]b17.south) -- (next.south -| b17) node[anchor=north,align=center,font=\footnotesize] {MSB is zero,\\so this \emph{is} the last byte}; \draw [ thick, decoration={ brace, mirror, raise=2pt }, decorate ] ([xshift=1pt]b18.south west) -- ([xshift=-1pt]b24.south east) node [pos=0.5,yshift=-2pt,anchor=north,align=center,font=\footnotesize] {the next seven\\bits of the length\\(big-endian)}; \end{tikzpicture}} \end{center} The second through fourth most significant bits of the first byte are used to store the object type. The remainder of the bytes in the header are of the same format as bytes two and three in this example. This example header represents an object of type $11_2$, which happens to be a git blob, and an \emph{uncompressed} length of $(100_2\,\verb|<<|\,14) + (1010110_2\,\verb|<<|\,7) + 1001001_2 = 76$,617 bytes. Since this is not a delta object, it is immediately followed by the zlib-compressed object data. The header does not encode the \emph{compressed} size of the object, since the DEFLATE encoding can determine the end of the object as it is being decompressed. At this point, if you found The Life and Opinions of Tristram Shandy to be boring or frustrating, then it's probably best to skip to the next section, 'cause it's turtles all the way down. 
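For reference, here is the same decoding spelled out in Python. It does
nothing beyond walking the byte layout pictured above, and the function name
is invented for this sketch.

\begin{center}
\begingroup
\setbox9=\hbox{\footnotesize\verb|    return obj_type, length, i   # i = size of the header in bytes|}
\begin{minipage}{\wd9}
\footnotesize\begin{verbatim}
def decode_chunk_header(data):
    # type sits in bits 2-4 of the first byte; the length bits are
    # concatenated exactly as in the worked example above
    obj_type = (data[0] >> 4) & 0b0111
    length = data[0] & 0b1111
    more = data[0] & 0b10000000
    i = 1
    while more:
        length = (length << 7) + (data[i] & 0b01111111)
        more = data[i] & 0b10000000
        i += 1
    return obj_type, length, i   # i = size of the header in bytes

decode_chunk_header(bytes([0b10110100, 0b11010110, 0b01001001]))
# returns (0b011, 76617, 3), matching the example above
\end{verbatim}
\end{minipage}
\endgroup
\end{center}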
{\fontfamily{jkpvos}\selectfont\begin{center} \begin{tikzpicture} \node at (0,0) (text) {\begin{minipage}{0.8\hsize}\small To come at the exact weight of things= in the scientific steel-yard, the fulchrum, [Walter Shandy] would say, should be almost invisible, to avoid all friction from popular tenets=;---without this= the minuti\ae\ of philosophy, which should always= turn the balance, will have no weight at all. Knowledge, like matter, he would affirm, was= divisible in infinitum;---that the grains= and scruples= were as= much a part of it, as= the gravitation of the whole world. \end{minipage}}; \node[anchor=north east] at (text.north west) {\LARGE``}; \node[anchor=south west] at (text.south east) {\LARGE''}; \end{tikzpicture} \end{center}} There are two types of delta objects: \emph{references}~(object type~7) and \emph{offsets}~(object type~6). Reference delta objects contain an additional 20~bytes at the end of the header before the zlib-compressed delta data. These 20~bytes contain the SHA-1 hash of the base object to which the delta should be applied. Offset delta objects are exactly the same, however, instead of referencing the base object by its SHA-1 hash, it is instead represented by a negative byte offset to the start of the object within the pack file. Since a negative byte offset can typically be encoded in two or three bytes, it's significantly smaller than a 20-byte SHA-1 hash. One must understand how these offset delta objects are encoded if---say, for some strange, masochistic reason---one wanted to change the order of objects within a packfile, since doing so would break the negative offsets. (Foreshadowing!) One would \emph{think} that git would use the same multi-byte length encoding that they used for the uncompressed object length. But no! This is what we have to go off of from the git documentation: \begin{center} \begingroup \setbox9=\hbox{\footnotesize\verb|for n >= 2 adding 2^7 + 2^14 + ... + 2^(7*(n-1))|} \begin{minipage}{\wd9} \footnotesize\begin{verbatim} n bytes with MSB set in all but the last one. The offset is then the number constructed by concatenating the lower 7 bit of each byte, and for n >= 2 adding 2^7 + 2^14 + ... + 2^(7*(n-1)) to the result. \end{verbatim} \end{minipage} \endgroup \end{center} Right. Some experimenting resulted in the following decoding logic that appears to work: \begin{center} \begingroup \setbox9=\hbox{\footnotesize\verb| reference += (1 << (7 * (bytes_read - 1)))|} \begin{minipage}{\wd9} \footnotesize {\color{blue}\verb|def|}\verb| decode_obj_ref|{\color{gray}\verb|(|}\verb|data|{\color{gray}\verb|):|}\\ \verb| bytes_read = 0|\\ \verb| reference = 0|\\ \verb| |{\color{blue}\verb|for|}\verb| c |{\color{blue}\verb|in map|}{\color{gray}\verb|(|}\verb|ord, data|{\color{gray}\verb|):|}\\ \verb| bytes_read += 1|\\ \verb| reference <<= 7|\\ \verb| reference += c & 0b01111111|\\ \verb| |{\color{blue}\verb|if not|}\verb| |{\color{gray}\verb|(|}\verb|c & 0b10000000|{\color{gray}\verb|):|}\\ {\color{blue}\verb| break|}\\ {\color{blue}\verb| if|}\verb| bytes_read >= 2|{\color{gray}\verb|:|}\\ \verb| reference += |{\color{gray}\verb|(|}\verb|1 << |{\color{gray}\verb|(|}\verb|7 * |{\color{gray}\verb|(|}\verb|bytes_read - 1|{\color{gray}\verb|)))|}\\ {\color{blue}\verb| return|}\verb| reference, bytes_read|\\ \end{minipage} \endgroup \end{center} The rabbit hole is deeper still; we haven't yet discovered the content of the compressed delta objects, let alone how they are applied to base objects. 
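As a quick sanity check of that $n \geq 2$ correction, here is what the
function returns for a two-byte offset invented purely for this example (it
is not taken from any real packfile):

\begin{center}
\begingroup
\setbox9=\hbox{\footnotesize\verb|(2350, 2)              # (0x11 << 7) + 0x2e, plus 2**7|}
\begin{minipage}{\wd9}
\footnotesize\begin{verbatim}
>>> decode_obj_ref("\x91\x2e")
(2350, 2)              # (0x11 << 7) + 0x2e, plus 2**7
\end{verbatim}
\end{minipage}
\endgroup
\end{center}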
At this point, we have more than sufficient knowledge to proceed with the PoC, and my canary died ages ago. Aditya Mukerjee did a good job of explaining the process of applying deltas in his blog post, so I will stop here and proceed with the polyglot. \section{A Minimal Polyglot PoC} We now know that a git bundle is really just a git packfile with an additional header, and a git packfile stores individual objects using zlib, which uses the DEFLATE compression algorithm. DEFLATE supports zero compression, so if we can store the PDF in a single object (as opposed to it being split into deltas), then we could theoretically coerce it to be intact within a valid git bundle. Forcing the PDF into a single object is easy: We just need to add it to the repo last, immediately before generating the bundle. Getting the object to be compressed with zero compression is also relatively easy. That's because git was built in almost religious adherence to The UNIX Philosophy: It is architected with hundreds of sub commands it calls ``plumbing,'' of which the vast majority you will likely have never heard. For example, you might be aware that \texttt{git pull} is equivalent to a \texttt{git fetch} followed by a \texttt{git merge}. In fact, the \texttt{pull} code actually spawns a new \texttt{git} child process to execute each of those subcommands. Likewise, the \texttt{git bundle} command spawns a \texttt{git pack-objects} child process to generate the packfile portion of the bundle. All we need to do is inject the \texttt{-\relax-compression=0} argument into the list of command line arguments passed to \texttt{pack-objects}. This is a one-line addition to \texttt{bundle.c}:\\ {\footnotesize {\color{gray} \verb| argv_array_pushl(&pack_objects.args,|\\ \verb| "pack-objects", "--all-progress-implied",|}\\ \verb| "--compression=0",|\\ {\color{gray} \verb| "--stdout", "--thin", "--delta-base-offset",|\\ \verb| NULL);| }} Using our patched version of git, every object stored in the bundle will be uncompressed! \begin{center} \begingroup \setbox9=\hbox{\footnotesize\verb|$ git bundle create PDFGitPolyglot.pdf --all|} \begin{minipage}{\wd9} \footnotesize\begin{verbatim} $ export PATH=/path/to/patched/git:$PATH $ git init $ git add article.pdf $ git commit article.pdf -m "added" $ git bundle create PDFGitPolyglot.pdf --all \end{verbatim} \end{minipage} \endgroup \end{center} Any vanilla, un-patched version of git will be able to clone a repo from the bundle. It will also be a valid PDF, since virtually all PDF readers ignore garbage bytes before and after the PDF. \section{Generalizing the PoC} There are, of course, several limitations to the minimal PoC given in the previous section: \begin{enumerate} \item Adobe, being Adobe, will refuse to open the polyglot unless the PDF is version 1.4 or earlier. I guess it doesn't like some element of the git bundle signature or digest if it's PDF~1.5. Why? Because Adobe, that's why. \item Leaving the entire Git bundle uncompressed is wasteful if the repo contains other files; really, we only need the PDF to be uncompressed. \item If the PDF is larger than 65,535 bytes---the maximum size of an uncompressed DEFLATE block---then git will inject 5-byte deflate block headers inside the PDF, likely corrupting it. \item Adobe will also refuse to open the polyglot unless the PDF is near the beginning of the packfile\footnote{Requiring the PDF header to start near the beginning of a file is common for many, but not all, PDF viewers.}. 
\end{enumerate} The first limitation is easy to fix by instructing \LaTeX\ to produce a version 1.4~PDF by adding \texttt{\textbackslash pdfminorversion=4} to the document. The second limitation is a simple matter of software engineering, adding a command line argument to the \texttt{git bundle} command that accepts the hash of the single file to leave uncompressed, and passing that hash to \texttt{git pack-objects}. I have created a fork of git with this feature, available here: \begin{center} \texttt{https://github.com/ESultanik/git/tree/UncompressedPack} \end{center} As an aside, while fixing the second limitation I discovered that if a file has multiple PDFs concatenated after one another (\textit{i.e.},~a git bundle polyglot with multiple uncompressed PDFs in the repo), then the behavior is viewer-dependent: Some viewers will render the first PDF, while others will render the last. That's a fun way to generate a PDF that displays completely different content in, say, macOS~Preview versus Adobe. The third limitation is very tricky, and ultimately why this polyglot was not used for the PDF of a digital issue of PoC$\|$GTFO. I've a solution, but it will not work if the PDF contains any objects (\textit{e.g.}, images) that are larger than 65,535 bytes. A universal solution would be to break up the image into smaller ones and tile it back together, but that is not feasible for a document the size of a PoC$\|$GTFO issue. DEFLATE headers for uncompressed blocks are very simple: The first byte encodes whether the following block is the last in the file, the next two bytes encode the block length, and the last two bytes are the ones' complement of the length. Therefore, to resolve this issue, all we need to do is move all of the DEFLATE headers that zlib created to different positions that won't corrupt the PDF, and update their lengths accordingly. Where can we put a 5-byte DEFLATE header such that it won't corrupt the PDF? We could use our standard trick of putting it in a PDF object stream that we've exploited countless times before to enable PoC$\|$GTFO polyglots. The trouble with that is: Object streams are fixed-length, so once the PDF is decompressed (\textit{i.e.},~when a repo is cloned from the git bundle), then all of the 5-byte DEFLATE headers will disappear and the object stream lengths would all be incorrect. Instead, I chose to use PDF comments, which start at any occurrence of the percent sign character~(\%) outside a string or stream and continue until the first occurrence of a newline. All of the PDF viewers I tested don't seem to care if comments include non-ASCII characters; they seem to simply scan for a newline. Therefore, we can inject ``\texttt{\%\textbackslash n}'' between PDF objects and move the DEFLATE headers there. The only caveat is that the DEFLATE header itself can't contain a newline byte~(\texttt{0x0A}), otherwise the comment would be ended prematurely. We can resolve that, if needed, by adding extra spaces to the end of the comment, increasing the length of the following DEFLATE block and thus increasing the length bytes in the DEFLATE header and avoiding the \texttt{0x0A}. The only concession made with this approach is that PDF Xref offsets in the deflated version of the PDF will be off by a multiple of 5, due to the removed DEFLATE headers. Fortunately, most PDF readers can gracefully handle incorrect Xref offsets (at the expense of a slower loading time), and this will only affect the PDF contained in the repository, \emph{not} the PDF polyglot. 
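Since we will be picking those five-byte headers up and carrying them
elsewhere, it is worth spelling out exactly what we are carrying. Here is a
sketch, assuming the stored block starts on a byte boundary (which is what
makes the header exactly five bytes) and noting that DEFLATE stores LEN
least-significant byte first; the function names are only for illustration.

\begin{center}
\begingroup
\setbox9=\hbox{\footnotesize\verb|    return bytes([int(final)]) + struct.pack("<HH", block_len, nlen)|}
\begin{minipage}{\wd9}
\footnotesize\begin{verbatim}
import struct

def stored_block_header(block_len, final=False):
    # flag byte ("is this the last block?"; BTYPE=00 and the
    # padding bits are all zero), then LEN and ~LEN
    assert block_len <= 0xFFFF
    nlen = block_len ^ 0xFFFF
    return bytes([int(final)]) + struct.pack("<HH", block_len, nlen)

def fits_in_pdf_comment(header):
    # a relocated header must not contain a newline (0x0a), or it
    # would end the "%" comment hiding it; pad the block with
    # spaces to nudge the length bytes when it does
    return b"\n" not in header
\end{verbatim}
\end{minipage}
\endgroup
\end{center}

Moving a header is then just a matter of deleting those five bytes where zlib
left them and re-emitting them, with an adjusted LEN, inside a \texttt{\%}
comment.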
As a final step, we need to update the SHA-1 sum at the end of the packfile~(\textit{q.v.}~Section~\ref{sec:GitBundleFormat}), since we moved the locations of the DEFLATE headers, thus affecting the hash. At this point, we have all the tools necessary to create a generalized PDF/Git Bundle polyglot for \emph{almost} any PDF and git repository. The only remaining hurdle is that some viewers require that the PDF occur as early in the packfile as possible. At first, I considered applying another patch directly to the git source code to make the uncompressed object first in the packfile. This approach proved to be very involved, in part due to git's UNIX design philosophy and architecture of generic code reuse. We're already updating the packfile's SHA-1 hash due to changing the DEFLATE headers, so instead I decided to simply reorder the objects after-the-fact, subsequent to the DEFLATE header fix but before we update the hash. The only challenge is that moving objects in the packfile has the potential to break offset delta objects, since they refer to their base objects via a byte offset within the packfile. Moving the PDF to the beginning will break any offset delta objects that occur after the original position of the PDF that refer to base objects that occur before the original position of the PDF. I originally attempted to rewrite the broken offset delta objects, which is why I had to dive deeper into the rabbit hole of the packfile format to understand the delta object headers (as you saw at the end of Section~\ref{sec:GitBundleFormat}, if you were brave enough to finish it). Rewriting the broken offset delta objects is the \emph{correct} solution, but, in the end, I discovered a much simpler way. \begin{center} \begin{tikzpicture} \node at (0,0) (text) {\begin{minipage}{0.8\hsize}\small As a matter of fact, G-d just questioned my judgment. He said, `Terry, are you worthy to be the man who makes The Temple? If you are, you must answer: Is this [dastardly], or is this divine intellect?' \end{minipage}}; \node[anchor=north east] at (text.north west) {\LARGE``}; \node[anchor=south west] at (text.south east) {\LARGE''}; \node[anchor=north east] at (text.south east) (terry) {\setbox9=\hbox{\llap{---}Terry A.\ Davis, creator of TempleOS}\parbox{\wd9}{\raggedright\box9 self-proclaimed ``smartest programmer that's ever lived''}}; \end{tikzpicture} \end{center} Terry's not the only one who's written a compiler! In the previous section, recall that we created the minimal PoC by patching the command line arguments to \texttt{pack-objects}. One of the command line arguments that is already passed by default is \texttt{-{}-delta-base-offset}. Running \texttt{git help pack-objects} reveals the following: \begin{center} {\footnotesize\setbox9\hbox{\verb|A packed archive can express the base object of a delta as either a|} \begin{minipage}{\wd9} \begin{verbatim} A packed archive can express the base object of a delta as either a 20-byte object name or as an offset in the stream, but ancient versions of Git don't understand the latter. By default, git pack-objects only uses the former format for better compatibility. This option allows the command to use the latter format for compactness. Depending on the average delta chain length, this option typically shrinks the resulting packfile by 3-5 per-cent. \end{verbatim} \end{minipage}} \end{center} So all we need to do is \emph{remove} the \texttt{-{}-delta-base-offset} argument and git will not include any offset delta objects in the pack! 
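As for the trailing checksum, that fix-up is mechanical. A sketch, where
\texttt{pack\_bytes} is assumed to hold just the packfile portion of the
bundle, from \texttt{PACK} through the stale checksum:

\begin{center}
\begingroup
\setbox9=\hbox{\footnotesize\verb|    # the last 20 bytes must be the SHA-1 of everything before them|}
\begin{minipage}{\wd9}
\footnotesize\begin{verbatim}
import hashlib

def refresh_pack_checksum(pack_bytes):
    # the last 20 bytes must be the SHA-1 of everything before them
    body = pack_bytes[:-20]
    return body + hashlib.sha1(body).digest()
\end{verbatim}
\end{minipage}
\endgroup
\end{center}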
\thindivider Okay, I have to admit something: There is one more challenge. You see, the PDF standard~(ISO~32000-1) says \begin{center} \begin{tikzpicture} \node at (0,0) (text) {\begin{minipage}{0.8\hsize}\small The \emph{trailer} of a PDF file enables a conforming reader to quickly find the cross-reference table and certain special objects. Conforming readers should read a PDF file from its end. The last line of the file shall contain only the end-of-file marker, \texttt{\%\%EOF}. \end{minipage}}; \node[anchor=north east] at (text.north west) {\LARGE``}; \node[anchor=south west] at (text.south east) {\LARGE''}; \end{tikzpicture} \end{center} Granted, we are producing a PDF that conforms to version~1.4 of the specification, which doesn't appear to have that requirement. However, at least as early as version 1.3, the specification did have an implementation note that Acrobat requires the \texttt{\%\%EOF} to be within the last 1024 bytes of the file. Either way, that's not guaranteed to be the case for us, especially since we are moving the PDF to be at the beginning of the packfile. There are always going to be at least 20 trailing bytes after the PDF's \texttt{\%\%EOF} (namely the packfile's final SHA-1 checksum), and if the git repository is large, there are likely to be more than 1024 bytes. Fortunately, most common PDF readers don't seem to care how many trailing bytes there are, at least when the PDF is version~1.4. Unfortunately, some readers such as Adobe's try to be ``helpful,'' silently ``fixing'' the problem and offering to save the fixed version upon exit. We can at least partially fix the PDF, ensuring that the \texttt{\%\%EOF} is exactly 20~bytes from the end of the file, by creating a second uncompressed git object as the very end of the packfile (right before the final 20~byte SHA-1 checksum). We could then move the trailer from the end of the original PDF at the start of the pack to the new git object at the end of the pack. Finally, we could encapsulate the ``middle'' objects of the packfile inside a PDF stream object, such that they are ignored by the PDF. The tricky part is that we would have to know how many bytes will be in that stream \emph{before} we add the PDF to the git database. That's theoretically possible to do \textit{a priori}, but it'd be very labor intensive to pull off. Furthermore, using this approach will completely break the inner PDF that is produced by cloning the repository, since its trailer will then be in a separate file. Therefore, I chose to live with Adobe's helpfulness and not pursue this fix for the PoC. \thindivider This PDF is a git bundle containing its \LaTeX\ source, as well as all of the code necessary to regenerate this polyglot. Clone it to take a look at the history of this document and its associated code! The code is also hosted on GitHub\footnote{\texttt{https://github.com/ESultanik/PDFGitPolyglot}}. 
{\fontfamily{jkpvos}\selectfont\begin{center} \begin{minipage}{0.8\hsize}\small Thus=---thus=, my fellow-neighbours= and associates= in this= great harvest of our learning, now ripening before our eyes=; thus= it is=, by slow steps= of casual increase, that our knowledge physical, metaphysical, physiological, polemical, nautical, mathematical, \ae{}nigmatical, technical, biographical, romantical, chemical, obstetrical, and polyglottical, with fifty other branches= of it, (most of 'em ending as= these do, in ical) have for these four last centuries= and more, gradually been creeping upwards= towards= that Akme of their perfections=, from which, if we may form a conjecture from the advances= of these last \thepage~pages=, we cannot possibly be far off. \end{minipage} \end{center}} \section{License} Copyright \textcopyright\ 2017 Evan A.~Sultanik Permission is hereby granted, free of charge, to any person obtaining a copy of this document and associated source files (the ``Software''), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice, this permission notice, and the entire contents and history of its associated git repository shall be included in all copies or substantial portions of the Software. \textsc{The Software is provided ``as is'', without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the software.} \end{document}
\section{Concurrent process calculi and spatial logics }\label{sec:concurrent_process_calculi_and_spatial_logics_} % (fold) In the last thirty years the process calculi have matured into a remarkably powerful analytic tool for reasoning about concurrent and distributed systems. Process-calculus-based algebraical specification of processes began with Milner's Calculus for Communicating Systems (CCS) \cite{MilnerCCS80} and Hoare's Communicating Sequential Processes (CSP) \cite{CSP} \cite{CSP1} \cite{CSP2} \cite{CSP3}, and continue through the development of the so-called mobile process calculi, e.g. Milner, Parrow and Walker's $\pi$-calculus \cite{ParrowWalker}, Cardelli and Caires's spatial logic \cite{CairesC04} \cite{CairesC03} \cite{Caires04}, or Meredith and Radestock's reflective calculi \cite{MeredithR05} \cite{meredith2005rho}. The process-calculus-based algebraical specification of processes has expanded its scope of applicability to include the specification, analysis, simulation and execution of processes in domains such as: \begin{itemize} \item telecommunications, networking, security and application level protocols \cite{AbadiB02} \cite{AbadiB03} \cite{BrownLM05} \cite{LaneveZ05}; \item programming language semantics and design \cite{BrownLM05} \cite{djoin} \cite{JoCaml} \cite{WojciechowskiS99}; \item webservices \cite{BrownLM05} \cite{LaneveZ05} \cite{MeredithB03}; \item and biological systems \cite{Cardelli04} \cite{DanosL03} \cite{RegevS03} \cite{PriamiRSS01}. \end{itemize} Among the many reasons for the continued success of this approach are two central points. First, the process algebras provide a compositional approach to the specification, analysis and execution of concurrent and distributed systems. Owing to Milner's original insights into computation as interaction \cite{Milner93}, the process calculi are so organized that the behavior ---the semantics--- of a system may be composed from the behavior of its components \cite{Fokkink}. This means that specifications can be constructed in terms of components ---without a global view of the system--- and assembled into increasingly complete descriptions. The second central point is that process algebras have a potent proof principle, yielding a wide range of effective and novel proof techniques \cite{MilnerS92} \cite{SangiorgiWalker} \cite{Sangiorgi95} \cite{hop}. In particular, \emph{bisimulation} encapsulates an effective notion of process equivalence that has been used in applications as far-ranging as algorithmic games semantics \cite{Abramsky2005Algorithmic-Gam} and the construction of model-checkers \cite{Caires04}. The essential notion can be stated in an intuitively recursive formulation: a \emph{bisimulation} between two processes $P$ and $Q$ is an equivalence relation $E$ relating $P$ and $Q$ such that: whatever action of $P$ can be observed, taking it to a new state $P'$, can be observed of $Q$, taking it to a new state $Q'$, such that $P'$ is related to $Q'$ by $E$ and vice versa. $P$ and $Q$ are \emph{bisimilar} if there is some bisimulation relating them. Part of what makes this notion so robust and widely applicable is that it is parameterized in the actions observable of processes $P$ and $Q$, thus providing a framework for a broad range of equivalences and up-to techniques \cite{milner92techniques} all governed by the same core principle \cite{SangiorgiWalker} \cite{Sangiorgi95} \cite{hop}. % section concurrent_process_calculi_and_spatial_logics_ (end)
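Stated symbolically, writing $P \stackrel{\alpha}{\longrightarrow} P'$ when $P$ can perform the observable action $\alpha$ and evolve to $P'$, an equivalence relation $\mathcal{E}$ is a bisimulation when $P \mathrel{\mathcal{E}} Q$ implies, for every action $\alpha$:
\begin{itemize}
	\item if $P \stackrel{\alpha}{\longrightarrow} P'$, then $Q \stackrel{\alpha}{\longrightarrow} Q'$ for some $Q'$ with $P' \mathrel{\mathcal{E}} Q'$; and
	\item if $Q \stackrel{\alpha}{\longrightarrow} Q'$, then $P \stackrel{\alpha}{\longrightarrow} P'$ for some $P'$ with $P' \mathrel{\mathcal{E}} Q'$.
\end{itemize}
$P$ and $Q$ are bisimilar precisely when some bisimulation relates them.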
\documentclass[journal,12pt,twocolumn]{IEEEtran} % \usepackage{setspace} \usepackage{gensymb} \usepackage{xcolor} \usepackage{caption} %\usepackage{subcaption} %\doublespacing \singlespacing %\usepackage{graphicx} %\usepackage{amssymb} %\usepackage{relsize} \usepackage[cmex10]{amsmath} \usepackage{mathtools} %\usepackage{amsthm} %\interdisplaylinepenalty=2500 %\savesymbol{iint} %\usepackage{txfonts} %\restoresymbol{TXF}{iint} %\usepackage{wasysym} \usepackage{amsthm} \usepackage{mathrsfs} \usepackage{txfonts} \usepackage{stfloats} \usepackage{cite} \usepackage{cases} \usepackage{subfig} %\usepackage{xtab} \usepackage{longtable} \usepackage{multirow} %\usepackage{algorithm} %\usepackage{algpseudocode} \usepackage{enumitem} \usepackage{mathtools} %\usepackage{eenrc} %\usepackage[framemethod=tikz]{mdframed} \usepackage{hyperref} \usepackage{listings} \usepackage[latin1]{inputenc} %% \usepackage{color} %% \usepackage{array} %% \usepackage{longtable} %% \usepackage{calc} %% \usepackage{multirow} %% \usepackage{hhline} %% \usepackage{ifthen} %% %optionally (for landscape tables embedded in another document): %% \usepackage{lscape} \usepackage{tikz} \usepackage{circuitikz} \usepackage{karnaugh-map} \usepackage{pgf} \usepackage{url} \def\UrlBreaks{\do\/\do-} %\usepackage{stmaryrd} %\usepackage{wasysym} %\newcounter{MYtempeqncnt} \DeclareMathOperator*{\Res}{Res} %\renewcommand{\baselinestretch}{2} \renewcommand\thesection{\arabic{section}} \renewcommand\thesubsection{\thesection.\arabic{subsection}} \renewcommand\thesubsubsection{\thesubsection.\arabic{subsubsection}} \renewcommand\thesectiondis{\arabic{section}} \renewcommand\thesubsectiondis{\thesectiondis.\arabic{subsection}} \renewcommand\thesubsubsectiondis{\thesubsectiondis.\arabic{subsubsection}} % correct bad hyphenation here \hyphenation{op-tical net-works semi-conduc-tor} %\lstset{ %language=C, %frame=single, %breaklines=true %} %\lstset{ %%basicstyle=\small\ttfamily\bfseries, %%numberstyle=\small\ttfamily, %language=Octave, %backgroundcolor=\color{white}, %%frame=single, %%keywordstyle=\bfseries, %%breaklines=true, %%showstringspaces=false, %%xleftmargin=-10mm, %%aboveskip=-1mm, %%belowskip=0mm %} %\surroundwithmdframed[width=\columnwidth]{lstlisting} \def\inputGnumericTable{} %% \lstset{ %language=C, frame=single, breaklines=true, columns=fullflexible } \begin{document} % \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{problem}{Problem} \newtheorem{proposition}{Proposition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}{Example}[section] \newtheorem{definition}{Definition}[section] %\newtheorem{algorithm}{Algorithm}[section] %\newtheorem{cor}{Corollary} \newcommand{\BEQA}{\begin{eqnarray}} \newcommand{\EEQA}{\end{eqnarray}} \newcommand{\define}{\stackrel{\triangle}{=}} \bibliographystyle{IEEEtran} %\bibliographystyle{ieeetr} \providecommand{\nCr}[2]{\,^{#1}C_{#2}} % nCr \providecommand{\nPr}[2]{\,^{#1}P_{#2}} % nPr \providecommand{\mbf}{\mathbf} \providecommand{\pr}[1]{\ensuremath{\Pr\left(#1\right)}} \providecommand{\qfunc}[1]{\ensuremath{Q\left(#1\right)}} \providecommand{\sbrak}[1]{\ensuremath{{}\left[#1\right]}} \providecommand{\lsbrak}[1]{\ensuremath{{}\left[#1\right.}} \providecommand{\rsbrak}[1]{\ensuremath{{}\left.#1\right]}} \providecommand{\brak}[1]{\ensuremath{\left(#1\right)}} \providecommand{\lbrak}[1]{\ensuremath{\left(#1\right.}} \providecommand{\rbrak}[1]{\ensuremath{\left.#1\right)}} 
\providecommand{\cbrak}[1]{\ensuremath{\left\{#1\right\}}} \providecommand{\lcbrak}[1]{\ensuremath{\left\{#1\right.}} \providecommand{\rcbrak}[1]{\ensuremath{\left.#1\right\}}} %\providecommand{\ceil}[1]{\left \lceil #1 \right \rceil } \theoremstyle{remark} \newtheorem{rem}{Remark} \newcommand{\sgn}{\mathop{\mathrm{sgn}}} %\providecommand{\abs}[1]{\left\vert#1\right\vert} \providecommand{\res}[1]{\Res\displaylimits_{#1}} \providecommand{\norm}[1]{\lVert#1\rVert} \providecommand{\mtx}[1]{\mathbf{#1}} %\providecommand{\mean}[1]{E\left[ #1 \right]} \providecommand{\fourier}{\overset{\mathcal{F}}{ \rightleftharpoons}} %\providecommand{\hilbert}{\overset{\mathcal{H}}{ \rightleftharpoons}} \providecommand{\system}{\overset{\mathcal{H}}{ \longleftrightarrow}} %\newcommand{\solution}[2]{\textbf{Solution:}{#1}} \newcommand{\solution}{\noindent \textbf{Solution: }} \providecommand{\dec}[2]{\ensuremath{\overset{#1}{\underset{#2}{\gtrless}}}} %\numberwithin{equation}{subsection} \numberwithin{equation}{problem} %\numberwithin{problem}{subsection} %\numberwithin{definition}{subsection} \makeatletter \@addtoreset{figure}{problem} \makeatother \let\StandardTheFigure\thefigure %\renewcommand{\thefigure}{\theproblem.\arabic{figure}} \renewcommand{\thefigure}{\theproblem} %\numberwithin{figure}{subsection} %\numberwithin{equation}{subsection} %\numberwithin{equation}{section} %%\numberwithin{equation}{problem} %%\numberwithin{problem}{subsection} \numberwithin{problem}{section} %%\numberwithin{definition}{subsection} %\makeatletter %\@addtoreset{figure}{problem} %\makeatother \makeatletter \@addtoreset{table}{problem} \makeatother \let\StandardTheFigure\thefigure \let\StandardTheTable\thetable %%\renewcommand{\thefigure}{\theproblem.\arabic{figure}} %\renewcommand{\thefigure}{\theproblem} \renewcommand{\thetable}{\theproblem} %%\numberwithin{figure}{section} %%\numberwithin{figure}{subsection} \def\putbox#1#2#3{\makebox[0in][l]{\makebox[#1][l]{}\raisebox{\baselineskip}[0in][0in]{\raisebox{#2}[0in][0in]{#3}}}} \def\rightbox#1{\makebox[0in][r]{#1}} \def\centbox#1{\makebox[0in]{#1}} \def\topbox#1{\raisebox{-\baselineskip}[0in][0in]{#1}} \def\midbox#1{\raisebox{-0.5\baselineskip}[0in][0in]{#1}} \vspace{3cm} \title{ \logo{ ESP32 as incrementor in serial } } % paper title % can use linebreaks \\ within to get better formatting as desired %\title{Matrix Analysis through Octave} % % % author names and IEEE memberships % note positions of commas and nonbreaking spaces ( ~ ) LaTeX will not break % a structure at a ~ so this keeps an author's name from being broken across % two lines. % use \thanks{} to gain access to the first footnote area % a separate \thanks must be used for each paragraph as LaTeX2e's \thanks % was not built to handle multiple paragraphs % \author{Dishank Jain$^{*}$% <-this % stops a space \thanks{*The author is with the Department of Artificial Intelligence, Indian Institute of Technology, Hyderabad 502285 India e-mail: [email protected]. All content in this manual is released under GNU GPL. Free and open source.}% <-this % stops a space %\thanks{J. Doe and J. Doe are with Anonymous University.}% <-this % stops a space %\thanks{Manuscript received April 19, 2005; revised January 11, 2007.}} } % note the % following the last \IEEEmembership and also \thanks - % these prevent an unwanted space from occurring between the last author name % and the end of the author line. 
i.e., if you had this:
%
% \author{....lastname \thanks{...} \thanks{...} }
%                 ^------------^------------^----Do not want these spaces!
%
% a space would be appended to the last name and could cause every name on that
% line to be shifted left slightly. This is one of those "LaTeX things". For
% instance, "\textbf{A} \textbf{B}" will typeset as "A B" not "AB". To get
% "AB" then you have to do: "\textbf{A}\textbf{B}"
% \thanks is no different in this regard, so shield the last } of each \thanks
% that ends a line with a % and do not let a space in before the next \thanks.
% Spaces after \IEEEmembership other than the last one are OK (and needed) as
% you are supposed to have spaces between the names. For what it is worth,
% this is a minor point as most people would not even notice if the said evil
% space somehow managed to creep in.

% The paper headers
%\markboth{Journal of \LaTeX\ Class Files,~Vol.~6, No.~1, January~2007}%
%{Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for Journals}
% The only time the second header will appear is for the odd numbered pages
% after the title page when using the twoside option.
%
% *** Note that you probably will NOT want to include the author's ***
% *** name in the headers of peer review papers.                   ***
% You can use \ifCLASSOPTIONpeerreview for conditional compilation here if
% you desire.

% If you want to put a publisher's ID mark on the page you can do it like
% this:
%\IEEEpubid{0000--0000/00\$00.00~\copyright~2007 IEEE}
% Remember, if you use this you must call \IEEEpubidadjcol in the second
% column for its text to clear the IEEEpubid mark.

% make the title area
\maketitle
\tableofcontents
\bigskip

\begin{abstract}
This manual shows how to establish serial communication between a Raspberry Pi and an ESP32 chip.
\end{abstract}

%\newpage
\section{Components}
%\begin{table}[!h]
%\centering
\input{./figs/components.tex}
%\caption{}
%\label{table:components}
%\end{table}
%\setcounter{section}{3}
%\setcounter{problem}{3}
%\renewcommand{\thetable}{\theproblem}
%\setcounter{section}{2}
%\setcounter{problem}{2}
\section{Serial incrementor}
\begin{problem}
Flash the code from
\begin{lstlisting}
https://github.com/Dishank422/EE3900/tree/main/rpi_esp/codes/incr.ino
\end{lstlisting}
onto the ESP32 Dev Module using the Arduino IDE.
\end{problem}

\begin{problem}
Connect the pins of the RPi and the ESP32 according to table \ref{connect} and figure \ref{rpi_pinout}.
\end{problem}
\begin{figure}[h]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{figs/rpi_pinout.jpg}}
\caption{RPi pinout}
\label{rpi_pinout}
\end{figure}
\input{./figs/connections.tex}

\begin{problem}
Set up the Raspberry Pi and open a terminal on it. Run the following command in the terminal.
\begin{lstlisting}
sudo raspi-config
\end{lstlisting}
A configuration menu will open, as shown in figure \ref{raspi_config}.
\begin{figure}[h]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{figs/raspi-config.png}}
\caption{Raspi config}
\label{raspi_config}
\end{figure}
Go to \textbf{Interfacing Options $>$ Serial}. Turn off the login shell over serial, enable the serial port hardware, and reboot.
\end{problem}

\begin{problem}
Download the code from
\begin{lstlisting}
https://github.com/Dishank422/EE3900/tree/main/rpi_esp/codes/send_receive.py
\end{lstlisting}
Run the code in a terminal.
\end{problem}

\end{document}
% % % % % % % % % % % % % % % % %IMPORTANT %compiles with %pdflatex -shell-escape %IMPORTANT % % % % % % % % % % % % % % % % \documentclass% %[handout]% {beamer} %IMPORTANT: the following line selects the current lecture. Change number to select a different lecture. \newcommand{\currentLecture}{11} \mode<presentation> { \useinnertheme{rounded} \useoutertheme{infolines} \usecolortheme{orchid} \usecolortheme{whale} } %\setbeamertemplate{footline}{% % \raisebox{5pt}{\makebox[\paperwidth]{\hfill\makebox[10pt]{\scriptsize\insertframenumber}}}} \usepackage[english]{babel} \usepackage[latin1]{inputenc} \usepackage{times} \input{../example-templates} \input{../system-specific-config-Ubuntu-texlive} \input{../pstricks-commands} \mode<handout>{\pgfpagesuselayout{1 on A4}[a4paper,landscape]} \newcommand{\currentSemester}{2017} \usepackage[T1]{fontenc} % Or whatever. Note that the encoding and the font should match. If T1 % does not look nice, try deleting the line with the fontenc. \graphicspath{{../../modules/}} \newcommand{\lect}[4]{ \ifnum#3=\currentLecture \date{#1} \lecture[#1]{#2}{#3} #4 \else %include nothing \fi } \setbeamertemplate{footline} { \leavevmode% \hbox{% \begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{author in head/foot}% \usebeamerfont{author in head/foot}\insertshortauthor \end{beamercolorbox}% \begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}% \usebeamerfont{title in head/foot}\insertshorttitle \end{beamercolorbox}% \begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{date in head/foot}% \usebeamerfont{date in head/foot}\insertshortdate{} \end{beamercolorbox}}% \vskip0pt% } \setbeamertemplate{navigation symbols}{} \renewcommand{\Arcsin}{\arcsin} \renewcommand{\Arccos}{\arccos} \renewcommand{\Arctan}{\arctan} \renewcommand{\Arccot}{\text{arccot\hspace{0.03cm}}} \renewcommand{\Arcsec}{\text{arcsec\hspace{0.03cm}}} \renewcommand{\Arccsc}{\text{arccsc\hspace{0.03cm}}} % If you have a file called "university-logo-filename.xxx", where xxx % is a graphic format that can be processed by latex or pdflatex, % resp., then you can add a logo as follows: %\pgfdeclareimage[height=0.8cm]{logo}{bluelogo} %\logo{\pgfuseimage{logo}} \begin{document} \AtBeginLecture{% \title[\insertlecture]{Math 242} \subtitle{\insertlecture} \author[The freecalc team]{\Large The freecalc project team \\~\\ \normalsize Reference Lectures\\~\\ \url{https://github.com/tmilev/freecalc} } \date{2016} \begin{frame} \titlepage \end{frame} \begin{frame}{Outline} \tableofcontents[pausesections] \end{frame} }% % begin lecture \lect{\currentSemester}{Lecture 1}{-1}{ \input{../../modules/course-outline/course-outline-multivariable-calculus-UMB-242-Catalin-Zara} } % begin lecture \lect{\currentSemester}{Lecture 1}{1}{ %DesiredLectureName: Space_Cartesian_Coordinates \fcLicense \section{Space} %\input{../../modules/coordinate-systems/space-intro}<-needs rewrite from ground up. 
\input{../../modules/coordinate-systems/space-configurations-pairs-lines-or-planes} \input{../../modules/coordinate-systems/distances-and-angles} \input{../../modules/coordinate-systems/line-plane-configurations-vs-distances-and-angles} \input{../../modules/coordinate-systems/cartesian-coordinates} \input{../../modules/coordinate-systems/distance-in-cartesian-coordinates} \input{../../modules/coordinate-systems/distance-in-cartesian-coordinates-example-1} \input{../../modules/coordinate-systems/distance-in-cartesian-coordinates-example-2} \input{../../modules/coordinate-systems/sets-in-space} \input{../../modules/coordinate-systems/sets-in-space-equations} \input{../../modules/coordinate-systems/sets-in-space-sphere-equations} } % begin lecture \lect{\currentSemester}{Lecture 2}{2}{ %DesiredLectureName: Vector_Basics_Dot_Product \fcLicense \section{Vectors} %\input{../../modules/vectors/vectors-outline} %<-needs rewrite from ground up. \input{../../modules/vectors/vectors-definition} \input{../../modules/vectors/displacement-vectors} \input{../../modules/vectors/displacement-vector-equality} \input{../../modules/vectors/position-and-displacement-vectors} \input{../../modules/vectors/vector-addition-triangle-rule} \input{../../modules/vectors/vector-addition-commutativity-associativity} \input{../../modules/vectors/vector-difference} \input{../../modules/vectors/linear-combinations} %\input{../../modules/vectors/vector-decomposition-along-directions} %<-slide needs rewrite from ground up. \input{../../modules/vectors/vectors-in-coordinates} \input{../../modules/vectors/vectors-operations-in-coordinates} %\input{../../modules/vectors/work-constant-force} %<-needs a ground-up rewrite \section{Dot product of vectors} \input{../../modules/vectors/dot-product-def} \input{../../modules/vectors/dot-product-properties} \input{../../modules/vectors/dot-product-computation-in-coordinates} \input{../../modules/vectors/dot-product-computations-examples} \input{../../modules/vectors/projections-in-coordinates} \input{../../modules/vectors/projections-in-coordinates-examples} \input{../../modules/vectors/angles} \input{../../modules/vectors/direction-angles} %\input{../../modules/vectors/rotational-effect} %<-needs to be rewritten from ground up %\input{../../modules/vectors/torque} %<-needs to be rewritten from ground up } \lect{\currentSemester}{Lecture 3}{3}{ %DesiredLectureName: Cross_Product_Determinants \fcLicense \section{Cross product of vectors} \input{../../modules/vectors/torque-wrench-example} \input{../../modules/vectors/cross-product} \input{../../modules/vectors/cross-product-properties} \input{../../modules/vectors/orth-is-linear} \input{../../modules/vectors/cross-product-linearity-proof} \subsection{Determinants} \input{../../modules/determinants/permutations-def} \input{../../modules/determinants/permutation-sign} \input{../../modules/determinants/permutation-rook-placements} \input{../../modules/determinants/determinants-def} \input{../../modules/determinants/two-by-two} \input{../../modules/determinants/three-by-three} \subsection{Cross product in coordinates} \input{../../modules/vectors/cross-product-in-coordinates} \input{../../modules/vectors/cross-product-example} \input{../../modules/vectors/cross-product-find-orthogonal-vector-to-two-vectors-1} \input{../../modules/vectors/cross-product-to-find-area-triangle} \input{../../modules/vectors/cross-product-to-find-area-triangle-example-1} \input{../../modules/vectors/scalar-triple-product} 
\input{../../modules/vectors/scalar-triple-product-volume-slanted-box-example-1} \input{../../modules/vectors/scalar-triple-product-volume-tetrahedron-example-1} \input{../../modules/vectors/scalar-triple-product-four-points-coplanar-example-1} \input{../../modules/vectors/space-orientation} } %end lecture % begin lecture \lect{\currentSemester}{Lecture 4}{4}{ %DesiredLectureName: Distances_Angles_Between_Points_Lines_Planes \fcLicense \input{../../modules/vectors/incidence-questions-intro} \section{Equations of Lines} \subsection{Line from point and direction} \input{../../modules/vectors/line-from-point-and-direction} \input{../../modules/vectors/line-from-point-and-direction-example} \subsection{Line from two points} \input{../../modules/vectors/line-from-two-points} \input{../../modules/vectors/line-from-two-points-example} \section{Equations of planes} \subsection{Plane from point and normal} \input{../../modules/vectors/plane-from-point-and-normal} \input{../../modules/vectors/plane-from-point-and-normal-example} \subsection{Plane from two directions} \input{../../modules/vectors/plane-from-point-and-two-directions} \input{../../modules/vectors/plane-from-point-and-two-directions-example} \subsection{Plane from three points} \input{../../modules/vectors/plane-from-three-points} \input{../../modules/vectors/plane-from-three-points-example} \section{Distances, Angles, Parallelism, Incidence} \input{../../modules/vectors/angles-distances-questions-intro} \subsection{Distance Between Point and Line} \input{../../modules/vectors/distance-point-and-line} \input{../../modules/vectors/distance-point-and-plane} \subsection{Parallel Lines} \input{../../modules/vectors/parallel-lines} \subsection{Angle Between Lines} \input{../../modules/vectors/angle-between-lines} \subsection{Distance Between Skew Lines} \input{../../modules/vectors/distance-between-lines} \subsection{Distance Between Plane and Parallel Line} \input{../../modules/vectors/distance-parallel-plane-and-line} \subsection{Angle Between Plane and Line} \input{../../modules/vectors/angle-between-plane-and-line} \input{../../modules/vectors/intersection-plane-and-line} \subsection{Parallel Planes} \input{../../modules/vectors/parallel-planes} \subsection{Angle Between Planes} \input{../../modules/vectors/angle-between-planes} } %end lecture % begin lecture \lect{\currentSemester}{Lecture 5}{5}{ %DesiredLectureName: Polar_Cylindrical_Spherical_Coordinates \fcLicense \section{Polar Coordinates} \input{../../modules/polar-coordinates/polar-intro} \input{../../modules/polar-coordinates/polar-questions} \input{../../modules/polar-coordinates/polar-many-representations} \input{../../modules/polar-coordinates/polar-two-points-coincide-iff} \input{../../modules/polar-coordinates/polar-to-cartesian} \input{../../modules/polar-coordinates/polar-to-cartesian-ex2} \input{../../modules/polar-coordinates/polar-to-cartesian-ex3} %\input{../../modules/polar-coordinates/polar-intersection-ex3} \section{Cylindrical Coordinates} \input{../../modules/cylindrical-coordinates/cylindrical-coordinates-def} \input{../../modules/cylindrical-coordinates/cylindrical-coordinates-to-rectangular} \input{../../modules/cylindrical-coordinates/cylindrical-coordinates-constant-coordinate-sets} \section{Spherical Coordinates} \input{../../modules/spherical-coordinates/spherical-coordinates} \input{../../modules/spherical-coordinates/spherical-coordinates-to-cartesian} \input{../../modules/spherical-coordinates/spherical-coordinates-constant-coordinate-sets} 
\section{Curvilinear boxes} \input{../../modules/coordinate-systems/curvilinear-boxes-polar} \input{../../modules/coordinate-systems/curvilinear-boxes-cylindrical} \input{../../modules/coordinate-systems/curvilinear-boxes-spherical} }% end lecture \lect{\currentSemester}{Lecture 6}{6}{ %DesiredLectureName: Curves_in_Space \fcLicense \section{Curves in space} \input{../../modules/coordinate-systems/r-two-r-n} \input{../../modules/parametric-curves/parametric-equation-line-segment} \input{../../modules/parametric-curves/parametric-equation-line-segment-example} \input{../../modules/parametric-curves/parametrized-curves-def} \input{../../modules/parametric-curves/lines-as-parametric-curves-example} \input{../../modules/parametric-curves/parametric-curve-example-tornado} \input{../../modules/parametric-curves/limits} \input{../../modules/parametric-curves/continuity} \section{Tangent vectors, tangents} \input{../../modules/parametric-curves/derivatives} \input{../../modules/parametric-curves/tangent-lines} \input{../../modules/parametric-curves/tangent-examples} \input{../../modules/parametric-curves/differentiation-rules} \input{../../modules/parametric-curves/acceleration} \input{../../modules/parametric-curves/acceleration-example-loxodrome} \section{Line integrals} \input{../../modules/parametric-curves/line-integrals} \input{../../modules/parametric-curves/line-integral-properties} \input{../../modules/parametric-curves/line-integral-applications} \input{../../modules/parametric-curves/parametric-arc-length-formula-3d} \input{../../modules/parametric-curves/arc-length-function} \input{../../modules/parametric-curves/arc-length-example-helix} \input{../../modules/parametric-curves/parametrization-by-arc-length} \input{../../modules/parametric-curves/curve-reparametrizations} \input{../../modules/parametric-curves/curve-reparametrization-example} \section{Curvature} \input{../../modules/parametric-curves/curvature} \input{../../modules/parametric-curves/curvature-formula-proof} \input{../../modules/parametric-curves/curvature-example-helix} \input{../../modules/parametric-curves/curvature-example-torus-curve} \input{../../modules/parametric-curves/curvature-example-loxodrome} \input{../../modules/parametric-curves/components-of-acceleration} } % begin lecture \lect{\currentSemester}{Lecture 7}{7}{ %DesiredLectureName: Multivariable_Functions \fcLicense \section{Functions of Several Variables} \input{../../modules/multivariable-functions/multivariable-functions-intro} \subsection{Verbal description} \input{../../modules/multivariable-functions/multivariable-functions-verbal-description} \subsection{Numerical description} \input{../../modules/multivariable-functions/multivariable-functions-numerical-description} \subsection{Analytical description} \input{../../modules/multivariable-functions/multivariable-functions-analytical-description} \section{Graphical descriptions} \subsection{Functions of two variables} \input{../../modules/multivariable-functions/multivariable-functions-graphical-description} \subsection{Slices and level curves} \input{../../modules/multivariable-functions/multivariable-functions-slices-and-level-curves} \subsection{Level sets} \input{../../modules/multivariable-functions/level-sets} \subsection{Vector Fields} \input{../../modules/multivariable-functions/vector-fields} %\input{../../modules/multivariable-functions/vector-fields-decomposition-wrt-other-vector-fields} %<- needs rewrite. 
} \lect{\currentSemester}{Lecture 8}{8}{ %DesiredLectureName: Multivariable_Limits_and_Continuity \fcLicense \section{Limits of Functions of Several Variables} \input{../../modules/continuity-multivariable/limits-intro} \input{../../modules/continuity-multivariable/limit-example} \input{../../modules/continuity-multivariable/limit-definition} \input{../../modules/continuity-multivariable/infinite-limits-example} \input{../../modules/continuity-multivariable/directional-limits-def} \input{../../modules/continuity-multivariable/limit-non-existence-example} \input{../../modules/continuity-multivariable/directional-limits-and-one-variable-limits} \input{../../modules/continuity-multivariable/directional-limits-example} \input{../../modules/continuity-multivariable/limits-along-paths} \section{Continuity of Functions of Several Variables} \input{../../modules/continuity-multivariable/continuity} \input{../../modules/continuity-multivariable/continuity-vector-fields} \input{../../modules/continuity-multivariable/continuity-vector-fields-example} } \lect{\currentSemester}{Lecture 9}{9}{ %DesiredLectureName: Partial_Derivatives_Differentiability_Differentials \fcLicense \section{Partial Derivatives} \input{../../modules/partial-derivatives/rate-of-change-example} \input{../../modules/partial-derivatives/rate-of-change-along-lines} \input{../../modules/partial-derivatives/directional-derivatives} \input{../../modules/partial-derivatives/partial-derivatives} \input{../../modules/partial-derivatives/partial-derivatives-notation} \input{../../modules/partial-derivatives/partial-derivatives-example} \input{../../modules/partial-derivatives/partial-derivatives-graphical-interpretation} \input{../../modules/partial-derivatives/partial-derivatives-higher-order} \input{../../modules/partial-derivatives/partial-derivatives-higher-order-example} \section{Linearizations} \input{../../modules/partial-derivatives/multivariable-function-linearizations} \section{Differentiability} \input{../../modules/partial-derivatives/differentiability-intro} \input{../../modules/partial-derivatives/differentiable-function-def} \section{Differentials} \input{../../modules/partial-derivatives/total-differential} \input{../../modules/partial-derivatives/total-differential-and-linearization-example} } \lect{\currentSemester}{Lecture 10}{10}{ %DesiredLectureName: Multivariable_Chain_Rule_Differential_Operators \fcLicense \section{Multivariable Chain Rule} \input{../../modules/chain-rule-multivariable/multivariable-chain-rule-motivation} \input{../../modules/chain-rule-multivariable/multivariable-chain-rule} \input{../../modules/chain-rule-multivariable/multivariable-chain-tree-diagrams} \input{../../modules/chain-rule-multivariable/multivariable-3-2-chain-rule} \input{../../modules/chain-rule-multivariable/multivariable-3-2-chain-rule-example-power-exponential} \section{Directional Derivatives via the Chain Rule } \input{../../modules/chain-rule-multivariable/directional-derivatives-via-chain-rule} \input{../../modules/chain-rule-multivariable/directional-derivatives-via-chain-rule-example} \section{Gradient} \input{../../modules/optimization-multivariable/gradient} \input{../../modules/optimization-multivariable/gradient-in-coordinates} \input{../../modules/chain-rule-multivariable/covariant-derivative} \input{../../modules/optimization-multivariable/gradient-in-polar} \input{../../modules/optimization-multivariable/gradient-in-polar-application} \input{../../modules/optimization-multivariable/gradient-and-gravity} 
\section{Differential Operators} \input{../../modules/chain-rule-multivariable/differential-operator-definition} \input{../../modules/chain-rule-multivariable/differential-operator-notation} \subsection{Differential Operators Variable Changes} \input{../../modules/chain-rule-multivariable/multivariable-3-2-chain-rule-example-derivatives-polar-coordinates} \input{../../modules/chain-rule-multivariable/partial-derivatives-in-polar-coordinates-part1} \input{../../modules/chain-rule-multivariable/partial-derivatives-in-polar-coordinates-part2} \input{../../modules/harmonic-functions/laplace-operator} \input{../../modules/harmonic-functions/harmonic-functions-definition} } \lect{\currentSemester}{Lecture 11}{11}{ %DesiredLectureName: Surfaces_Quadratic_Surfaces_Tangent_Planes \begin{frame} \begin{center} \Large Please view this lecture in full screen mode. Scroll down the slides (in full screen mode) to see a demo of the freecalc graphics and presentation. \end{center} \end{frame} \fcLicense \section{Surfaces} % % % % %\input{../../modules/surfaces/surfaces-intro} %<-needs a rewrite % % % % %\input{../../modules/surfaces/surfaces-parametrizations}<-needs a rewrite \input{../../modules/surfaces/implicit-equation-vs-explicit-parametrization} % % % % %\input{../../modules/surfaces/parametrized-surfaces} % % % % %\input{../../modules/surfaces/surfaces-example-plane-parametric-equation} % % % % %\input{../../modules/surfaces/surfaces-example-sphere-open-chart} % % % % %\input{../../modules/surfaces/level-surfaces-as-parametrized-surfaces} \subsection{Quadric Surfaces} \input{../../modules/quadratic-surfaces/quadratic-surfaces-intro} \input{../../modules/quadratic-surfaces/quadratic-surfaces-imaginary-spheres} \input{../../modules/quadratic-surfaces/quadratic-surfaces-ellipsoids-spheres} \input{../../modules/quadratic-surfaces/quadratic-surfaces-cone} \input{../../modules/quadratic-surfaces/quadratic-surfaces-hyperboloid-one-sheet} \input{../../modules/quadratic-surfaces/quadratic-surfaces-hyperboloid-two-sheets} \input{../../modules/quadratic-surfaces/quadratic-surfaces-paraboloid} \input{../../modules/quadratic-surfaces/quadratic-surfaces-hyperbolic-paraboloid} \input{../../modules/quadratic-surfaces/quadratic-surfaces-summary-form1} \input{../../modules/quadratic-surfaces/quadratic-surfaces-summary-form2} \section{Tangent Planes} \input{../../modules/surfaces/tangent-plane-intro} % % % % %\input{../../modules/surfaces/tangent-plane-definition} % % % % %\input{../../modules/surfaces/tangent-plane-example-1} \input{../../modules/surfaces/tangent-plane-to-graph-surface} %\input{../../modules/surfaces/tangent-plane-to-level-surface} % % % % % %WARNING % % % % % %the following slides need serious editing, are not ready for lecturing at the moment % % % % %\input{../../modules/surfaces/surfaces-of-revolution} % % % % %\input{../../modules/surfaces/surfaces-of-revolution-example-torus} % % % % %\input{../../modules/surfaces/tangent-plane-definition} \input{../../modules/chain-rule-multivariable/directional-derivatives-tangent-plane-def-justification} % % % % %\input{../../modules/surfaces/tangent-plane-example-1} % % % % %\input{../../modules/surfaces/regular-points-level-surface} % % % % %\input{../../modules/surfaces/tangent-plane-equation} % % % % %\input{../../modules/surfaces/tangent-plane-to-level-surface-example-1} % % % % %\input{../../modules/surfaces/tangent-planes-to-quadratic-surfaces} % % % % %\input{../../modules/surfaces/graph-and-level-surfaces} 
\input{../../modules/surfaces/is-level-surface-graph-of-function} % % %\input{../../modules/surfaces/is-level-surface-graph-of-function-example} % % % % %<-needs rewrite % % % % %\input{../../modules/surfaces/implicit-functions} % % % % %\input{../../modules/surfaces/implicit-functions-example-sphere} % % % % %\input{../../modules/surfaces/implicit-function-theorem} % % % % %\input{../../modules/surfaces/implicit-function-theorem-and-graph-surfaces} % % % % %\input{../../modules/surfaces/implicit-differentiation} % % % % %\input{../../modules/surfaces/implicit-differentiation-example-1} % % % % %\input{../../modules/surfaces/implicit-differentiation-and-tangent-plane-example-1} } \lect{\currentSemester}{Lecture 12}{12}{ %DesiredLectureName: Maxima_Minima_Lagrange_Multipliers \section{Minima, Maxima} \input{../../modules/optimization-multivariable/maxima-minima-definitions} \input{../../modules/optimization-multivariable/critical-points} \input{../../modules/optimization-multivariable/critical-points-example} \input{../../modules/optimization-multivariable/second-derivative-test} \input{../../modules/optimization-multivariable/second-derivative-test-example} \input{../../modules/optimization-multivariable/second-derivative-test-example-distance-point-to-plane} \input{../../modules/optimization-multivariable/extreme-value-theorem} \input{../../modules/optimization-multivariable/second-derivative-test-example-box-no-lid-volume} \section{Lagrange Multipliers} % % % % %\input{../../modules/optimization-multivariable/lagrange-multipliers-intro-via-Cobb-Douglas-Function}<-needs rewrite % % % % %\input{../../modules/optimization-multivariable/lagrange-multipliers-intro-via-Cobb-Douglas-Function-part-2} % % % %<-needs rewrite %\input{../../modules/optimization-multivariable/lagrange-multipliers-intro-via-Cobb-Douglas-Function-part-3}<-needs rewrite \input{../../modules/optimization-multivariable/gradient-normal-to-level-curve} \input{../../modules/optimization-multivariable/lagrange-multipliers} % % % % %\input{../../modules/optimization-multivariable/lagrange-multipliers-remarks}<-needs rewrite \input{../../modules/optimization-multivariable/lagrange-multipliers-example-1} \input{../../modules/optimization-multivariable/lagrange-multipliers-example-box-no-lid-volume} %\input{../../modules/optimization-multivariable/lagrange-multipliers-least-squares-method} <-only a stub, needs to be written properly. 
\input{../../modules/optimization-multivariable/lagrange-multipliers-multiple-constraints} \input{../../modules/optimization-multivariable/lagrange-multipliers-multiple-constraints-example-1} } % begin lecture \lect{\currentSemester}{Lecture 13}{13}{ %DesiredLectureName: Double_Integrals \section{Double Integrals} \subsection{Riemann Sums, Double Integral Definition} \input{../../modules/integration-multivariable/integral-motivation-census-example} \input{../../modules/integration-multivariable/Riemann-sum-two-variables} \input{../../modules/integration-multivariable/double-integral-definitions} \input{../../modules/integration-multivariable/double-integral-midpoint-rule} \input{../../modules/integration-multivariable/double-integral-midpoint-rule-example} \subsection{Double integral properties} \input{../../modules/integration-multivariable/double-integral-theoretical-example-1} \input{../../modules/integration-multivariable/double-integral-properties} \input{../../modules/integration-multivariable/double-integral-applications-1} \input{../../modules/integration-multivariable/double-integral-mvt-area-limit} \input{../../modules/integration-multivariable/double-integral-vector-valued} \input{../../modules/integration-multivariable/double-integral-vector-valued-example-theoretical-1} \subsection{Iterated integrals} \input{../../modules/integration-multivariable/double-iterated-integrals} \input{../../modules/integration-multivariable/Fubini-theorem-double-integrals} \input{../../modules/integration-multivariable/double-iterated-integrals-example} \input{../../modules/integration-multivariable/double-integrals-regions-intro} \input{../../modules/integration-multivariable/double-integrals-curvilinear-trapezoids-base-vertical} \input{../../modules/integration-multivariable/double-integrals-curvilinear-trapezoids-base-horizontal} \input{../../modules/integration-multivariable/double-integrals-strategy} \input{../../modules/integration-multivariable/double-integrals-curvilinear-trapezoids-example-1} \input{../../modules/integration-multivariable/double-integrals-curvilinear-trapezoids-example-2} \input{../../modules/integration-multivariable/double-integrals-curvilinear-trapezoids-example-3} \input{../../modules/integration-multivariable/double-integrals-curvilinear-trapezoids-example-4} \input{../../modules/integration-multivariable/double-integrals-curvilinear-trapezoids-example-5} } \lect{\currentSemester}{Lecture 14}{14}{ %DesiredLectureName: Triple_Integrals \section{Triple Integrals} \input{../../modules/integration-multivariable/triple-integrals-intro-density-mass} \input{../../modules/integration-multivariable/triple-integrals-def} \input{../../modules/integration-multivariable/triple-integrals-examples-theoretical-1} \input{../../modules/integration-multivariable/triple-iterated-integrals} \input{../../modules/integration-multivariable/triple-integrals-example-moment-rectangular-box} \input{../../modules/integration-multivariable/triple-integrals-example-volume-1-slices} \input{../../modules/integration-multivariable/triple-integrals-example-volume-1-rods} } \lect{\currentSemester}{Lecture 15}{15}{ %DesiredLectureName: Parallelotopes_Variable_Changes_In_Multivariable_Integrals \section{Parallelotopes} \input{../../modules/integration-multivariable-variable-change/parallelotope-definition} \input{../../modules/integration-multivariable-variable-change/k-volume-parallelotope-in-n-space-definition} \input{../../modules/integration-multivariable-variable-change/k-volume-naming-conventions} 
\input{../../modules/integration-multivariable-variable-change/volume-parallelotope-equals-integral-one} \input{../../modules/integration-multivariable-variable-change/volume-parallelotope} \input{../../modules/determinants/multiply-determinant-column-by-number} \input{../../modules/integration-multivariable-variable-change/segment-in-3-space-length-example-1} \input{../../modules/integration-multivariable-variable-change/segment-in-3-space-length-formula} \input{../../modules/integration-multivariable-variable-change/parallelogram-in-2-space-area-example-1} \input{../../modules/integration-multivariable-variable-change/parallelogram-in-2-space-area-formula} \input{../../modules/integration-multivariable-variable-change/parallelogram-in-3-space-area-example-1} \input{../../modules/integration-multivariable-variable-change/parallelogram-in-3-space-area-formula} \input{../../modules/integration-multivariable-variable-change/parallelepiped-volume-example-1} \input{../../modules/integration-multivariable-variable-change/parallelepiped-volume-formula} \section{Variable Changes in Multivariable Integrals} \input{../../modules/integration-multivariable-variable-change/variable-change-terminology} \input{../../modules/integration-multivariable-variable-change/variable-change-notation} \input{../../modules/integration-multivariable-variable-change/jacobian} \input{../../modules/integration-multivariable-variable-change/variable-change-motivation-part-1} \input{../../modules/integration-multivariable-variable-change/variable-change-motivation-part-2} \input{../../modules/integration-multivariable-variable-change/variable-change-motivation-part-3} \input{../../modules/integration-multivariable-variable-change/variable-change-motivation-part-4} \input{../../modules/integration-multivariable-variable-change/integral-variable-change} \input{../../modules/integration-multivariable-variable-change/volume-of-ball} \input{../../modules/integration-multivariable-variable-change/volume-of-spherical-curvilinear-box} \input{../../modules/integration-multivariable-variable-change/volume-of-ellipsoid} \input{../../modules/integration-multivariable-variable-change/volume-of-toroid-part1} \input{../../modules/integration-multivariable-variable-change/volume-of-toroid-part2} \input{../../modules/integration-multivariable-variable-change/volume-of-toroid-part3} \input{../../modules/integration-multivariable-variable-change/volume-of-toroid-part4} \input{../../modules/integration-multivariable-variable-change/volume-of-raised-horn} \input{../../modules/integration-multivariable-variable-change/integral-variable-change-positivity-jacobian} %\input{../../modules/integration-multivariable-variable-change/area-of-ellipse} } \lect{\currentSemester}{Lecture 16}{16}{% %DesiredLectureName: Double_Integrals_Polar_Coordinates \section{Double Integrals in Polar Coordinates} \input{../../modules/integration-multivariable-variable-change/double-integrals-in-polar-coordinates} \input{../../modules/integration-multivariable-variable-change/moment-inertia-2d-object-axis-in-same-plane} \input{../../modules/integration-multivariable-variable-change/moment-inertia-annulus-axis-in-same-plane} \input{../../modules/integration-multivariable-variable-change/use-of-polar-coordinates-in-double-integrals} \input{../../modules/integration-multivariable-variable-change/polar-coordinates-double-integral-example-1} \input{../../modules/integration-multivariable-variable-change/polar-coordinates-double-integral-improper-example-1} \section{Triple 
Integrals in Cylindrical Coordinates} \input{../../modules/integration-multivariable-variable-change/triple-integrals-in-cylindrical-coordinates} \input{../../modules/integration-multivariable-variable-change/moment-inertia-cylindrical-coordinates} \input{../../modules/integration-multivariable-variable-change/moment-inertia-cylindrical-shell} \input{../../modules/integration-multivariable-variable-change/mass-center-regular-circular-cone} \section{Triple Integrals in Spherical Coordinates} \input{../../modules/integration-multivariable-variable-change/triple-integrals-in-spherical-coordinates} \input{../../modules/integration-multivariable-variable-change/triple-integrals-in-spherical-coordinates-example-centroid-half-ball} } \lect{\currentSemester}{Lecture 17}{17}{% %DesiredLectureName: Line_Integrals \section{Line integrals} \input{../../modules/integration-line-integrals/line-surface-integrals-intro} %%%%%\input{../../modules/integration-line-integrals/line-integral-intro} %<- needs a rewrite \input{../../modules/integration-line-integrals/line-integrals-Riemann-sums} \input{../../modules/integration-line-integrals/line-integral-definition} \input{../../modules/integration-line-integrals/line-integral-parametrization-and-computation} \input{../../modules/integration-line-integrals/line-integral-example-1} \subsection{Line Integral from Vector Field} \input{../../modules/integration-line-integrals/line-integral-from-vector-field} \input{../../modules/integration-line-integrals/line-integral-from-vector-field-parametrization-and-computation} \input{../../modules/integration-line-integrals/line-integral-from-vector-field-example-1} \subsection{Differential 1-forms} \input{../../modules/integration-line-integrals/differential-1-forms} \input{../../modules/integration-line-integrals/integrals-of-1-forms} \input{../../modules/integration-line-integrals/line-integrals-over-closed-path-notation} \input{../../modules/integration-line-integrals/integral-of-1-form-example-1-darctanydivx} \input{../../modules/integration-line-integrals/integral-of-1-form-example-1-dlnr} \input{../../modules/integration-line-integrals/integral-of-1-form-example-1-darctanydivx-in-polar} \input{../../modules/integration-line-integrals/integral-of-1-form-example-1-dlnr-in-polar} \input{../../modules/integration-line-integrals/integral-of-1-form-inverse-square-distance-fields} \input{../../modules/integration-line-integrals/integral-of-1-form-independence-of-path-intro} \input{../../modules/integration-line-integrals/conservative-fields} \input{../../modules/integration-line-integrals/conservative-field-is-gradient-field} \input{../../modules/integration-line-integrals/conservative-field-criterion} \input{../../modules/integration-line-integrals/conservative-fields-simply-connected-regions} \input{../../modules/integration-line-integrals/conservative-field-find-potential-example-1} \input{../../modules/integration-line-integrals/exact-1-forms} } \lect{\currentSemester}{Lecture 18}{18}{% %DesiredLectureName: 2d_orientation_Greens_Theorem % % % % % %\input{../../modules/integration-line-integrals/conservation-of-energy}<-needs a rewrite % % % % % %\input{../../modules/integration-line-integrals/vector-fields-2d-tangents-normals-intro}<-needs a rewrite % % % %\input{../../modules/integration-line-integrals/flux-and-divergence-2d}<-needs companion slides to be used % % % %\input{../../modules/integration-line-integrals/circulation-and-curl-2d}<-needs companion slides to be used % % % % % % % 
%\input{../../modules/integration-line-integrals/curl-and-divergence-2d-in-coordinates} %<-needs a rewrite \section{Orientation in 2D} \input{../../modules/integration-line-integrals/curve-orientation} \input{../../modules/integration-line-integrals/orientation-of-2d-space} \input{../../modules/integration-line-integrals/orientation-of-2d-space-and-clock-direction} \input{../../modules/integration-line-integrals/closed-curve-orientation-2d} \section{Green's Theorem} \input{../../modules/integration-line-integrals/greens-theorem} \input{../../modules/integration-line-integrals/greens-theorem-proof-part1} \input{../../modules/integration-line-integrals/greens-theorem-proof-part2} \input{../../modules/integration-line-integrals/greens-theorem-proof-part3} \input{../../modules/integration-line-integrals/areas-and-greens-theorem} \input{../../modules/integration-line-integrals/areas-and-greens-theorem-example-1} % % %\input{../../modules/integration-line-integrals/line-integrals-via-double-integrals-example1}<-needs a rewrite \input{../../modules/integration-line-integrals/line-integrals-via-double-integrals-example2} \input{../../modules/integration-line-integrals/greens-theorem-to-integrate-darctanydivx-part1} \input{../../modules/integration-line-integrals/greens-theorem-to-integrate-darctanydivx-part2} % % % % % %%\input{../../modules/integration-line-integrals/divergence-and-curl-2d} % % % % % %%%\input{../../modules/integration-line-integrals/divergence-2d-interpretation} % % % % % %%\input{../../modules/integration-line-integrals/curl-2d-interpretation} % % % % % %%\input{../../modules/integration-line-integrals/curl-divergence-2d-polar} % % % % % %%\input{../../modules/integration-line-integrals/curl-2d-example-rotational-field} % % % % % %%\input{../../modules/integration-line-integrals/curl-2d-example-inverse-distance-square-field} % % % % % %%\input{../../modules/integration-line-integrals/average-value-over-2d-region} % % % % % %%\input{../../modules/integration-line-integrals/centroids-mass-centers-2d-regions} } \lect{\currentSemester}{Lecture 19}{19}{ %DesiredLectureName: Surface_Integrals \section{Surface Integrals} \input{../../modules/integration-surface-integrals/surface-integrals-intro} \input{../../modules/vectors/magnitude-cross-product-and-gramm-determinants} \subsection{Surface area} \input{../../modules/integration-surface-integrals/surface-area} \input{../../modules/integration-surface-integrals/surface-area-graph-surface} \input{../../modules/integration-surface-integrals/surface-area-surface-of-revolution} \input{../../modules/integration-surface-integrals/pappus-first-centroid-theorem} \input{../../modules/integration-surface-integrals/surface-area-surface-of-revolution-example-torus} \input{../../modules/integration-surface-integrals/surface-area-surface-of-revolution-example-sphere} \input{../../modules/integration-surface-integrals/surface-integral-definition} \input{../../modules/integration-surface-integrals/surface-integral-example-hemisphere-centroid} %\input{../../modules/integration-surface-integrals/vector-valued-surface-integrals}<-needs rewrite %\input{../../modules/integration-surface-integrals/vector-fields-in-space-geometric-intro}<-needs rewrite \input{../../modules/integration-surface-integrals/surface-orientation} \subsection{Flux and Divergence} \input{../../modules/integration-surface-integrals/flux} \input{../../modules/integration-surface-integrals/divergence} \input{../../modules/integration-surface-integrals/flux-and-divergence} 
\input{../../modules/integration-surface-integrals/flux-computation} \input{../../modules/integration-surface-integrals/flux-example-1} \input{../../modules/integration-surface-integrals/flux-example-2} } \lect{\currentSemester}{Lecture 20}{20}{ %DesiredLectureName: Divergence_Theorem_Stokes_Theorem \section{Divergence Theorem} \input{../../modules/integration-surface-integrals/divergence-theorem} \input{../../modules/integration-surface-integrals/divergence-theorem-example-1} \input{../../modules/integration-surface-integrals/divergence-theorem-application-balloon-pressure} \input{../../modules/integration-surface-integrals/divergence-theorem-archimedes-law} %\input{../../modules/integration-surface-integrals/rotational-effect}<-needs a rewrite %\input{../../modules/integration-surface-integrals/vector-field-components} %<-needs a rewrite %\input{../../modules/integration-surface-integrals/induced-orientation-and-curl} %<-needs a rewrite \input{../../modules/integration-surface-integrals/curl-definition-in-coordinate-form} \input{../../modules/integration-surface-integrals/induced-orientation-boundary-curve} \input{../../modules/integration-surface-integrals/induced-orientation-boundary-curve-example-hemisphere} %\input{../../modules/integration-surface-integrals/coordinate-computation-of-curl-from-coordinate-free-definition} %<-needs a rewrite \input{../../modules/integration-surface-integrals/stokes-theorem} %\input{../../modules/integration-surface-integrals/stokes-theorem-corollaries}<-needs a rewrite \input{../../modules/integration-surface-integrals/stokes-theorem-applications} \input{../../modules/integration-surface-integrals/stokes-theorem-example-line-via-surface-integral-1} \input{../../modules/integration-surface-integrals/vector-potential} \input{../../modules/integration-surface-integrals/stokes-theorem-example-surface-via-line-integral-1} \input{../../modules/integration-surface-integrals/div-curl-grad} %\input{../../modules/integration-surface-integrals/unifying-theme-rough-draft}<-needs a rewrite } \end{document}
{ "alphanum_fraction": 0.7586060561, "avg_line_length": 55.7148817803, "ext": "tex", "hexsha": "b2b529e4a70bf10c74837b3804bfc985667fb9be", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2017-09-21T13:51:45.000Z", "max_forks_repo_forks_event_min_datetime": "2017-09-21T13:51:45.000Z", "max_forks_repo_head_hexsha": "b34b2ad2946fc0b6a9b403d4399a0cf9d23c19dd", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "tmilev/freecalc", "max_forks_repo_path": "lectures/UMB-Reference-Lectures/Calculus_III(Multivariable).tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b34b2ad2946fc0b6a9b403d4399a0cf9d23c19dd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "tmilev/freecalc", "max_issues_repo_path": "lectures/UMB-Reference-Lectures/Calculus_III(Multivariable).tex", "max_line_length": 148, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b34b2ad2946fc0b6a9b403d4399a0cf9d23c19dd", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "tmilev/freecalc", "max_stars_repo_path": "lectures/UMB-Reference-Lectures/Calculus_III(Multivariable).tex", "max_stars_repo_stars_event_max_datetime": "2017-07-12T11:15:57.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-12T11:15:57.000Z", "num_tokens": 10222, "size": 40059 }
\unnumberedchapter{Abstract}
\chapter*{Abstract}
\subsection*{\thesistitle}

Provide a concise summary of your proposed research in approximately 250 words. Remember that the abstract is {\it not\/} an introduction; it is a {\it summary\/} of the entire document. It therefore makes sense to wait until the rest of the document has been written before writing the abstract.
{ "alphanum_fraction": 0.7777777778, "avg_line_length": 51.4285714286, "ext": "tex", "hexsha": "9b8e1995aeafec712c655c9b67dc7505a7b0d2b5", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-12-12T12:29:24.000Z", "max_forks_repo_forks_event_min_datetime": "2021-12-12T12:29:24.000Z", "max_forks_repo_head_hexsha": "3b698ea64f94249ca462e3e4073c9857dc2804af", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "Allegheny-Computer-Science-Thesis-2021/cmpsc-600-senior-thesis-proposal", "max_forks_repo_path": "preamble/abstract.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3b698ea64f94249ca462e3e4073c9857dc2804af", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "Allegheny-Computer-Science-Thesis-2021/cmpsc-600-senior-thesis-proposal", "max_issues_repo_path": "preamble/abstract.tex", "max_line_length": 106, "max_stars_count": null, "max_stars_repo_head_hexsha": "3b698ea64f94249ca462e3e4073c9857dc2804af", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "Allegheny-Computer-Science-Thesis-2021/cmpsc-600-senior-thesis-proposal", "max_stars_repo_path": "preamble/abstract.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 84, "size": 360 }
\section*{Attachments} % \begin{equation} % distancia = \sqrt{ (x_{goal} - x_{state})^2 + (y_{goal} - y_{state})^2 } % \label{eq:distancia-dois-pontos} % \end{equation} % \begin{figure}[h!] % \centering % \includegraphics[width=0.5\hsize]{figuras/cnn.png} % \caption{Estrutura da Rede Convolucional.} % \label{fig:cnn-arq} % \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.75\columnwidth]{figuras/figura1.png} \caption{Classical pipeline for Visual Odometry.} \label{fig:1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.75\columnwidth]{figuras/figura2.png} \caption{DeepVO pipeline for Visual Odometry.} \label{fig:2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/figura8.png} \caption{Architecture of the RCNN monocular VO system.} \label{fig:8} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/figura9.png} \caption{Configuration of the CNN for monocular VO system.} \label{fig:9} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/figura10.png} \caption{Training losses for the monocular VO system for the Wang paper.} \label{fig:10} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/figura11.png} \caption{Obtained map for the monocular VO system under the training dataset for the Wang paper.} \label{fig:11} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/figura12.png} \caption{Obtained map for the monocular VO system under the test dataset for the Wang paper.} \label{fig:12} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/figura13.png} \caption{Average errors on translation and rotation against different path lengths and speeds for the Wang paper.} \label{fig:13} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/neig.png} \caption{Example of image obtained at the Neighbourhood scene from AirSim.} \label{fig:neig} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/africa.png} \caption{Example of image obtained at the Africa scene from AirSim.} \label{fig:africa} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/mountains.png} \caption{Example of image obtained at the LandscapeMountains scene from AirSim.} \label{fig:mountain} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/train_batch_dropout_90.png} \caption{Training losses without the weights of FlowNet.} \label{fig:train90} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/train_batch_dropout_150.png} \caption{Training losses with the weights of FlowNet.} \label{fig:train150} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/testeSeq2.png} \caption{Comparison between the ground truth position and the odometry provided by the system for the first test sequence.} \label{fig:seq2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/testeSeq4.png} \caption{Comparison between the ground truth position and the odometry provided by the system for the second test sequence.} \label{fig:seq4} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\columnwidth]{figuras/testeSeq5.png} \caption{Comparison between the ground truth position and the odometry provided by the system 
for the third test sequence.} \label{fig:seq5} \end{figure}
{ "alphanum_fraction": 0.6380563124, "avg_line_length": 35.5161290323, "ext": "tex", "hexsha": "d7b8c1e641dd295628ee5f817366747bc7e44e86", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-09-11T01:27:01.000Z", "max_forks_repo_forks_event_min_datetime": "2020-04-30T20:08:40.000Z", "max_forks_repo_head_hexsha": "58258759fd6d750e5ec4fc1a27472b78614d5add", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "luizcartolano2/mc907-mobile-robotics", "max_forks_repo_path": "project3/[MC907]project3/7-anexos.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58258759fd6d750e5ec4fc1a27472b78614d5add", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "luizcartolano2/mc907-mobile-robotics", "max_issues_repo_path": "project3/[MC907]project3/7-anexos.tex", "max_line_length": 132, "max_stars_count": 7, "max_stars_repo_head_hexsha": "58258759fd6d750e5ec4fc1a27472b78614d5add", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "luizcartolano2/mc907-mobile-robotics", "max_stars_repo_path": "project3/[MC907]project3/7-anexos.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-30T03:05:00.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-08T04:54:27.000Z", "num_tokens": 1251, "size": 4404 }
\subsubsection{File Format}
\label{file_format}
Relevant for the evaluation of the experiment are the acceleration data along the three axes and the \textit{B} button down and up events that mark the beginning and the end of a gesture. This results in three different kinds of events: incoming acceleration data, a \textit{B} button press and a \textit{B} button release. The controller sends acceleration data with an average gap of five milliseconds between samples. Every event is marked with the time in milliseconds that has passed since the start of the experiment.

The line representing an acceleration data event in a recorded file starts with the time in milliseconds, followed by the acceleration values of all three axes. The acceleration value of an axis is transferred as an integer with the unit decimeter per second squared. A \textit{B} button down event is represented by a line that also starts with the time in milliseconds, followed by the keyword \textit{START} and a counter for the gesture. A \textit{B} button up event has the same form, with the keyword \textit{END} instead of \textit{START}. An example excerpt of a recorded file can look like the following.

\medskip

\noindent
{\it Example excerpt of a recorded file}
\begin{verbatim}
3045 20 19 74
3050 16 12 70
3055 START 1
3055 14 13 88
3060 0 11 76
\end{verbatim}
\noindent
{\small Lines 1, 2, 4 and 5 are representations of acceleration data, and line 3 marks the beginning of the first gesture, triggered by a \textit{B} button down event.}

\medskip

The recorded files are evaluated in section \ref{evaluation}. All recorded files used in the evaluation are available on GitHub\footnote{https://github.com/GordonLesti/SlidingWindowFilter-evaluator/tree/v1.0.1/src/main/resources}.
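For illustration, the following short Python sketch shows one possible way to parse a recorded file with this format. It is only a sketch: the function name and the returned data structures are hypothetical and are not part of the software used in the experiment.

\begin{verbatim}
# Hypothetical parser sketch for the recorded file format described above.
def parse_recording(path):
    samples = []   # (time_ms, x, y, z) acceleration events
    gestures = {}  # gesture counter -> {"START": time_ms, "END": time_ms}
    with open(path) as recording:
        for line in recording:
            fields = line.split()
            if not fields:
                continue
            time_ms = int(fields[0])
            if fields[1] in ("START", "END"):
                # B button event: keyword followed by the gesture counter
                gestures.setdefault(int(fields[2]), {})[fields[1]] = time_ms
            else:
                # acceleration event: three integers in dm/s^2
                x, y, z = (int(value) for value in fields[1:4])
                samples.append((time_ms, x, y, z))
    return samples, gestures
\end{verbatim}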
{ "alphanum_fraction": 0.7970770096, "avg_line_length": 55.59375, "ext": "tex", "hexsha": "0f5206500c502a74c78ea753d999e9fa6bc4773f", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-01-11T23:15:57.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-11T23:15:57.000Z", "max_forks_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "GordonLesti/SlidingWindowFilter", "max_forks_repo_path": "bachelor-thesis/experiment/recording_setup/file_format.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "GordonLesti/SlidingWindowFilter", "max_issues_repo_path": "bachelor-thesis/experiment/recording_setup/file_format.tex", "max_line_length": 119, "max_stars_count": 2, "max_stars_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "GordonLesti/SlidingWindowFilter", "max_stars_repo_path": "bachelor-thesis/experiment/recording_setup/file_format.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-14T11:43:53.000Z", "max_stars_repo_stars_event_min_datetime": "2017-06-22T09:37:30.000Z", "num_tokens": 442, "size": 1779 }
\section{Education}
  \resumeSubHeadingListStart
    \resumeSubheading
      {Master of Science in Computer Science,}{UQAM | Université du Québec À Montréal}{\WorkLocation{Montréal, Qc, Ca}}
      {\WorkPeriod{Sep 2016}{\break Dec 2018}}
      \resumeItemListStart
        \renewcommand{\labelitemii}{\raisebox{.39cm}{$\bullet$}}
        \resumeItem{Main research project}{Android: Automatic detection of presentation layer architectural design patterns (MVC, MVP, MVVM) in Android apps through bytecode analysis. A tooled approach and an empirical study. Source: \underline{\url{https://github.com/AymenDaoudi/Rimaz}}}
        \renewcommand{\labelitemii}{\raisebox{.15cm}{$\bullet$}}
        \resumeItem{Research Papers}{Published a research paper \textbf{"\textit{\underline{\href{https://dl.acm.org/citation.cfm?id=3297447&dl=ACM&coll=DL}{An Exploratory Study of MVC-based Architectural Patterns in Android Apps}}}"} at the international ACM SAC Conference (SAC’19).}
        \renewcommand{\labelitemii}{\raisebox{.27cm}{$\bullet$}}
        \resumeItem{Research Project}{Participated in a research project with the company Savoir Faire Linux, proposing solutions to assess the software quality of P2P VoIP apps such as Jami (prev. Ring), Skype and Tox in terms of performance and energy consumption.}
        \renewcommand{\labelitemii}{\raisebox{.05cm}{$\bullet$}}
        \resumeItem{Supervising}{Supervising interns building Xamarin applications.}
        \renewcommand{\labelitemii}{\raisebox{0.05cm}{$\bullet$}}
        \resumeItem{Teaching}{Giving Java courses and workshops for Bachelor students.}
      \resumeItemListEnd
    \resumeSubheading
      {Master's degree in Computer Science (Information Systems),}{ESI | Ecole Nationale Supérieure d’Informatique}{\WorkLocation{Algiers, Algeria}}
      {\WorkPeriod{Sep 2007}{\break June 2014}}
      \resumeItemListStart
        \renewcommand{\labelitemii}{\raisebox{.24cm}{$\bullet$}}
        \resumeItem{Final Project}{Ranking web pages using Google’s PageRank algorithm: study, modeling, implementation and simulation. Source: \underline{\url{https://github.com/SamTheDev/PageRank-Analyzer}}}
      \resumeItemListEnd
    \resumeSubheading
      {High school diploma (baccalauréat)}{Lycée Chihani Bachir}{\WorkLocation{Khenchela, Algeria}}
      {\WorkPeriod{2004}{ 2007}}
  \resumeSubHeadingListEnd
{ "alphanum_fraction": 0.7448096886, "avg_line_length": 68, "ext": "tex", "hexsha": "c469ba7efe88207ac816e2a6d83ab017aa8d6c5b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "00bfb33ba3d50d96d489aad53afff4f7164c7816", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AymenDaoudi/Resume", "max_forks_repo_path": "Parts/Sections/Education.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "00bfb33ba3d50d96d489aad53afff4f7164c7816", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AymenDaoudi/Resume", "max_issues_repo_path": "Parts/Sections/Education.tex", "max_line_length": 214, "max_stars_count": 1, "max_stars_repo_head_hexsha": "00bfb33ba3d50d96d489aad53afff4f7164c7816", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AymenDaoudi/Resume", "max_stars_repo_path": "Parts/Sections/Education.tex", "max_stars_repo_stars_event_max_datetime": "2018-11-03T17:53:52.000Z", "max_stars_repo_stars_event_min_datetime": "2018-11-03T17:53:52.000Z", "num_tokens": 643, "size": 2312 }
\section{Model definition}
\label{model-definition}
This section gives an overview of the models used on the different training sets. We start with the multilayer perceptrons and end with the convolutional networks used on the image dataset. For both models we present the architecture, the initial results and the hyperparameter tuning phase.

\subsection{Multilayer perceptron}
\label{mlp}
The first model used in the experiments is a multilayer perceptron. The considered training sets are the ones presented in Subsections 2.2 and 2.3, namely the first and the extended datasets. For the first one, scaled and unscaled versions are considered, while for the second the normal and PCA versions are tested.

\subsubsection{Network structure}
\label{net-str}
The starting point for the neural network structure is a network with a reasonable number of hidden neurons, chosen to prevent over-fitting: a high number of units in the hidden layers would end up learning too much from the dataset, leading to poor performance on the test sets. For this reason, the rule of thumb followed to decide the number of hidden neurons is the following:
$$\#\mathit{hidden\; neurons} = \frac{2}{3}\big(\#\mathit{input\;neurons} + \#\mathit{output\;neurons}\big)$$
The next step is to decide the number of hidden layers. As the rule presented above gives a rather small number of units, only two layers are considered; a higher number of layers would mean having very few neurons per layer. Applying this rule resulted in the following architectures on the four different training sets, where the output layer is fixed at 10:
\begin{center}
\begin{tabular}{ |l|r|r|r| }
\hline
Training set & Input & 1st Hidden & 2nd Hidden \\
\hline
132 features unscaled & 132 & 60 & 30 \\
132 features scaled & 132 & 60 & 30 \\
180 features scaled & 180 & 80 & 46 \\
102 features reduced with PCA & 102 & 45 & 30 \\
\hline
\end{tabular}
\end{center}
To build the actual model, the \emph{Tensorflow} and \emph{Keras} libraries are used.~\cite{tensorflow}\cite{keras}

\paragraph{Starting point model}
For reference, Figure \ref{mod} shows the model used on the PCA training set, mentioned at the end of Subsection \vref{extended-dataset}, with 102 input features.
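As a concrete reference, the following minimal Keras sketch shows how the starting point model for the PCA training set could be defined. It is only an illustration of the architecture in the table above, anticipating the activation, loss and optimizer choices discussed in the next paragraphs; it is not the exact training script used in the experiments.

\begin{verbatim}
import tensorflow as tf

# Starting point MLP for the 102-feature PCA training set (102-45-30-10).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(45, activation="relu", input_shape=(102,)),
    tf.keras.layers.Dense(30, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Sparse categorical crossentropy and plain SGD, as described below.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
\end{verbatim}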
\begin{figure}
\begin{center}
\begin{tikzpicture}[x=1.6cm, y=0.9cm, >=stealth]
% layers
\foreach \m/\l [count=\y] in {1,2,3,missing,4}
  \node [every neuron/.try, neuron \m/.try] (input-\m) at (0,2.5-\y) {};
\foreach \m [count=\y] in {1,missing,2}
  \node [every neuron/.try, neuron \m/.try ] (hidden-\m) at (1.75,3-\y*1.75) {};
\foreach \m [count=\y] in {1,missing,2}
  \node [every neuron/.try, neuron \m/.try ] (hidden2-\m) at (3.5,2-\y*1.25) {};
\foreach \m [count=\y] in {1,missing,2}
  \node [every neuron/.try, neuron \m/.try ] (output-\m) at (5.25,1.5-\y) {};
% label
\foreach \l [count=\i] in {1,2,3,{102}}
  \draw [<-] (input-\i) -- ++(-1,0) node [above, midway] {$I_{\l}$};
\foreach \l [count=\i] in {1,45}
  \node [above] at (hidden-\i.north) {$H_{\l}$};
\foreach \l [count=\i] in {1,30}
  \node [above] at (hidden2-\i.north) {$H_{\l}$};
\foreach \l [count=\i] in {1,10}
  \draw [->] (output-\i) -- ++(1,0) node [above, midway] {$O_{\l}$};
\foreach \i in {1,...,4}
  \foreach \j in {1,...,2}
    \draw [->] (input-\i) -- (hidden-\j);
\foreach \i in {1,...,2}
  \foreach \j in {1,...,2}
    \draw [->] (hidden-\i) -- (hidden2-\j);
\foreach \i in {1,...,2}
  \foreach \j in {1,...,2}
    \draw [->] (hidden2-\i) -- (output-\j);
\foreach \l [count=\x from 0] in {Input, 1st Hidden, 2nd Hidden, Output}
  \node [align=center, above] at (\x*1.75,2) {\l \\ layer};
\end{tikzpicture}
\end{center}
\caption{MLP architecture used on the PCA dataset}
\label{mod}
\end{figure}

\paragraph{Activation and loss functions}
The activation function for the input and hidden layers is a \emph{Relu}, typically used to prevent the \emph{vanishing gradient problem}, while the output layer uses a \emph{Softmax} to obtain classification probabilities among the classes.~\cite{relu}\cite{soft}\cite{vanishing}\\
The loss used is the \emph{Sparse Categorical Crossentropy loss}, as it is well suited for multiclass classification and, since the classes are integers and not one-hot encoded, the sparse version is preferred.~\cite{entropy}

\paragraph{Choosing an optimizer}
When choosing an optimizer for a neural network, one must take into account the cost of reaching a minimum point of the error function. Although more complex optimizers exist, built to reduce training cost or to achieve better performance on deep networks, the one chosen for this model is a classic Stochastic Gradient Descent optimizer.

\subsubsection{Training set results}
Section \vref{feature-extraction} describes the four different training sets obtained from the dataset and, without going into details, states that there is continuous improvement. We now take a more detailed look at training performance, after detailing how class imbalance is handled and which validation method is applied. Note that all the random seeds used by Tensorflow are fixed to make the results reproducible. This step is necessary as many parameters, for instance the neural network weights, are initialized randomly.

\paragraph{Class imbalance}
To deal with the minority of some classes, balancing techniques should be applied when fitting the model. One of the possible approaches, and the one followed here, is to assign class weights.
The main idea is to penalize errors made on under-represented classes to account for their minority; in particular, each class has weight:
$$w_i = \frac{\#\mathit{total\;samples}}{\#\mathit{classes} * \#\mathit{samples\;of\;class\;i}}$$
The class weights computation relies on the \emph{compute class weight} function from sklearn with the balanced logic.~\cite{classweight}\\
The following are the computed quantities for the dataset classes:
\begin{center}
\begin{tabular}{ |l|c|c| }
\hline
Class name & Number of samples & Class weight \\
\hline
air conditioner & 500 & 0.8998 \\
car horn & 208 & 2.1629 \\
children playing & 500 & 0.8998 \\
dog bark & 500 & 0.8998 \\
drilling & 500 & 0.8998 \\
engine idling & 517 & 0.8702 \\
gun shot & 190 & 2.3678 \\
jackhammer & 548 & 0.8209 \\
siren & 536 & 0.8393 \\
street music & 500 & 0.8998 \\
\hline
\end{tabular}
\end{center}
As expected, the less numerous classes have a higher class weight than the rest; in particular, the misclassification of a car horn sample counts more than twice as much as that of an air conditioner one.

\paragraph{Stratified cross-validation}
To estimate performance on the training set, stratified cross-validation with five folds is used. The dataset is divided into five parts and a model is repeatedly trained on four of them and tested on the remaining one, all while taking the class distribution into account: the original distribution of the classes is maintained in the splits.~\cite{stratified}
The stratified approach is required because there is class imbalance in the training set. In fact, applying a classical cross-validation could give misleading results, for instance when the minority classes are more present in the test fold than in the training ones; in such cases the loss would be higher.
The mean accuracy on the test folds gives a hint about the model performance. For this step the \emph{Stratified KFold} class from scikit-learn is used.~\cite{cross-scikit}

\paragraph{Results}
The following are the results on the training sets, using the architectures presented in \vref{net-str}. The runs are performed with a five-fold stratified cross-validation. At each fit, the number of epochs is fixed at 100, the batch size is set at 64 and class weights, computed once on the entire training set, are considered. The optimizer is stochastic gradient descent with zero momentum and a 0.01 learning rate. Mean accuracy and standard deviation are computed on the fold results.
\begin{center}
\begin{tabular}{ |l|r|r| }
\hline
Training set & Mean accuracy & St. deviation \\
\hline
132 features unscaled & 0.1138 & 0.0039 \\
132 features scaled & 0.5743 & 0.0324 \\
180 features scaled & 0.6363 & 0.0494 \\
102 features reduced with PCA & 0.6188 & 0.0420 \\
\hline
\end{tabular}
\end{center}
There is a great improvement after scaling the training set; after that, only small refinements are obtained. As accuracy is best on the last two training sets, those are the two selected for hyperparameter tuning.

\subsubsection{Hyperparameter tuning}
The last two tries with feature extraction led to the best results with stratified cross-validation. Although the two models are reasonable, they cannot be the final ones, as many parameters are left at their default values; for instance, the learning rate and momentum of the optimizer are untouched. The main goal now is to experiment with ranges of model parameters to find a better model.
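Before describing the search itself, the evaluation procedure used for all the runs in this chapter (class weights plus stratified five-fold cross-validation) can be summarized with the following minimal Python sketch. It assumes the features and labels are available as NumPy arrays \texttt{X} and \texttt{y}, and that \texttt{build\_model} is a hypothetical helper returning a freshly compiled Keras model; it is a simplified illustration, not the exact code used in the experiments.

\begin{verbatim}
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.utils.class_weight import compute_class_weight

# Class weights with the "balanced" logic, computed once on the training set.
classes = np.unique(y)
weights = compute_class_weight("balanced", classes=classes, y=y)
class_weight = dict(zip(classes, weights))

# Stratified five-fold cross-validation with a fixed random seed.
scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    model = build_model()  # hypothetical: returns a compiled Keras model
    model.fit(X[train_idx], y[train_idx], epochs=100, batch_size=64,
              class_weight=class_weight, verbose=0)
    _, accuracy = model.evaluate(X[test_idx], y[test_idx], verbose=0)
    scores.append(accuracy)

print("mean accuracy:", np.mean(scores), "st. deviation:", np.std(scores))
\end{verbatim}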
From now on, the model built on the 180-feature training set is called the \emph{Extended model}, while the one tested on the PCA training set is named the \emph{PCA model}.

\paragraph{Grid and random search comparison}
Two of the most commonly used strategies in hyperparameter optimization are \emph{grid} and \emph{random search}~\cite{random-grid}. In both cases we define ranges of parameters and test different combinations; for instance, with the number of neurons fixed, one could try to find the combination of learning rate and momentum that maximizes accuracy on the training set. While similar, the two methodologies differ in the amount of exploration they do. Grid search tries all the possible combinations of parameters, while the random approach fixes a number of iterations and picks an arbitrary unseen combination each time. Obviously the first one is more computationally expensive than the second, if we fix a small number of iterations for the latter, but in theory it finds a better result than going the random route. Nonetheless, grid search can lead to over-fitting, and in practice random search is preferred.

\paragraph{Random search}
We now run a random search over various parameters to optimize the initial models. Note that class weights are still considered and the models are evaluated again with a five-fold stratified cross-validation. The optimizer used is stochastic gradient descent. The parameter ranges considered for this run are:
\begin{enumerate}
\item \emph{Neurons}: the input layer has dimension $I$, the last layer is fixed at 10, while the two hidden layers are tested with a number of neurons respectively equal to:
$$H_1 + 2i\;\;\text{and}\;\;H_2 + 2j,\;\;\text{with}\;\; i, j \in \{-2,-1,0,1,2\}$$
\item \emph{Learning rate}: $0.001, 0.01, 0.1, 0.5$;
\item \emph{Momentum}: $0.0, 0.01, 0.1, 1$;
\item \emph{Epochs}: $60, 80, 100$;
\item \emph{Batch size}: $32, 64$.
\end{enumerate}
The quantities $I$, $H_1$, $H_2$, which are the input, first hidden and second hidden layer dimensions, depend on the initial network architecture. As stated before, the two models considered are the following:
\begin{center}
\begin{tabular}{ |l|r|r|r|}
\hline
Model name & $I$ & $H_1$ & $H_2$ \\
\hline
Extended model & 180 & 80 & 46 \\
PCA model & 102 & 45 & 30 \\
\hline
\end{tabular}
\end{center}
An \emph{early stopper} with patience equal to three is used during training to stop it when no progress is made over the last three epochs.~\cite{early} The search is performed with 100 iterations for both rounds. The Extended model is trained on the 180-feature dataset, while the PCA model on the 102-feature dataset.

\paragraph{Final models}
The first round of the random search, performed on the Extended model, results in the following parameters:
\begin{itemize}
\item Neurons: 180 for input, 76 for the first hidden layer, 50 for the second and 10 for output;
\item Momentum: 0.01
\item Learning rate: 0.01
\item Epochs: 80
\item Batch size: 64
\end{itemize}
We can see an improvement in accuracy by comparing it to the starting point model:
\begin{center}
\begin{tabular}{ |l|r|r| }
\hline
Model & Mean accuracy & St. deviation \\
\hline
Initial extended model & 0.6363 & 0.0494\\
Tuned extended model & 0.6497 & 0.0431\\
\hline
\end{tabular}
\end{center}
The second random search, on the smaller PCA model, finds these parameters:
\begin{itemize}
\item Neurons: 102 for input, 45 for the first hidden layer, 32 for the second and 10 for output;
\item Momentum: 0.01
\item Learning rate: 0.1
\item Epochs: 80
\item Batch size: 32
\end{itemize}
As before, comparing it with the starting model, we can see somewhat better results:
\begin{center}
\begin{tabular}{ |l|r|r| }
\hline
Model & Mean accuracy & St. deviation \\
\hline
Initial PCA model & 0.6188 & 0.0420\\
Tuned PCA model & 0.6270 & 0.0315\\
\hline
\end{tabular}
\end{center}

\paragraph{Final remarks on MLP}
Unexpectedly, the application of PCA led to worse results compared to the 180-feature dataset. Even after the random search, the so-called Tuned Extended model performs better. For this reason, it is the final multilayer perceptron evaluated on the test set.

\subsection{Convolutional neural network}
\label{cnn}
This subsection presents the results obtained by a convolutional neural network trained on the image datasets presented at the end of Section \vref*{feature-extraction}. We start by giving an overview of the network and then proceed with the training results and hyperparameter tuning.

\subsubsection{Network structure}
The structure of a neural network for image classification consists of convolutional layers followed by pooling layers and, at the end, densely connected layers that produce the output classification. The main idea is to extract relevant features from the images with the convolutional layers, which apply a kernel to the input to detect patterns and image features. After the convolution part there is a pooling layer that reduces dimensionality. At the end, densely connected layers classify the obtained features.

\paragraph{Stacking convolutions}
Taking inspiration from the \emph{VGG19 architecture}, which stacks two or more convolutional layers before applying pooling, the network is built with the following layers:~\cite{vgg}
\begin{itemize}
\item \emph{Two Convolution layers}: each one with 8 filters, kernel size of $5 \times 5$, Relu activation, $1 \times 1$ stride;
\item \emph{MaxPooling}: pooling size of $2 \times 2$, $3 \times 3$ strides;
\item \emph{Two Convolution layers}: each one with 16 filters, kernel size of $3 \times 3$, Relu activation, $1 \times 1$ stride;
\item \emph{MaxPooling}: pooling size of $2 \times 2$, $3 \times 3$ strides;
\item \emph{Flatten layer};
\item \emph{Dense layers}: dimensions are 2000, 800, 200 and 60, Relu activation;
\item \emph{Output}: 10 neurons and softmax activation.
\end{itemize}
\begin{figure}[h]
\includegraphics[width=\textwidth]{images/cnn.png}
\caption{Diagram of the network used in the experiments. We can see the first two convolutions followed by the max pooling layer (the first two blue layers and the red one, respectively), then two other convolutions and another max pooling, and finally the densely connected layers, in gray. The diagram also shows the principle of learning higher level features before fine grain ones: the dimensionality increases from the first convolutions to the second ones.}
\label{cnn}
\end{figure}

\paragraph{Network parameters}
Crucial parameters for a convolutional layer are the number of filters, the kernel size and the stride dimensions.
One can see a small kernel size as a way of learning fine-grained features; on the contrary, a bigger size captures higher level image characteristics. Finally, strides determine the reduction of the output shape; for instance, a stride equal to two reduces the output shape by half. One important factor in choosing the values of these parameters is the output dimensionality: while strides reduce it, more filters and smaller kernels increase the feature space. For these reasons, the number of filters is smaller and the kernel size is bigger for the first convolutional layers, while we have more filters and a smaller kernel for the subsequent ones. The idea is to learn higher level features first, and then capture small peculiarities. For the first architecture the stride on convolutional layers is kept at one; later, a higher value is tested.
Regarding the max pooling layers, they run a window over the convolution result to apply a max function and reduce dimensionality. A crucial parameter is the pool size. The first architecture keeps it at two, but later three is tested.
\subsubsection{Training set results}
We now present the results on the image dataset using the model introduced in the \emph{Stacking convolutions} paragraph. Stratified cross-validation with five folds is once again used, the number of epochs is fixed at 15 and we use an early stopper with patience equal to two, to stop training if no improvement is made on accuracy for two epochs. The principle of class weights, introduced with the previous MLP experiments, is reused this time. The optimizer is Adam, as it is built to speed up training on deep neural networks, with the learning rate set to 0.001. The batch size is fixed at 32. We can compare results with the previously tuned multilayer perceptrons, as the validation technique is the same.
\begin{center}
\begin{tabular}{ |l|r|r| }
\hline
Model & Mean accuracy & St. deviation \\
\hline
Tuned extended model& 0.6497 & 0.0431 \\
Tuned PCA model & 0.6270 & 0.0315 \\
Initial CNN & 0.5383 & 0.0395 \\
\hline
\end{tabular}
\end{center}
Results with the CNN are a big downgrade from the previously tested models, but we need to consider the fact that those have their hyperparameters tuned.
\subsubsection{Hyperparameter tuning}
Training a CNN is really expensive compared to the MLP networks. Performing a full random search is infeasible with the resources we have; therefore, to get an idea of the impact of the network parameters on accuracy, we run cross-validation on the following models, which, for simplicity, are given a name.
\begin{center}
\begin{tabular}{ |c|r|r|r|l| }
\hline
Name & Filters & C. strides & P. strides & Dense structure \\
\hline
$C_0$ & 8 & $1 \times 1$ & $3 \times 3$ & 2000, 800, 200, 60 \\
$C_1$ & 8 & $2 \times 2$ & $2 \times 2$ & 300, 150, 80, 25 \\
$C_2$ & 16 & $1 \times 1$ & $3 \times 3$ & 5000, 2000, 800, 200, 60 \\
$C_3$ & 16 & $2 \times 2$ & $2 \times 2$ & 600, 200, 80, 25 \\
\hline
\end{tabular}
\end{center}
The filter quantity represents the number of filters of the first convolutions; the subsequent ones have double that quantity. The kernel size is fixed at $5 \times 5$ and $3 \times 3$ for the first two and last two convolutional layers respectively. The \emph{C. strides} values are the strides of the convolutions: a value of one does not reduce dimensionality, while a value of two reduces the output dimensionality by half.\\
The \emph{P. strides} are the strides of the pooling layers. As before, a value of two reduces dimensionality by half and a value of three by two thirds.\\
Lastly, the \emph{dense structure} sums up the architecture of the last layers. For instance, the model $C_1$ has four consecutive densely connected layers, with 300, 150, 80 and 25 neurons respectively, followed by a final output layer of 10 neurons.
A different learning rate is also tested for the Adam optimizer: runs with 0.001 and 0.003 are performed. Also, two different batch sizes are tested, 32 and 64. The rest of the network is the same as the initial CNN, thus the activation is always a Relu, apart from the Softmax on the output layer. Trying all combinations of learning rate and batch size is too expensive, so we first try to increase the learning rate with a batch size of 32; then, with the best learning rate, we try to increase the batch size to 64.
\paragraph{Cross-validation results}
The following are the cross-validation results, again with five-fold stratified cross-validation and class weights for training. We can see that the $C_3$ model is the best performer, even if the differences among models are not really big.
\begin{center}
\begin{tabular}{ |c|r|r|r|r| }
\hline
\multirow{2}{*}{Model name} & \multicolumn{2}{c|}{0.001 l.r.} & \multicolumn{2}{c|}{0.003 l.r.} \\
\cline{2-5}
& M. accuracy & St. deviation & M. accuracy & St. deviation \\
\hline
$C_0$ & 0.5383 & 0.0395 & 0.5150 & 0.0253 \\
$C_1$ & 0.5347 & 0.0480 & 0.5223 & 0.0480 \\
$C_2$ & 0.5285 & 0.0325 & 0.3909 & 0.1455 \\
$C_3$ & 0.5412 & 0.0474 & 0.5341 & 0.0451 \\
\hline
\end{tabular}
\end{center}
The table shows that increasing the learning rate always leads to worse results, thus we now keep the learning rate at 0.001 and try a batch size of 64.
\begin{center}
\begin{tabular}{ |c|r|r|r|r| }
\hline
\multirow{2}{*}{Model name} & \multicolumn{2}{c|}{32 batch size} & \multicolumn{2}{c|}{64 batch size} \\
\cline{2-5}
& M. accuracy & St. deviation & M. accuracy & St. deviation \\
\hline
$C_0$ & 0.5383 & 0.0395 & 0.5272 & 0.0446 \\
$C_1$ & 0.5347 & 0.0480 & 0.5096 & 0.0403 \\
$C_2$ & 0.5285 & 0.0325 & 0.5325 & 0.0512 \\
$C_3$ & 0.5412 & 0.0474 & 0.5212 & 0.0427 \\
\hline
\end{tabular}
\end{center}
Again, there is no improvement after increasing the batch size, apart from the $C_2$ architecture. The best model found, and the one that is measured on the test set, is the $C_3$ architecture with a 0.001 learning rate for the Adam optimizer and a batch size of 32.
\paragraph{Overfitting and underfitting}
The cross-validation results give a hint about overfitting and underfitting. Indeed, looking at the number of trainable parameters reported by the Keras summary function, we have the following results.
\begin{center}
\begin{tabular}{ |c|r| }
\hline
Model name & Trainable parameters\\
\hline
$C_0$ & 13011950 \\
$C_1$ & 256095 \\
$C_2$ & 67957294 \\
$C_3$ & 923789 \\
\hline
\end{tabular}
\end{center}
The numbers might indicate that the $C_1$ model is underfitting, as its number of parameters does not give the model the complexity required to capture the dataset, while the $C_2$ and initial models are overfitting, as they have too many parameters and thus learn too much from the training set. $C_3$ seems to be at a sweet spot between the two, which is confirmed by its higher accuracy.
\paragraph{Final remarks on CNN}
Unfortunately, the high cost of training a CNN made it impossible to run a random search starting from the initial network structure; nonetheless, a total of twelve models are tested. The results are worse than the ones obtained with the MLP models.
\newpage
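For reference, the following is a minimal Keras sketch of the selected $C_3$ configuration as described above (two stacked $5 \times 5$ convolutions with 16 filters, two stacked $3 \times 3$ convolutions with 32 filters, $2 \times 2$ convolution strides, $2 \times 2$ pooling, and the 600--200--80--25 dense structure). The input shape, padding and loss function are not restated in this chapter, so they are assumptions here.

\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

def build_c3(input_shape=(128, 128, 3), n_classes=10):
    # input_shape is an assumption; set it to the dataset's image size.
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(16, 5, strides=2, activation="relu"),
        layers.Conv2D(16, 5, strides=2, activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Conv2D(32, 3, strides=2, activation="relu"),
        layers.Conv2D(32, 3, strides=2, activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(600, activation="relu"),
        layers.Dense(200, activation="relu"),
        layers.Dense(80, activation="relu"),
        layers.Dense(25, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}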
{ "alphanum_fraction": 0.7018900699, "avg_line_length": 44.8589981447, "ext": "tex", "hexsha": "3e4b83e1b8b97b195d8df5e056e0a546c8f65f9e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-02-08T22:16:39.000Z", "max_forks_repo_forks_event_min_datetime": "2022-02-08T22:16:39.000Z", "max_forks_repo_head_hexsha": "9516e9a4f6ed3af2c5847c13321f8c0624ff827d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tomfran/urban-sound-classification", "max_forks_repo_path": "report/chapters/model.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "9516e9a4f6ed3af2c5847c13321f8c0624ff827d", "max_issues_repo_issues_event_max_datetime": "2021-11-17T10:16:19.000Z", "max_issues_repo_issues_event_min_datetime": "2021-11-17T10:16:19.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tomfran/urban-sound-classification", "max_issues_repo_path": "report/chapters/model.tex", "max_line_length": 133, "max_stars_count": 1, "max_stars_repo_head_hexsha": "9516e9a4f6ed3af2c5847c13321f8c0624ff827d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tomfran/urban-sound-classification", "max_stars_repo_path": "report/chapters/model.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-08T22:33:40.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-08T22:33:40.000Z", "num_tokens": 6462, "size": 24179 }
%----------------------------------------------------------------------------------------
%	LAGRANGIAN POINTS
%----------------------------------------------------------------------------------------

\section{Computation of the Lagrangian points}

\subsection{Introduction}

Given a set of forces which apply to an object in the plane, our goal is to compute the equilibrium positions of this object. We will restrict ourselves to three kinds of interactions: elastic, centrifugal and gravitational forces. We will need the previously written Newton-Raphson method, which will be applied to the resultant force. As a result, we will obtain the roots of this function, corresponding to the equilibrium positions.

\subsection{Forces}

Before explaining and analysing the method, we need to express both the general form of the forces and their Jacobian matrices. Each force will be parameterized by a scalar -- the intensity -- and by its origin. We will start with the elastic force, which represents the action of a spring on an object. In a one-dimensional space, this force is expressed by the following formula:
\[\vec{f}_e = -k (l - l_0)\vec{x},\]
with $l$ the spring length and $l_0$ its free length. In the plane $(O, \vec{x}, \vec{y})$, this force can be reduced using orthogonal projections to the following form:
\[f_e:\begin{pmatrix}x\\y\end{pmatrix}\rightarrow \begin{pmatrix}\frac{-k(x-x_0)}{\sqrt[]{(x-x_0)^2+(y-y_0)^2}}\\\frac{-k(y-y_0)}{\sqrt[]{(x-x_0)^2+(y-y_0)^2}}\end{pmatrix},\]
where $\begin{pmatrix}x_0\\y_0\end{pmatrix}$ is the coordinate of the origin and $k$ the intensity. We can now express the Jacobian matrix of $f_e$:
\[H_{f_e}:\begin{pmatrix}x\\y\end{pmatrix}\rightarrow \begin{pmatrix}
-\frac{k(y_0-y)^2}{((x_0-x)^2+(y_0-y)^2)^\frac{3}{2}} & \frac{k(x_0-x)(y_0-y)}{((x_0-x)^2+(y_0-y)^2)^\frac{3}{2}}\\
\frac{k(x_0-x)(y_0-y)}{((x_0-x)^2+(y_0-y)^2)^\frac{3}{2}} & -\frac{k(x_0-x)^2}{((x_0-x)^2+(y_0-y)^2)^\frac{3}{2}}
\end{pmatrix}\]
Since the force expressions have already been given in the subject, we will only provide the forms of the Jacobian matrices for the centrifugal and gravitational forces:
\[H_{f_c}:\begin{pmatrix}x\\y\end{pmatrix}\rightarrow \begin{pmatrix}
k & 0\\
0 & k
\end{pmatrix}\]
\[H_{f_g}:\begin{pmatrix}x\\y\end{pmatrix}\rightarrow \begin{pmatrix}
\frac{k(2x_0^2 - y_0^2 - 4x_0x + 2x^2 + 2y_0y - y^2)}{((x_0 - x)^2 + (y_0 - y)^2)^\frac{5}{2}} & \frac{3k(x_0 - x)(y_0 - y)}{((x_0 - x)^2 + (y_0 - y)^2)^\frac{5}{2}}\\
\frac{3k(x_0 - x)(y_0 - y)}{((x_0 - x)^2 + (y_0 - y)^2)^\frac{5}{2}} & -\frac{k (x_0^2 - 2 y_0^2 - 2 x_0 x + x^2 + 4 y_0 y - 2 y^2)}{((x_0 - x)^2 + (y_0 - y)^2)^\frac{5}{2}}
\end{pmatrix}\]

\paragraph{Implementation} The implementation uses a functional programming approach, which means that we use functions that build and return another function as an expression. This allows us to be more versatile, by storing and using the force function itself rather than a general-purpose function, and by passing it as a parameter to the Newton-Raphson method, for example.

\paragraph{Tests} The tests check the validity of the result for a single force in particular cases. In particular, we check for the gravitational force that the norm is invariant under rotation around the origin. Moreover, these tests raised the issue of dividing by a norm equal to zero. We handle this case by setting the value to the asymptotic value of the function at this point (either infinity or a constant).

% Bring examples...
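To illustrate the functional style described above, the following Python sketch builds each force as a closure over its intensity and origin, and combines them into the resultant passed to Newton-Raphson. The function names are illustrative, and the gravitational expression assumes the usual inverse-square attraction towards the origin (which is consistent with the Jacobian given above).

\begin{verbatim}
import numpy as np

def elastic(k, origin):
    x0, y0 = origin
    def f(p):
        d = np.array([p[0] - x0, p[1] - y0], dtype=float)
        n = np.linalg.norm(d)
        return -k * d / n if n > 0 else np.zeros(2)  # asymptotic value at 0
    return f

def centrifugal(k, origin):
    o = np.asarray(origin, dtype=float)
    return lambda p: k * (np.asarray(p, dtype=float) - o)

def gravitational(k, origin):
    o = np.asarray(origin, dtype=float)
    def f(p):
        d = o - np.asarray(p, dtype=float)
        n = np.linalg.norm(d)
        return k * d / n**3 if n > 0 else np.zeros(2)
    return f

def resultant(forces):
    # the equilibrium positions are the roots of this function
    return lambda p: sum(f(p) for f in forces)
\end{verbatim}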
\subsection{Results}

\subsubsection{First approach}

We consider the following case:
\begin{itemize}
\item Two gravitational forces with coefficients $1$ (resp. $0.01$) and originating from $\begin{pmatrix}0\\0\end{pmatrix}$ (resp. $\begin{pmatrix}1\\0\end{pmatrix}$).
\item A centrifugal force with coefficient $1$ at the barycenter of the two masses, i.e., at $\begin{pmatrix}\frac{0.01}{1.01}\\0\end{pmatrix}$.
\end{itemize}
We can easily plot the situation using the norm of the resultant force, which gives Figure~\vref{fig:resultant_force_norm}. Seen from above, the graph is more explicit, as shown in Figure~\vref{fig:equilibrium}. The situation is similar to the Earth's rotation around the Sun: the centrifugal force and the gravitational attraction of the Sun are applied to the Earth. The centrifugal force applied at the barycenter of the two bodies is characteristic of a two-body interaction.

\begin{figure}[ht]
	\centering
	\subfloat[3D shape]{\includegraphics[width=.6\columnwidth]{resultant_force_norm}\label{fig:resultant_force_norm}}
	\subfloat[Equilibrium]{\includegraphics[width=.45\columnwidth]{equilibrium}\label{fig:equilibrium}}
	\caption[Resultant force norm]{Resultant force norm}
\end{figure}

\subsubsection{Lagrangian points}

The main issue is that the Newton-Raphson method can only compute one root at a time, while, at first sight, there seems to be an infinite number of roots. We can call the algorithm by specifying a step over the whole grid. We will see that for this kind of interaction, the equilibrium points are particular and well known as the Lagrangian points. It is not obvious from the last example that only five points correspond to an equilibrium situation. However, we can modify the data a bit to make this phenomenon clearer, and spread the values with a logarithm, as in Figure~\vref{fig:lagrange_points}.

\begin{figure}[ht]
	\centering
	\includegraphics[width=0.8\columnwidth]{lagrange_points}
	\caption[Lagrange points]{Lagrangian points.}
	\label{fig:lagrange_points}
\end{figure}

As the Newton-Raphson algorithm follows the slope of the curve, we can call it on equally spaced points of the grid, separated by a fixed step that we denote $\tau$. The algorithm must work in a closed domain. Moreover, at each iteration, the algorithm will output a position corresponding to a root. Assuming that we perform enough iterations to find at least all the roots, we must be able to determine a maximal precision on a root coordinate. The minimal distance between two distinct roots, denoted $\varepsilon$, will be another parameter of this algorithm.
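A minimal Python sketch of this grid-seeded search is given below; the names, the convergence tolerance and the maximum number of iterations are illustrative choices, and the Jacobian of the resultant is assumed to be available (for instance as the sum of the Jacobians given earlier).

\begin{verbatim}
import numpy as np

def find_roots(resultant, jacobian, xlim, ylim, tau, eps,
               tol=1e-10, max_iter=50):
    # Scan the domain with step tau, run Newton-Raphson from each seed
    # and keep only roots that are at least eps apart from each other.
    roots = []
    for x in np.arange(xlim[0], xlim[1], tau):
        for y in np.arange(ylim[0], ylim[1], tau):
            p = np.array([x, y], dtype=float)
            converged = False
            for _ in range(max_iter):
                F = resultant(p)
                if np.linalg.norm(F) < tol:
                    converged = True
                    break
                try:
                    p = p - np.linalg.solve(jacobian(p), F)
                except np.linalg.LinAlgError:
                    break  # singular Jacobian: give up on this seed
            if (converged
                    and xlim[0] <= p[0] <= xlim[1]
                    and ylim[0] <= p[1] <= ylim[1]
                    and all(np.linalg.norm(p - r) >= eps for r in roots)):
                roots.append(p)
    return roots
\end{verbatim}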
{ "alphanum_fraction": 0.7235099338, "avg_line_length": 81.6216216216, "ext": "tex", "hexsha": "7cf583dfdb6d61cc3cfefe1de89f008c6cf78662", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1bdea5c70a5bb8fd589f95e73ed476b90693fcf0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gdzx/numerical-algorithms", "max_forks_repo_path": "4-non-linear-systems-newton-raphson/doc/lagrangian_points.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1bdea5c70a5bb8fd589f95e73ed476b90693fcf0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gdzx/numerical-algorithms", "max_issues_repo_path": "4-non-linear-systems-newton-raphson/doc/lagrangian_points.tex", "max_line_length": 560, "max_stars_count": null, "max_stars_repo_head_hexsha": "1bdea5c70a5bb8fd589f95e73ed476b90693fcf0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gdzx/numerical-algorithms", "max_stars_repo_path": "4-non-linear-systems-newton-raphson/doc/lagrangian_points.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1710, "size": 6040 }
\chapter*{Abstract}

Write your abstract here and make sure to keep it below 350 words. You can use the program \textbf{texcount} to calculate the total number of words in your document and to give you a word count specifically for this abstract. Just google it and figure out how to use it.
{ "alphanum_fraction": 0.7862068966, "avg_line_length": 41.4285714286, "ext": "tex", "hexsha": "299e80c2f1a340f107b20075093df6ac4185a150", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "547fec9d24f9d81d31c8628ceabe499acb022443", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TJLW/UARK-Thesis-Disertation-LaTex-Template", "max_forks_repo_path": "frontmatter/abstract.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "547fec9d24f9d81d31c8628ceabe499acb022443", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "TJLW/UARK-Thesis-Disertation-LaTex-Template", "max_issues_repo_path": "frontmatter/abstract.tex", "max_line_length": 79, "max_stars_count": 1, "max_stars_repo_head_hexsha": "547fec9d24f9d81d31c8628ceabe499acb022443", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TJLW/UARK-Thesis-Disertation-LaTex-Template", "max_stars_repo_path": "frontmatter/abstract.tex", "max_stars_repo_stars_event_max_datetime": "2019-03-26T18:47:43.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-26T18:47:43.000Z", "num_tokens": 69, "size": 290 }
\documentclass{article}
\title{\underline{CREATING EQUATIONS}}
\usepackage{amsmath}

\begin{document}
	\maketitle
	\section*{QUADRATIC EQUATION}
	\begin{equation*}
	    f(x) = ax^2 + bx + c
	\end{equation*}
\end{document}
{ "alphanum_fraction": 0.6738197425, "avg_line_length": 23.3, "ext": "tex", "hexsha": "70dbf64f775e408e0bf5a921be0da7ff53052cea", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9a20f57f5164ce75a0ba7f84f3f3e766d093f16b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Nnadiukwu-Miracle/miracleCSC101", "max_forks_repo_path": "CSC 101 - miracle - LaTEX/practice-8 equations.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9a20f57f5164ce75a0ba7f84f3f3e766d093f16b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Nnadiukwu-Miracle/miracleCSC101", "max_issues_repo_path": "CSC 101 - miracle - LaTEX/practice-8 equations.tex", "max_line_length": 39, "max_stars_count": null, "max_stars_repo_head_hexsha": "9a20f57f5164ce75a0ba7f84f3f3e766d093f16b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Nnadiukwu-Miracle/miracleCSC101", "max_stars_repo_path": "CSC 101 - miracle - LaTEX/practice-8 equations.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 92, "size": 233 }
\chapter{Containers}

So far, you needed to define all global objects at the top of your application file. This is OK for a little exercise, but when your project grows in size, this becomes a problem. You also have to know upfront how many objects you need. Even for a little game like asteroids, it is impossible to know how many rocks there will be on the screen at all times.

When you need several objects of the same type, you can use a container. When you declare a container for a certain object type, you can add objects to this container during the course of the application. An easy-to-use container is \eeClass{Memc}. The declaration of a container requires that you provide the type of objects it will contain. When you need a container for floats, you would declare it as a \eeClass{Memc<float>}. A container for rectangles would be a \eeClass{Memc<Rect>}. Look at this code for an example of a container with circles:

\begin{code}
// Declare a container for circles
Memc<Circle> circles;

void InitPre() {
  EE_INIT();
}

bool Init() {
  // add 10 circles to this container
  for(int i = 0; i < 10; i++) {
    // The method New() adds a new circle to the container.
    // At the same time the Circle method set() is used to
    // assign a radius and a position.
    circles.New().set(0.1, RandomF(-D.w(), D.w())
                         , RandomF(-D.h(), D.h())
    );
  }
  return true;
}

void Shut() {}

bool Update() {
  if(Kb.bp(KB_ESC)) return false;
  return true;
}

void Draw() {
  D.clear(BLACK);
  // Go over all circles in the container and
  // draw them on the screen.
  for(int i = 0; i < circles.elms(); i++) {
    circles[i].draw(RED);
  }
}
\end{code}

\begin{exercise}
What would happen if, by mistake, you place the code to generate circles in Update instead of Init?
\end{exercise}

\begin{exercise}
Put this code back in Init, but add code to the Update function: every time you press the space bar, an extra circle should be added to the container.
\end{exercise}

\begin{exercise}
Show an image on the screen instead of a circle. \textit{(Too hard? Start with a rectangle!)}
\end{exercise}

\section{New()}

The method \eeFunc{New()} creates a new element at the end of the container. At the same time, it returns a reference to this new element, which is why we can use the \eeFunc{set()} method of circle in the example above. But suppose you need to use two methods of the newly created object? You could try something like this:

\begin{code}
for(int i = 0; i < 10; i++) {
  circles.New().set(0.1, RandomF(-D.w(), D.w()), RandomF(-D.h(), D.h()));
  circles.New().extend(-0.05);
}
\end{code}

\ldots but it won't work. Instead you are creating two new circles at every iteration. The method \eeFunc{set} is called on the first circle, the method \eeFunc{extend} on the second. The solution is simple: pass the result of \eeFunc{New()} to a temporary variable. The type of this variable must be a reference to a circle.

\begin{code}
for(int i = 0; i < 10; i++) {
  Circle & c = circles.New();
  c.set(0.1, RandomF(-D.w(), D.w()), RandomF(-D.h(), D.h()));
  c.extend(-0.05);
}
\end{code}

\begin{note}
If you don't know what a reference is, don't worry. We'll talk about it later. For now, remember that you need to put an ampersand (\&) between the type and the name.
\end{note}

\section{Using objects}

Very often, you need to iterate over all elements in a container, for example when you draw them all on the screen. It would be very annoying if you had to remember somehow exactly how many elements a container contains. Fortunately, you do not have to.
Containers provide a method \eeFunc{elms()} which returns the current number of elements. And to access individual elements you can use square brackets, just like with primitive C arrays.

\begin{code}
for(int i = 0; i < circles.elms(); i++) {
  circles[i].draw(RED);
}
\end{code}

Because you will need an iteration like this very, very often, Esenthel provides a `shortcut'. A macro \eeFunc{REPA} exists to replace the whole for-loop declaration with one instruction:

\begin{code}
REPA(circles) {
  circles[i].draw(RED);
}
\end{code}

Remember this as `repeat all'. \textit{(Or don't remember it at all. Plain for-loops will always work just as well.)} And you can do more with this than just draw every element on the screen. Take a look at the next example and try to figure out what it does.

\begin{code}
REPA(circles) {
  circles[i].pos.y += Time.d();
  if(circles[i].pos.y > D.h()) {
    circles[i].pos.y -= (2*D.h() + RandomF(1));
  }
}
\end{code}

\begin{exercise}
\begin{itemize}
\item Test the code above in an application. What function would you place this code in?
\item Add a function to add an extra circle every time you hit the space bar.
\item Instead of a fixed radius, use a random value between 0.01 and 0.1.
\item Draw only the perimeter of the circle, in white, on a blue background.
\item If there are any people nearby, shout out loud what this looks like.
\end{itemize}
\end{exercise}

\section{Adding Objects}

You will add objects to a container quite a lot. This might happen in the Init function as well as the Update function. Next are a few examples to get you started, but there are a lot of different ways to add objects. It is up to you to figure out what the best approach is in your application.

\subsection{During Init}

Ten circles at random positions:

\begin{code}
for(int i = 0; i < 10; i++) {
  Circle & c = circles.New();
  c.set(0.1, RandomF(-D.w(), D.w()), RandomF(-D.h(), D.h()));
}
\end{code}

Circles from the left to the right side of the screen:

\begin{code}
for(float i = -D.w(); i < D.w(); i += 0.2) {
  circles.New().set(0.1, i, 0);
}
\end{code}

Squares placed evenly over the screen:

\begin{code}
for(float i = -D.w(); i < D.w(); i += 0.2) {
  for(float j = -D.h(); j < D.h(); j += 0.2) {
    rects.New().set(i - 0.05, j - 0.05, i + 0.05, j + 0.05);
  }
}
\end{code}

\begin{exercise}
Test all of the examples above and make sure you understand every one of them. Always add code to display all elements on the screen.
\end{exercise}

\subsection{During Update}

Respond to keyboard input:

\begin{code}
if(Kb.bp(KB_SPACE)) {
  circles.New().set( RandomF(0.05, 0.2)
                   , RandomF(-D.w(), D.w())
                   , RandomF(-D.h(), D.h())
  );
}
\end{code}

Use the mouse position:

\begin{code}
if(Ms.bp(0)) {
  circles.New().set(0.05, Ms.pos());
}
\end{code}

With a timer:

\begin{code}
Flt timer = 3; // Put this line on top of the file.

// These lines belong in Update()
if(timer > 0) timer -= Time.d();
else {
  timer = 3;
  circles.New().set( RandomF(0.05, 0.2)
                   , RandomF(-D.w(), D.w())
                   , RandomF(-D.h(), D.h())
  );
}
\end{code}

\begin{exercise}
Test all of the examples above and make sure you understand every one of them. Always add code to display all elements on the screen.
\end{exercise}

\section{Removing Objects}

Of course you also want to remove objects from a container, which is not that hard:

\begin{code}
Memc<Vec2> dots;
// ... add a lot of dots
dots.remove(0); // remove the first dot
\end{code}

With the method \eeFunc{remove} and the index of the element as an argument, you delete an object in a container. Be careful though.
Very often you will want to remove an element while iterating over a container. It is a common beginner mistake to alter an object after you've deleted it:

\begin{code}
for(int i = 0; i < dots.elms(); i++) {
  if(dots[i].y < -D.h()) {
    dots.remove(i);
  }
  dots[i].y -= Time.d();
}
\end{code}

In the example above, all dots are moved down at every update. When a dot arrives at the bottom of the screen, it will be removed from the container. After removing a dot, it is not the current dot that is moved down, but the next one in the container. This is not a big problem, unless this was actually the last dot in the container, in which case you try to move down an object past the end of the container. The result will be a program crash, your computer might explode and probably a kitten will die somewhere.

To prevent this from happening it is a good rule to put the remove method as the last statement in the loop:

\begin{code}
for(int i = 0; i < dots.elms(); i++) {
  dots[i].y -= Time.d();
  if(dots[i].y < -D.h()) dots.remove(i);
}
\end{code}

Things start to be a bit more complicated when you combine more than one container. In the next example we have a container for dots and a container for circles. The code tries to verify if a dot hits a circle. If this is the case, both the circle and the dot must be removed from their containers. To do this, we have to check every dot against every circle.

\begin{code}
for(int i = 0; i < dots.elms(); i++) {
  for(int j = 0; j < circles.elms(); j++) {
    if(Cuts(dots[i], circles[j])) {
      dots.remove(i);
      circles.remove(j);
      // At this point there is one less dot in the container, but
      // the remaining circles will still be compared against the
      // current dot. If we are at the last dot, it will no longer
      // be valid. To prevent a crash, we add a break statement to
      // go back to the outer for loop:
      break;
    }
  }
}
\end{code}

And if you'd like to clear all container elements at once:

\begin{code}
dots.clear();
\end{code}

\section{A little Game}

\begin{enumerate}
\item Create a triangle at the bottom of the screen. This triangle can be moved back and forth with the arrow keys.
\item Add a container for the class \eeClass{Vec2}. Every time you press the space bar, you add an element on the location of the triangle.
\item Increase the y value of every container element in the Update function. (Use \eeClass{Time.d()}!) If an element reaches the top of the screen, remove it from the container.
\item Draw all elements on the screen in the \eeFunc{Draw()} function.
\item Create a second container for circles. Every second a circle must be added somewhere at the top of the screen.
\item Move all circles down in the \eeFunc{Update()} function.
\item Show all circles in the \eeFunc{Draw()} function.
\item When a circle hits a \eeClass{Vec2} from the other container, both must be removed.
\item When a circle hits the triangle, `Game Over' must be shown on the screen.
\end{enumerate}

You could go even further with this game. Don't create new circles after the game is finished, and disable movement and shooting. Circles might move faster the longer you play, a score can be shown, or you might give the player more than one life. And instead of triangles and circles, images might be used. Have fun!
{ "alphanum_fraction": 0.6963296724, "avg_line_length": 37.6431095406, "ext": "tex", "hexsha": "57874e98105c5507b75a57d56e77505255879d76", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2522fd91dfba1f93fd623eb0b50e55d560d6c803", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "yvanvds/EsenthelCourse", "max_forks_repo_path": "course/en/basics/containers.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2522fd91dfba1f93fd623eb0b50e55d560d6c803", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "yvanvds/EsenthelCourse", "max_issues_repo_path": "course/en/basics/containers.tex", "max_line_length": 551, "max_stars_count": null, "max_stars_repo_head_hexsha": "2522fd91dfba1f93fd623eb0b50e55d560d6c803", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "yvanvds/EsenthelCourse", "max_stars_repo_path": "course/en/basics/containers.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2832, "size": 10653 }
\part{Reference documentation}

\section{Overview}
\label{sec:mskref}

This part contains the reference documentation for how parameters need to
be formatted as well as for the output of the simulator.
It does not explain how to access the simulator from Java code.
One needs to refer to Section~\ref{sec:mskusejava} and the
ContactCenters API documentation for this.

Section~\ref{sec:mskxml} describes how XML is used by the simulator. It provides
a brief overview of XML as well as the main data structures specific to our
parameter file format.
We also give examples on how the HTML documentation for specific parameters
of the simulator can be retrieved.
The following subsections give the available performance measures,
arrival processes, routing and dialing policies, which are not listed in the
HTML documentation for parameters.
% Types of performance measures, arrival processes,
% dialer's, and router's policies are represented as strings in the XML
% documents.
% These strings are not constrained by the XML
% schema representing parameter files, because plug-ins may
% eventually provide additional strings.
% As a result, the authorized values of these strings are not given in
% the HTML documentation of the schemas.
% The following sections therefore give the builtin
% types of performance measures,
% arrival processes,
% dialer's, and router's policies.

%Section~\ref{sec:mskccparams} details the elements and attributes used
%to describe a call center to be simulated.
%Section~\ref{sec:msksimparams} provides a detailed description of the
%parameters used to perform experiments.
%For each XML element described in these two sections, the
%its name,
%acceptable attributes, and allowed
%nested contents are specified and defined. Required elements and
%attributes are identified. If an element or attribute is not
%explicitly marked as required, it is optional and a default value or
%behavior is specified in the documentation.

Section~\ref{sec:mskexp} explains in more detail the two supported methods of
experiment. In particular, it contains information on how sequential sampling
and batch means work in the simulator.
Section~\ref{sec:mskoutput} describes the output produced by the simulator.
It describes the contents and format of any report produced by the simulator,
and how performance measures are regrouped into matrices.
Every supported type of performance measure is also presented in detail.
{ "alphanum_fraction": 0.8066366243, "avg_line_length": 43.5892857143, "ext": "tex", "hexsha": "133197b55e9dbaafab89295fbeb842957709c54b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f5ddb4a0a4b30dbf436ac36e6d97facce2f3576d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "umontreal-simul/contactcenters", "max_forks_repo_path": "doc/msk/mskref.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f5ddb4a0a4b30dbf436ac36e6d97facce2f3576d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "umontreal-simul/contactcenters", "max_issues_repo_path": "doc/msk/mskref.tex", "max_line_length": 71, "max_stars_count": null, "max_stars_repo_head_hexsha": "f5ddb4a0a4b30dbf436ac36e6d97facce2f3576d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "umontreal-simul/contactcenters", "max_stars_repo_path": "doc/msk/mskref.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 532, "size": 2441 }
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} % Main maths packages \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathrsfs} \begin{document} \section*{Matrices} \subsection*{Basic matrix} \[ \begin{matrix} a_{11} & \cdots & a_{1k}\\ \vdots & \ddots & \vdots\\ a_{k1} & \cdots & a_{kk} \end{matrix} \] \subsection*{Matrix with braces} \[ \begin{pmatrix} a_{11} & \cdots & a_{1k}\\ \vdots & \ddots & \vdots\\ a_{k1} & \cdots & a_{kk} \end{pmatrix} \] A lot of environments exist with different brackets. \[ \begin{vmatrix} a_{11} & \cdots & a_{1k}\\ \vdots & \ddots & \vdots\\ a_{k1} & \cdots & a_{kk} \end{vmatrix} \] \[ \begin{Vmatrix} a_{11} & \cdots & a_{1k}\\ \vdots & \ddots & \vdots\\ a_{k1} & \cdots & a_{kk} \end{Vmatrix} \] \[ \begin{bmatrix} a_{11} & \cdots & a_{1k}\\ \vdots & \ddots & \vdots\\ a_{k1} & \cdots & a_{kk} \end{bmatrix} \] \[ \begin{Bmatrix} a_{11} & \cdots & a_{1k}\\ \vdots & \ddots & \vdots\\ a_{k1} & \cdots & a_{kk} \end{Bmatrix} \] \end{document}
{ "alphanum_fraction": 0.6074498567, "avg_line_length": 15.1739130435, "ext": "tex", "hexsha": "24ca9f003ff7853a391661392ef60c30047401d6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ZenLulz/LatexCompendium", "max_forks_repo_path": "compendium/mathematics/matrices.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ZenLulz/LatexCompendium", "max_issues_repo_path": "compendium/mathematics/matrices.tex", "max_line_length": 52, "max_stars_count": 2, "max_stars_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ZenLulz/LatexCompendium", "max_stars_repo_path": "compendium/mathematics/matrices.tex", "max_stars_repo_stars_event_max_datetime": "2019-09-23T20:16:19.000Z", "max_stars_repo_stars_event_min_datetime": "2016-07-30T21:43:55.000Z", "num_tokens": 447, "size": 1047 }
\subsection{Wave functions}

For a vector expressed in the eigenvector basis of a Hermitian operator, there is one component per eigenvector. The wave function is the function that maps the $i$th eigenvector to the $i$th component.
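In symbols (the basis $\{e_i\}$ and components $\psi_i$ are notation introduced here for illustration): if $\{e_i\}$ is an orthonormal basis of eigenvectors of a Hermitian operator, a vector $v$ decomposes as
\[ v = \sum_i \psi_i e_i, \qquad \psi_i = \langle e_i, v \rangle, \]
and the wave function is the assignment $i \mapsto \psi_i$ (or $x \mapsto \psi(x)$ when the spectrum is continuous).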
{ "alphanum_fraction": 0.7972972973, "avg_line_length": 24.6666666667, "ext": "tex", "hexsha": "b76e52e136962838d4769376fd1d75e6d92ac669", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/geometry/functionalAnalysis/06-02-wave_functions.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/geometry/functionalAnalysis/06-02-wave_functions.tex", "max_line_length": 116, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/geometry/functionalAnalysis/06-02-wave_functions.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 33, "size": 148 }
\input{coredef.tex}

%opening
\def\ntitle{Principles of Quantum Mechanics}
\def\nauthor{mostanes}
\def\npart{II}
\def\nterm{Michaelmas}
\def\nyear{2018}
\def\nlecturer{Skinner}

\renewcommand*{\H}{\mathcal{H}}

\begin{document}
\mktitlepage

\newpage

\setcounter{section}{-1}
\section{Preface}
This course draws heavily from material in IB, with references to Linear Algebra, Analysis II and Methods, as well as some references to IB Quantum Mechanics. Note, however, that this course is likely to prove many of the intuitive notions from IB QM, so it should be accessible even to those who were left baffled by IB QM.\\
The author of these notes will make generous references to IB material without restating it, unlike the course lecturer. References to II material will also be made where appropriate.

\newpage

\section{Introduction}
\subsection{Comparison of Classical and Quantum Mechanics}
\subsubsection{Classical Mechanics}
Classical mechanics is governed by Newton's laws, which are 2nd order differential equations in the variables $\vec{x}$, $\vec{p}$. By combining the 2 variables as in classical dynamics, we obtain the \textbf{phase space}, which in Newtonian dynamics is $\R^{2n}$, with the particular case $n=3$ being our universe.\\
In classical mechanics, the observables are simple quantities, represented by functions $\R^{2n} \rightarrow \R$.
\subsubsection{Quantum Mechanics}
Particles are instead represented by points in a Hilbert space (the equivalent of the phase space).\\
Observables are represented by linear operators $\H \rightarrow \H$.

\subsection{Hilbert spaces}
\begin{remark}
There is an entire chapter of Linear Analysis dedicated to Hilbert spaces.
\end{remark}
\begin{definition}
A Hilbert space $\H$ is a vector space (over $\C$) with a complete inner product $(\cdot,\cdot) : \H \times \H \rightarrow \C$.
\end{definition}
Therefore Hilbert spaces satisfy the usual vector space and complex inner product properties, and any Cauchy sequence converges to a vector within the space under the norm induced by the inner product.
\subsubsection{Examples}
Every finite dimensional Hilbert space (of dimension $n$) is isomorphic to $\C^n$ (see Linear Analysis).\\
A simple $\infty$-dimensional space is $l^2$, the space of infinite sequences converging under the 2-norm (see Analysis II).\\
Another example is $L^2$, the space of square-integrable functions (the integrals converging under the 2-norm; see Probability and Measure). Its inner product is defined analogously to the inner product of $l^2$ and the norm of $L^2$.

\subsection{Dual spaces}
The dual $\H^\star$ of a Hilbert space $\H$ is the space of linear operators $\H \rightarrow \C$. One way of obtaining such operators is by considering the inner product $(\chi,\cdot) : \H \rightarrow \C$.\\
By the Riesz representation theorem (see Linear Analysis), all elements in $\H^\star$ can be written in inner product form.

\subsection{Dirac notation}
Dirac invented the following notation, which is standard today in QM:\\
\begin{tabular}{ccc}
Elements of $\H$ & written $\ket{\cdot}$ & called `ket'\\
Elements of $\H^\star$ & written $\bra{\cdot}$ & called `bra'\\
Inner product & written $\bra{\cdot}\ket{\cdot}$ & called `braket'\\
\end{tabular}

\end{document}
{ "alphanum_fraction": 0.7614228764, "avg_line_length": 54.35, "ext": "tex", "hexsha": "9fdb45ea198a6622200fedf8d5b58a42e4e2ee33", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bf40a5250be7c15037a937e1521c5aa7e0cb0841", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mostanes/cam-maths-tripos", "max_forks_repo_path": "pqm.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bf40a5250be7c15037a937e1521c5aa7e0cb0841", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mostanes/cam-maths-tripos", "max_issues_repo_path": "pqm.tex", "max_line_length": 504, "max_stars_count": null, "max_stars_repo_head_hexsha": "bf40a5250be7c15037a937e1521c5aa7e0cb0841", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mostanes/cam-maths-tripos", "max_stars_repo_path": "pqm.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 835, "size": 3261 }
\subsection{Overview}\label{picoExecutable_picoOverview}
The {\bfseries PICO} executable is used by {\ttfamily sp} to solve linear programs and mixed-\/integer linear programs.

\subsection{Usage}\label{picoExecutable_picoUsage}
\begin{verb}
 PICO [options...] <input-file>
\end{verb}

\subsection{Options}\label{picoExecutable_picoOptions}
Documentation of {\bfseries PICO} options is available in the PICO User Manual, which can be obtained from http://software.sandia.gov/Acro/PICO.

\subsection{Description}\label{picoExecutable_picoDescription}
{\bfseries PICO} is a general-\/purpose solver for linear and integer programs. This command is not directly used by the user. PICO uses public-\/domain software components, and thus it can be used without licensing restrictions. The integer programming format used in SPOT is defined with the AMPL modeling language. PICO integrates the GLPK mathprog problem reader, which is compatible with a subset of the AMPL modeling language. This enables PICO to process an integer programming formulation in SPOT that can also be used with AMPL.

\subsection{Notes}\label{picoExecutable_picoNotes}
\begin{itemize}
\item On large-\/scale tests, we have noted that PICO's performance is often limited by the performance of the public-\/domain LP solvers that it employs. In some cases, we have noted that these solvers can be over 100 times slower than the state-\/of-\/the-\/art CPLEX LP solver.
\end{itemize}
{ "alphanum_fraction": 0.7984869326, "avg_line_length": 76.5263157895, "ext": "tex", "hexsha": "b2e9772f88f25e20c284cb7c611431b7c1720a90", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-12-05T18:11:43.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-24T19:04:14.000Z", "max_forks_repo_head_hexsha": "6b6b68e0e1b3dcc8023b453ab48a64f7fd740feb", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "USEPA/Water-Security-Toolkit", "max_forks_repo_path": "doc/wst/executables/picoExecutable.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6b6b68e0e1b3dcc8023b453ab48a64f7fd740feb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "USEPA/Water-Security-Toolkit", "max_issues_repo_path": "doc/wst/executables/picoExecutable.tex", "max_line_length": 410, "max_stars_count": 3, "max_stars_repo_head_hexsha": "6b6b68e0e1b3dcc8023b453ab48a64f7fd740feb", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "USEPA/Water-Security-Toolkit", "max_stars_repo_path": "doc/wst/executables/picoExecutable.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-05T18:11:40.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-10T18:04:14.000Z", "num_tokens": 348, "size": 1454 }
Figures~\ref{fig:transformationsA}, \ref{fig:transformationsB} and \ref{fig:transformationsC} show the transformation rules from the elements of a $\pi$-SCM meta-model into elements of the $\pi$-{\sc Pews} meta-model. There are two groups of rules: those that transform service composition elements of the $\pi$-SCM meta-model into elements of the $\pi$-{\sc Pews} meta-model, and those that transform rules grouped by policies into {\em A-policy} types.

\begin{figure}
\centering{
\includegraphics[width=0.96\textwidth]{figs/Mapping-1a.png}
}
\caption{ $\pi$-SCM to $\pi$-{\sc Pews} transformations (Services).}
\label{fig:transformationsA}
\end{figure}

\subsection{Transformation of $\pi$-SCM service composition elements into $\pi$-{\sc Pews} elements}

The transformation rules concerning service composition elements of $\pi$-SCM are shown in figure~\ref{fig:transformationsA}. Named actions of $\pi$-SCM (represented by {\sc\em Action} and {\sc\em Action:name}) are transformed into a named class {\sc Operation} with a corresponding name attribute {\sc Operation:name}. Named service activities, represented by the elements {\sc\em ServiceActivity} and {\sc\em ServiceActivity:name} of the $\pi$-SCM, are transformed into named operations of the $\pi$-{\sc Pews} model, represented by the elements {\sc CompositeOperation} and {\sc CompositeOperation:name}. Workflow definitions are translated to the usual control flow structures, using the sequential, parallel and choice combinators of PEWS.

\begin{example}
In the scenario ``To Publish Music'' of Example~\ref{ex:toPublicMusic}, the service activity {\sf PublishMusic} of the $\pi$-SC model calls two {\sf Activities} of type {\em UpdateMusic}, concerning the {\em Facebook} and {\em Twitter} business services respectively. The {\sf Composite Operation} named {\em PublishSong} of the $\pi$-{\sc Pews} model is obtained. This operation uses the parallel constructor and is written: {\sf PublishFacebook} $\parallel$ {\sf PublishTwitter}.
\end{example}

\subsection{Transformation of A-policies and rules into $\pi$-{\sc Pews}}

The {\em A-policies} defined for the elements of $\pi$-SCM are transformed into $\pi$-{\sc Pews} {\sc A-Policy} classes. These classes keep their names, as expressed in the source model. The transformation of \textit{rules} is guided by the event types associated with these rules. The $\pi$-SCM {\em A-policy} variables, given as {\sc\em $<$Variable:name, Variable:type$>$}, are transformed into elements of type {\sc Variable} with attributes {\sc name} and {\sc type} in the $\pi$-{\sc Pews} model. As shown in Figures~\ref{fig:transformationsB} and~\ref{fig:transformationsC}, for an event of type {\sc\em Pre} the corresponding transformed rule is of type {\sc Precondition}; for an event of type {\sc\em Post} the corresponding transformed rule is of type {\sc Postcondition}; finally, for an event of type {\sc\em TimeRestriction} the corresponding transformed rule is of type {\sc Time}. The condition inside a $\pi$-SCM rule ({\sc\em Rule:condition}) is transformed into a class {\sc\em Condition:expression}, where the attributes of the expression are transformed into elements of type {\sc Attribute}.
\begin{figure} \centering{ \includegraphics[width=0.96\textwidth]{figs/Mapping-1b.png} } \caption{ $\pi$-SCM to $\pi$-{\sc Pews} transformations (\textit{A-Policies}).} \label{fig:transformationsB} \end{figure} \begin{example} In the ``To Publish Music'' scenario of Example~\ref{ex:toPublicMusic}, the {\sf Policies} {\em OAuthPolicy} and {\em HTTPAuthPolicy} of the $\pi$-SCM model are transformed into {\em A-policies} of type {\sf Precondition} of the $\pi$-{\sc Pews} model. In both cases the events are of type {\sf ActivityPrepared}. These policies, as stated in the $\pi$-SCM model, are associated to {\sf Activities}. Their transformed counterparts are associated to operations {\em PublishFacebook} and {\em PublishTwitter}. \end{example} \begin{figure} \centering{ \includegraphics[width=0.96\textwidth]{figs/Mapping-1c.png} } \caption{ $\pi$-SCM to $\pi$-{\sc Pews} transformations (Rules).} \label{fig:transformationsC} \end{figure}
{ "alphanum_fraction": 0.7519305019, "avg_line_length": 69.0666666667, "ext": "tex", "hexsha": "b519e2dcb91ffbcb2e8258c8a52431e06947e1a7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mmusicante/PlacidoConfArtigos", "max_forks_repo_path": "JournalToSubmit2015/OldVersion/transformations.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mmusicante/PlacidoConfArtigos", "max_issues_repo_path": "JournalToSubmit2015/OldVersion/transformations.tex", "max_line_length": 394, "max_stars_count": null, "max_stars_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mmusicante/PlacidoConfArtigos", "max_stars_repo_path": "JournalToSubmit2015/OldVersion/transformations.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1096, "size": 4144 }
\documentclass[12pt]{article}
\usepackage{geometry}
\geometry{margin=1in}
\geometry{a4paper}

\usepackage{textcomp}
\usepackage{booktabs}
\usepackage{array}
\usepackage{paralist}
\usepackage{verbatim}
\usepackage{subfigure}
\usepackage{graphicx,caption}
\usepackage{placeins}
\usepackage{lipsum}
\usepackage{xcolor}
\usepackage{dcolumn}

\usepackage{sectsty}
\allsectionsfont{\sffamily\mdseries\upshape}

\usepackage{gensymb,amsmath,mathtools,amssymb}
\usepackage{flafter}
%\usepackage{parskip}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{tocbibind}
\usepackage[toc,page]{appendix}
\captionsetup{width=\linewidth}
\usepackage{bm}
\usepackage{lscape}

\graphicspath{{./figs/}}

\title{First Order Optimal Sizing of Ascent Vehicles}
\author{Devansh Agrawal}
%\date{}

\begin{document}
\maketitle

\section{Problem Description}

In designing an ascent vehicle, it is important to start with a design relatively close to the optimal solution. In this document, I optimize the thrust of a one-dimensional rocket to maximise the apogee altitude.

The parameters that are specified fall into the following categories:
\begin{itemize}
\item Environment parameters: $g, \rho(h)$
\item Design parameters: $m_0, m_p, c_d, A, c$\footnote{$m_0$ is the launch mass, $m_p$ is the propellant mass}
\item Control parameters: $F(t)$
\end{itemize}

For initial sizing, we make the following assumptions on the rocket dynamics:
\begin{itemize}
\item Earth is inertial, non-rotating, and the acceleration due to gravity is constant with altitude, $g = 9.81$~m/s$^2$
\item Earth's atmosphere follows an exponential relationship, $\rho(h) = \rho_0 e^{-\beta h}$, where for the earth, $\rho_0 = 1.225$~kg/m$^3$, $\beta = 1/8.5$~km$^{-1}$
\item The rocket's drag coefficient is constant, independent of the Mach number\footnote{In general, this is not a good assumption; however, as we will see later, the maximum Mach number of the rocket is to be kept below $M\sim0.7$. A numeric optimiser should be used later to refine the vehicle.}
\item The rocket's specific impulse is constant, with effective exhaust velocity $c = I_{sp} g_0$, and does not depend on altitude.
\item The thrust curve is extremely simple: the rocket thrusts at a constant $F$~newtons for $t_b$~seconds. For this simplifying case, we do not need to include $t_b$ in our parameters since it is implicitly defined:
\begin{align}
I_t &= \int F dt = F t_b\\
\text{and } I_t &= \int -\dot{m} c dt = m_p c\\
\therefore t_b &= \frac{m_p c}{F}
\end{align}
\end{itemize}

Hence, we will try to determine the optimum value of $F$ given all the other parameters.

\section{State space description, and non-dimensionalisation}

The dynamics of a 1-D rocket can be written as
\begin{align}
\dot{h} &=v \\
\dot{v} &=\frac{1}{m}(F(t)-D(v, h))-g \\
\dot{m} &=\frac{-F(t)}{c}
\end{align}

Directly integrating this is difficult, but we can do so numerically, using Mathematica. From there, the maximum height is readily determined.
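For readers without Mathematica, an equivalent minimal sketch in Python (SciPy) is shown below. The parameter values are illustrative only (roughly the 5-inch, 24~kg solid vehicle listed in the results section) and the solver settings are not tuned.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (roughly a 5 in, 24 kg solid rocket), SI units
g, rho0, beta = 9.81, 1.225, 1.0 / 8500.0
m0, mp, cd, A, c = 24.0, 3.5, 0.5, 0.0127, 2000.0
F = 2000.0                                    # constant thrust [N]
tb = mp * c / F                               # burn time [s]

def rhs(t, s):
    h, v, m = s
    thrust = F if t < tb else 0.0
    drag = 0.5 * rho0 * np.exp(-beta * h) * cd * A * v * abs(v)
    return [v, (thrust - drag) / m - g, -thrust / c]

def at_apogee(t, s):      # stop when the velocity drops back to zero
    return s[1]
at_apogee.terminal = True
at_apogee.direction = -1

sol = solve_ivp(rhs, [0.0, 300.0], [0.0, 0.0, m0],
                events=at_apogee, max_step=0.05)
print("apogee [m]:", sol.y[0, -1])
\end{verbatim}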
The reference quantities used are
\begin{itemize}
\item Mass: $m_0$
\item Acceleration: $g$
\item Speed: $c$
\end{itemize}
Therefore, the following reference dimensions can be found:
\begin{itemize}
\item Force: $m_0 g$
\item Time: $c/g$
\item Length: $c^2/g$
\item Total Impulse: $m_0 c$
\end{itemize}

Using hats to denote the non-dimensionalised quantities, the state space equations can be written as
\begin{align}
\dot{\hat{h}} &=\hat{v} \\
\dot{\hat v} &=\frac{\hat F}{\hat m} - \left( \frac{\rho_0 c^2 c_d A}{2 m_0 g} \right) e^{- (\beta c^2/g) \hat h} \frac{ \hat{v}^2}{\hat m} - 1 \\
\dot{\hat m} &=-\hat F
\end{align}

Furthermore, the switching occurs at $t = t_b$. Interestingly, the non-dimensionalized total impulse is directly related to the propellant mass fraction:
\begin{align}
\hat{I_t} &= \frac{I_t}{m_0 c} = \frac{F t_b}{m_0 c} = \frac{m_p c}{m_0 c} = \frac{m_p}{m_0}\\
\therefore \hat{t_b} &= \frac{\hat{I_t}}{\hat F} = \frac{m_p}{m_0} \frac{1}{\hat F}
\end{align}

Therefore, the important parameters are now immediately obvious, and can be interpreted as follows.
\begin{enumerate}
\item The thrust to initial weight ratio,
\begin{equation}
\hat F = \frac{F}{m_0 g}
\end{equation}
\item A drag parameter defined as
\begin{equation}
x \equiv \frac{\frac{1}{2} \rho_0 c^2 c_d A}{m_0 g}
\end{equation}
Note: this parameter is like the drag to weight ratio (except that it uses $c$ as the velocity and $m_0$ as the mass).
\item Propellant mass fraction,
\begin{equation}
MR \equiv m_p/m_0
\end{equation}
\item Atmosphere parameter,
\begin{equation}
\hat \beta = \beta \frac{c^2}{g}
\end{equation}
which scales the atmosphere's height to the height we are interested in.
\end{enumerate}
Finally, the parameter to be optimised is the final altitude,
\begin{equation}
h_f = \hat h_f \frac{c^2}{g}
\end{equation}

Therefore, our 8-parameter space has been reduced to a four-parameter space, or three if we assume the specific impulse of the rocket is fixed. The typical ranges of these parameters, and rough orders of magnitude for a 10k~ft solid rocket, are
\begin{center}
\begin{tabular}{@{} ccc@{}}
\toprule
Parameter & Typical value & Range\\
\midrule
$\hat F$ & 15 & $>1$\\
$x$ & 60 & $>0$\\
$MR$ & 0.15 & $0.3\sim0.9$\\
$\hat \beta$ & 50 & $25 \sim 75$\\
$\hat h_f$ & 0.0075 \\
\bottomrule
\end{tabular}
\end{center}
where $\hat h_f = 0.0075$ is the value that corresponds to 10,000~ft. Note that while the thrust-to-weight ratio is approximately 15 for IREC rockets, most big rockets have $\hat F$ closer to 1.5! The next section shows that this is actually very close to the optimum.

\section{Optimization Results}

A relatively brute-force form of optimisation is attempted in the interest of time. A Mathematica function is written to solve the dynamics equations, given the four parameters; the apogee height is returned by this function. The numerical schemes used are Mathematica's `automatic' choices, which generally perform well, even for stiff systems. Since the problem is an initial value problem, it is fairly straightforward to solve numerically.

A few simulated trajectories are shown below as samples. It is hard to believe that these are optimally sized vehicles.

%\FloatBarrier
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{sample_flights.eps}
\caption{Sample simulated flights based on data from 2018 reports.
Red indicates thrusting, blue indicates coasting.}
\begin{tabular}{@{} llcccc|ccc@{}}
\toprule
Team & Type & $C_D$ (est) & $c$ (m/s, assumed) & dia (in) & $m_0$ (kg) & $x$ & $m_p/m_0$ & $T/W$ \\
\midrule
McGill & Solid & 0.5 & 2000 & 5 & 24 & 64.6 & 0.146 & 8.8\\
Waterloo & Hybrid & 0.5 & 2000 & 6 & 64.8 & 34.48 & 0.280 & 1.5\\
UCLA & Hybrid & 0.5 & 2000 & 6 & 26.7 & 83.7 & 0.21 & 8.5\\
\bottomrule
\end{tabular}
\label{fig:sample_flights}
\end{figure}
%\FloatBarrier

Now we can see the trade between thrust-to-weight ratio and propellant mass fraction.

\begin{landscape}
%\FloatBarrier
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Tw-MR_contours.eps}
\caption{Apogee contours as a function of thrust-to-weight ratio and propellant mass fraction, for a range of the drag parameter $x$.}
\label{fig:tw_mr_contours}
\end{figure}
%\FloatBarrier
\end{landscape}

The red line indicates the 10k~ft target altitude (assuming $c = 2000$~m/s). This shows that for a range of $x$, the easiest-to-design scenarios are those with $T/W > 3$ and a propellant mass fraction around 15\%. Below a thrust-to-weight ratio of 2, the height is very sensitive to the propellant mass fraction. It is also interesting to see that as the propellant mass fraction increases, it becomes better to reduce the thrust-to-weight ratio towards $T/W \sim 2$.

Ultimately, however, we need to find a design that can achieve the desired altitude while minimising the total mass and thrust level required. If we limit the maximum thrust to 1~kN, and assume $c=2000$~m/s and $c_d = 0.5$, we can figure out possible designs:

%\FloatBarrier
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{results.eps}
\caption{Possible designs reaching the target altitude with a maximum thrust of 1~kN, assuming $c=2000$~m/s and $c_d = 0.5$: required propellant mass fraction over a range of vehicle masses.}
\label{fig:results}
\end{figure}
%\FloatBarrier

This figure shows that the required propellant mass fraction is about 17\%, over a range of vehicle masses. It is clear that reducing the vehicle's mass would be beneficial, as it gives more flexibility in the future and increases the acceleration, and therefore the stability, but it is also important for the propellant mass fraction to be as close to 17\% as possible.

Why is there such a steep slope to the graphs? One way to think about it is as follows. For a very large thrust-to-weight ratio, the rocket burns very rapidly, and the thrust acts as an impulse. From there, the vehicle is basically passive, and the ratio of drag to inertia (the parameter $x$) controls the final altitude. Therefore, in the limit that the thrust-to-weight ratio becomes large, the final altitude is set almost entirely by the propellant mass fraction and the drag parameter $x$, which is why the curves are so sensitive to the propellant mass fraction.

\end{document}
{ "alphanum_fraction": 0.7182332704, "avg_line_length": 36.044, "ext": "tex", "hexsha": "ddc704c33cbe4c6d3995a355d5207f9d0d1e9df4", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-12-06T05:32:10.000Z", "max_forks_repo_forks_event_min_datetime": "2020-12-06T05:32:10.000Z", "max_forks_repo_head_hexsha": "a841745e4097c55fa4996dddc8a7dc45b2059023", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "icl-rocketry/optimalAscent", "max_forks_repo_path": "bang-off-control/writeup/writeup.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a841745e4097c55fa4996dddc8a7dc45b2059023", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "icl-rocketry/optimalAscent", "max_issues_repo_path": "bang-off-control/writeup/writeup.tex", "max_line_length": 432, "max_stars_count": 2, "max_stars_repo_head_hexsha": "a841745e4097c55fa4996dddc8a7dc45b2059023", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "icl-rocketry/optimalAscent", "max_stars_repo_path": "bang-off-control/writeup/writeup.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-06T05:32:08.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-19T01:06:38.000Z", "num_tokens": 2680, "size": 9011 }
\documentclass{stdlocal}
\begin{document}
\section{Preliminaries} % (fold)
\label{sub:preliminaries}
To systematically approach the implementation of PRNGs, basic knowledge in the topics of stochastics and statistics is needed.
Together, these topics will give a deeper understanding of randomness in deterministic computer systems, a formal description of pseudorandom sequences and generators, and the mathematical foundation of Monte Carlo algorithms.
Based on them, we will be able to scientifically analyze PRNGs with respect to their randomness properties.
Afterwards, we will give a brief overview of template mechanisms in the C++ programming language and the fundamentals of modern computer architecture.

\subsection{Probability Theory} % (fold)
\label{sub:stochastics}
The observation of random experiments resulted in the construction of probability theory.
But probability theory itself does not use a further formalized concept of randomness \autocite{schmidt2009}.
% Randomness itself plays a minor role in probability theory and is used in form of realizations of random variables.
In fact, it allows us to observe randomness without defining it \autocite{volchan2002}.
% Actually, typical formalizations rely on probability theory.
% This connection makes the development of RNGs possible.
% Hence, in the following we will give only the formal definition of relevant structures without further discussions and will postpone an examination of randomness to the next section.
Hence, we will postpone an examination of truly random sequences to section \ref{sec:pseudorandom_number_generators}.

According to \textcite{schmidt2009}, Kolmogorov embedded probability theory in the theory of measure and integration.
Although it heavily relies on these theoretical structures, probability theory is one of the most important applications of measure and integration theory.
Therefore we will assume basic knowledge in this topic and refer to \textcite{schmidt2009} and \textcite{elstrodt2011} for a more detailed introduction to measure spaces, measurable functions, and integrals.
Propositions and theorems will be given without proof.

The underlying structure of probability theory, which connects it to measure theory, is the probability space.
It is a measure space with a finite and normalized measure.
This gives access to all the usual results of measure theory and furthermore unifies discrete and continuous distributions.
\autocite[\ppno~193-195]{schmidt2009}

\begin{definition}[Probability Space]
  A probability space is a measure space $\roundBrackets{\Omega, \mathscr{F}, P}$ such that $P(\Omega)=1$.
  In this case, we call $P$ the probability measure, $\mathscr{F}$ the set of all events, and $\Omega$ the set of all possible outcomes of a random experiment.
\end{definition}
% For our purposes, the set of possible outcomes $\Omega$ will be a finite or countable infinite set.
% Hence, we can choose $\mathscr{F}$ to be the power set $\mathscr{P}(\Omega)$.

Due to the complex definition
% \footnote{Notation and symbols not directly defined are explained in the symbol table.}
of a measure space, it is convenient not to have to explicitly specify the probability space when analyzing random experiments.
Instead, we use random variables, which are essentially measurable functions on a probability space \autocite[\pno~194]{schmidt2009}.
For complicated cases, these will serve as observables for specific properties and will make the analysis much more intuitive.
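
For instance (a standard textbook example, not one taken from the cited sources), rolling a fair die once can be modeled by the probability space
\[
  \Omega = \{1,2,3,4,5,6\}, \qquad \mathscr{F} = \mathscr{P}(\Omega), \qquad P(A) = \frac{|A|}{6} \quad \text{for } A \in \mathscr{F}.
\]
If we only care about the parity of the outcome, we can observe it through the map $X(ω) = ω \bmod 2$, which takes the values $0$ and $1$ with probability $1/2$ each.
Such observables are formalized as random variables in the following definition.
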
\begin{definition}[Random Variable] Let $(\Omega,\mathscr{F},P)$ be a probability space and $(\Sigma,\mathscr{A})$ a measurable space. A measurable function $\function{X}{\Omega}{\Sigma}$ is called a random variable. In this case, we denote with $P_X\define P\composition\inverse{X}$ the distribution and with $(\Sigma,\mathscr{A},P_X)$ the probability space of $X$. % We call $X(ω)$ for $ω\in\Omega$ a realization of $X$. % Let $\function{Y}{\Omega}{\Sigma}$ be another random variable. % $X$ and $Y$ are identically distributed if $P_X = P_Y$. Two random variables are identically distributed if they have the same distribution. Additionally, we say that $X$ is a real-valued random variable if $\Sigma = \setReal$ and $\mathscr{A} = \mathscr{B}(\setReal)$. \end{definition} From now on, if a random variable is defined then, if not stated otherwise, it is assumed there exists a proper probability space $(\Omega,\mathscr{F},P)$ and measurable space $(\Sigma, \mathscr{A})$. Another important concept of stochastics is known as independence. In \textcite{schmidt2009} it is defined for a family of events, a family of sets of events, and a family of random variables. If we think of random variables as observables then their independence means that their outcomes do not influence each other. % It makes only sense in the context of probability theory For our purposes, the general definition of all three forms of independence is distracting. In a computer, it makes no sense to talk about uncountably many elements. Therefore the following definition of independence takes only a countable sequence of random variables into account. Furthermore, to make it more understandable, this definition uses a theorem from \textcite[\pno~238]{schmidt2009} which characterizes the independence of random variables. % Here, we will use a simpler equivalent definition because, for a computer, all we need are finite sequences of random variables. \begin{definition}[Independence] % Let $(\Omega,\mathscr{F},P)$ be a probability space. % Two events $A, B \in \mathscr{F}$ are independent if $P(A\cap B)=P(A)P(B)$. % Let $(\Sigma_i,\mathscr{A}_i)$ for $i\in\set{1,2}{}$ be measurable spaces and $X_i$ random variables with $X\define X_1\times X_2$. % They are called independent if $P_X = P_{X_1} \otimes P_{X_2}$. Let $I\subset\setNatural$ and $X_i$ be a random variable for all $i\in I$. Then these random variables are independent if the following equation holds for all finite subsets $J\subset I$ whereby we denote the respective random vector with $X_J \define \roundBrackets{X_i}_{i\in J}$. \[ P_{X_J} = \bigotimes_{i\in J} P_{X_i} \] \end{definition} Typical observations of random sequences include the estimation of the expectation value and the variance. Both of these values are needed for analyzing PRNGs and the development of Monte Carlo simulations \autocite{landau2014}. Due to their deep connection to the integral, both of these moments are defined for real-valued random variables. We give the usual definitions based on \textcite[\ppno~274-276]{schmidt2009} in a simplified form. \begin{definition}[Expectation Value and Variance] Let $X$ be a real-valued random variable such that $\integral{\Omega}{}{\absolute{X}}{P}<\infty$. Then the expectation value $\expect X$ and variance $\var X$ of $X$ is defined in the following way. 
\[ \expect X \define \integral{\Omega}{}{X(ω)}{P(ω)} \separate \var X \define \expect\roundBrackets{X - \expect X}^2 \] \end{definition} To not rely on the underlying probability space directly, we want to be able to compute the expectation value through the respective distribution of the random variable. The theory of measure and integration gives the following proposition, also known as rule of substitution \autocite[\pno~276]{schmidt2009}. \begin{proposition}[Substitution] \label{proposition:substitution} Let $X$ be real-valued random variable and $\function{f}{\setReal}{\setReal}$ a measurable function such that $\integral{\Omega}{}{\absolute{f}}{P_X} < \infty$. Then the following equation holds. \[ \expect(f\composition X) = \integral{\setReal}{}{f(x)}{P_X(x)} \] In particular, if $\expect \absolute{X} < \infty$ then the above equation can be reformulated as follows. \[ \expect X = \integral{\setReal}{}{x}{P_X(x)} \] \end{proposition} The distribution of real-valued random variables is univariate and as a result can be described by so-called cumulative distribution functions (CDFs). The CDF intuitively characterizes the distribution and simplifies the analysis. Further, it can be proven that every CDF belongs to a univariate distribution. According to \textcite[\pno~246]{schmidt2009}, this is the theorem of correspondence. Sometimes it is even possible to define a probability density; a function that is the Lebesgue density of the respective distribution \autocite[\pno~255]{schmidt2009}. \begin{definition}[Probability Density and Cumulative Distribution Function] Let $X$ be a real-valued random variable. Then the respective cumulative distribution function is defined as follows. \[ \function{F_X}{\setReal}{[0,1]} \separate F_X(x) \define P_X((-\infty,x]) \] We call the function $\function{p}{\setReal}{[0,\infty)}$ a probability density of $X$ if for all $A\in\mathscr{B}(\setReal)$ \[ P_X(A) = \integral{A}{}{p(x)}{λ(x)}\ . \] \end{definition} % \begin{theorem}[Correspondence] % Let $X$ be a real-valued random variable. % There exists a unique monotone non-decreasing, right-continuous function $\function{F_X}{\setReal}{[0,1]}$ with % \[ % \lim_{x\rightarrow -\infty} F_X(x) = 0 % \separate % \lim_{x\rightarrow +\infty} F_X(x) = 1 % \] % such that % \[ % P_X((a,b]) = F(b) - F(a) % \] % \end{theorem} As well as CDFs, probability densities can greatly simplify computations which are based on absolute continuous random variables. The following proposition, obtained from \textcite{schmidt2009}, shows the simplified computation of an expectation value through a Lebesgue integral. \begin{proposition}[Chaining] \label{proposition:chaining} Let $X$ be a real-valued random variable with $p$ as its probability density. If $\function{f}{\setReal}{\setReal}$ is a measurable function such that $\expect \absolute{f\circ X} < \infty$ then \[ \expect \roundBrackets{f\composition X} = \integral{\setReal}{}{f(x)p(x)}{λ(x)}\ . \] \end{proposition} A last important theorem to name is the strong law of large numbers (SLLN). According to \textcite[\pno~13]{graham2013}, the principles of Monte Carlo methods are based on this theorem. It uses a sequence of identically and independently distributed (iid) random variables. Please note, there exist many more variations of this theorem. We will use a simplified version from \textcite{graham2013}. 
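
Before stating the theorem, it is worth making this connection concrete: the sample mean of many independent realizations approaches the expectation value, which is precisely what a Monte Carlo estimator exploits.
The following minimal C++ sketch is only an illustration of this principle using the STL's random facilities; the engine, seed, distribution, and sample size are arbitrary choices and are not part of the library developed later.

\begin{verbatim}
// Illustration of the SLLN as used by Monte Carlo methods:
// the sample mean of iid draws tends to the expectation value.
#include <iostream>
#include <random>

int main() {
  std::mt19937 rng{42};                                   // arbitrary seed
  std::uniform_real_distribution<double> dist{0.0, 1.0};  // E[X] = 0.5
  const int n = 1000000;                                  // arbitrary sample size
  double sum = 0.0;
  for (int i = 0; i < n; ++i) sum += dist(rng);
  std::cout << "sample mean = " << sum / n << " (expectation is 0.5)\n";
}
\end{verbatim}

The theorem below states this convergence precisely.
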
\begin{theorem}[Strong Law of Large Numbers]
\label{theorem:slln}
  Let $(X_n)_{n\in\setNatural}$ be a sequence of iid real-valued random variables with finite expectation value $μ$.
  Then the following equation holds $P$-almost everywhere.
  \[
    \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n X_i = μ
  \]
\end{theorem}
% subsection stochastics (end)

% \subsection{Number Theory and Finite Fields} % (fold)
% \label{ssub:finite_fields}
%
% subsection finite_fields (end)

\subsection{The C++ Programming Language} % (fold)
\label{sub:the_c_programming_language}
As already mentioned in the introductory section \ref{sec:introduction}, the C++ programming language is a suitable candidate for developing high-performance low-level structures while keeping the high degree of abstraction that makes the use of libraries easier and more consistent.
C++ supports multiple programming styles, such as procedural programming, data abstraction, object-oriented programming, and generic programming, which is also known as template metaprogramming \autocite{stroustrup2014,vandevoorde2018}.
The type mechanism makes C++ a strongly typed language.
To exploit this, we will always try to map problems to an abstract data structure.
Furthermore, the built-in facilities of C++, such as templates, function overloading, type deduction, and lambda expressions, simplify the type handling and the generalization of algorithms.
Additionally, C++ provides a standard library, called the standard template library (STL), consisting of header files providing default templates to use for a wide variety of problems.
In this thesis, we will rely on the random utilities the STL provides.
We will also assume basic knowledge in C++ and refer to \textcite{stroustrup2014} and \textcite{meyers2014} for a detailed introduction to the general usage of the language.
A complete online reference of the language is given by \textcite{cppreference}.

The C++ programming language keeps evolving by defining new language standards every three years, which are published by the ISO C++ standardization committee.
Newer standards typically introduce modern language features, fix old behavior, and add new algorithms and templates to the STL.
Hence, modern C++ separates into the standards C++11, C++14, and C++17, published in the years 2011, 2014, and 2017, respectively.
Currently, we are waiting for the C++20 standard specification, which will provide even more advanced features concerning template metaprogramming and concurrency.
At the time of writing this thesis, C++17 is the most modern specification and as a consequence it will be used throughout the code.
\autocite{stroustrup2014,meyers2014,vandevoorde2018}

To design the API of a library supplying vectorized RNGs and some advanced utilities, we will heavily rely on different template metaprogramming techniques.
For a deeper treatment of the topic, we refer to \textcite{vandevoorde2018} and again to \textcite{meyers2014}.
Here, we will only be able to list the most important terms, techniques, and rules that will be used throughout the code.

\begin{description}
  \item[Template Argument Deduction]
    \textquote[\cite{cppreference}]{%
      In order to instantiate a function template, every template argument must be known, but not every template argument has to be specified.
      If possible, the compiler will deduce the missing template arguments from the function arguments.
    }
  \item[Overloading Function Templates]
    \textquote[\cite{vandevoorde2018}]{%
      Like ordinary functions, function templates can be overloaded.
      That is, you can have different function definitions with the same function name so that when that name is used in a function call, a C++ compiler must decide which one of the various candidates to call.
    }
  \item[Variadic Templates]
    \textquote[\cite{vandevoorde2018}]{%
      Since C++11, templates can have parameters that accept a variable number of template arguments.
      This feature allows the use of templates in places where you have to pass an arbitrary number of arguments of arbitrary types.
    }
  \item[Perfect Forwarding]
    C++11 introduced so-called move semantics to optimize specific copy operations by moving internal resources instead of creating a deep copy of them.
    Perfect forwarding is a template-based pattern that passes arguments on while preserving their value category (lvalue or rvalue) and const qualification.
    % \autocite{vandevoorde2018}
  \item[SFINAE]
    Substituting template arguments to resolve function template overloading could lead to errors by creating code that makes no sense.
    The principle \enquote{substitution failure is not an error} (SFINAE) states that in these circumstances the overload candidates with such substitution problems will simply be ignored.
    % \autocite{vandevoorde2018}
  \item[{\footnotesize \texttt{\textbf{decltype}}}]
    This is a specifier that introduces an unevaluated context in which it deduces the type of the given expression without actually evaluating the expression.
    % \autocite{stroustrup2014}
  \item[\texttt{\textbf{\footnotesize std::enable\_if\_t}}]
    This is a helper template from the STL to ignore function templates by using SFINAE under certain compile-time conditions.
    If the boolean template argument evaluates to true, then \code{std::enable\_if\_t} evaluates to an actual type.
    Otherwise, it does not name a type, triggering the SFINAE principle for overloads.
    % \autocite{vandevoorde2018}
  \item[\texttt{\textbf{\footnotesize std::declval}}]
    This STL function template is only declared, not defined, and therefore cannot be called in evaluated contexts.
    It can be used as a placeholder for an object reference of a specific type.
    Typically, this routine will be inserted instead of a default constructor in the unevaluated context argument of \code{decltype}.
    % \autocite{vandevoorde2018}
  \item[Type Traits]
    Type traits are general compile-time functions defined over types to modify or evaluate them.
    In the STL, a typical example is given by \code{std::is\_same\_v}, which evaluates whether two types are the same.
\end{description}
% subsection the_c_programming_language (end)
% section preliminaries (end)
\end{document}
{ "alphanum_fraction": 0.7349473443, "avg_line_length": 74.0338983051, "ext": "tex", "hexsha": "2715dc545d94ec173733fe1c51b7ded29a323134", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c78931c1a5c0a85a1ad36d7d8979567b0853be52", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lyrahgames/random-number-generators", "max_forks_repo_path": "docs/thesis/sections/preliminaries.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c78931c1a5c0a85a1ad36d7d8979567b0853be52", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lyrahgames/random-number-generators", "max_issues_repo_path": "docs/thesis/sections/preliminaries.tex", "max_line_length": 283, "max_stars_count": 4, "max_stars_repo_head_hexsha": "c78931c1a5c0a85a1ad36d7d8979567b0853be52", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lyrahgames/random-number-generators", "max_stars_repo_path": "docs/thesis/sections/preliminaries.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-20T00:07:23.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-28T15:12:07.000Z", "num_tokens": 4148, "size": 17472 }
%% FILE: birkhoff_hsp.tex %% AUTHOR: Clifford Bergman and William DeMeo %% DATE: 25 Sep 2018 %% COPYRIGHT: (C) 2018 Clifford Bergman and William DeMeo \begin{filecontents*}{inputs/refs.bib} @book {MR2839398, AUTHOR = {Bergman, Clifford}, TITLE = {Universal algebra}, SERIES = {Pure and Applied Mathematics (Boca Raton)}, VOLUME = {301}, NOTE = {Fundamentals and selected topics}, PUBLISHER = {CRC Press, Boca Raton, FL}, YEAR = {2012}, PAGES = {xii+308}, ISBN = {978-1-4398-5129-6}, MRCLASS = {08-02 (06-02 08A40 08B05 08B10 08B26)}, MRNUMBER = {2839398 (2012k:08001)}, MRREVIEWER = {Konrad P. Pi{\'o}ro}, } \end{filecontents*} %:biblio %\documentclass[12pt]{amsart} \documentclass[12pt,reqno]{amsart} \usepackage{amsmath,amssymb,amsfonts,amscd} \usepackage{xspace} \usepackage[mathcal]{euscript} % Euler for math and numbers %%%%%%% wjd: added these packages vvvvvvvvvvvvvvvvvvvvvvvvv % PAGE GEOMETRY % These settings are for letter format \def\OPTpagesize{8.5in,11in} % Page size \def\OPTtopmargin{1in} % Margin at the top of the page \def\OPTbottommargin{1in} % Margin at the bottom of the page %% \def\OPTinnermargin{0.5in} % Margin on the inner side of the page \def\OPTinnermargin{1.5in} % Margin on the inner side of the page \def\OPTbindingoffset{0in} % Extra offset on the inner side %% \def\OPToutermargin{0.75in} % Margin on the outer side of the page \def\OPToutermargin{1.5in} % Margin on the outer side of the page \usepackage[papersize={\OPTpagesize}, twoside, includehead, top=\OPTtopmargin, bottom=\OPTbottommargin, inner=\OPTinnermargin, outer=\OPToutermargin, bindingoffset=\OPTbindingoffset]{geometry} \newcommand{\alg}[1]{\ensuremath{\mathbf{#1}}} \newcommand{\class}[1]{\ensuremath{\mathcal{#1}}} \newcommand{\var}[1]{\ensuremath{\mathcal{#1}}} \newcommand{\clop}[1]{\ensuremath{\mathbf{#1}}} \newcommand{\close}[1]{\ensuremath{\overline{#1}}} \newcommand{\Id}[1]{\ensuremath{\operatorname{Id}(#1)}} \newcommand{\Mod}[1]{\ensuremath{\operatorname{Mod}(#1)}} \newcommand{\defin}[1]{\textbf{#1}} \newcommand{\Hom}[1]{\ensuremath{\operatorname{Hom}(#1)}} \newcommand{\Epi}[1]{\ensuremath{\operatorname{Epi}(#1)}} \newcommand{\Con}[1]{\ensuremath{\operatorname{Con}(#1)}} \newcommand{\Clo}{\ensuremath{\operatorname{Clo}}} \newcommand{\Proj}{\ensuremath{\operatorname{Proj}}} \newcommand{\Sg}[2]{\ensuremath{\operatorname{Sg}^{#1}(#2)}} \newcommand{\compose}{\ensuremath{\circ}} % \newcommand{\ker}{\ensuremath{\mathrm{ker}}} % \newcommand{\implies}{\ensuremath{\Longrightarrow}} \newcommand{\va}{\ensuremath{\mathbf{a}}} \newcommand{\vb}{\ensuremath{\mathbf{b}}} \newcommand{\swap}[1]{\ensuremath{\mathtt{swap}(#1)}} \newcommand{\curry}[1]{\ensuremath{\mathtt{curry}(#1)}} \newcommand{\uncurry}[1]{\ensuremath{\mathtt{uncurry}(#1)}} %% function restriction %% example: \restr{f}{X} (restriction of f to X) \newcommand\restr[2]{{% we make the whole thing an ordinary symbol \left.\kern-\nulldelimiterspace % automatically resize the bar with \right #1 % the function \vphantom{\big|} % pretend it's a little taller at normal size \right|_{#2} % this is the delimiter }} \newcommand{\mysetminus}{\ensuremath{-}} %% uncomment the next line if we want to revert to the "set" minus notation %% \renewcommand{\mysetminus}{\ensuremath{\setminus}} \usepackage[yyyymmdd,hhmmss]{datetime} \usepackage{background} \backgroundsetup{ position=current page.east, angle=-90, nodeanchor=east, vshift=-1cm, hshift=8cm, opacity=1, scale=1, contents={\textcolor{gray!80}{WORK IN PROGRESS. DO NOT DISTRIBUTE. 
(compiled on \today\ at \currenttime)}} } \usepackage{mathtools} %% \usepackage{exers} % \usepackage{inputs/wjdlatexmacs} \usepackage[colorlinks=true,urlcolor=blue,linkcolor=blue,citecolor=blue]{hyperref} \usepackage{algorithm2e} \usepackage{stmaryrd} %\usepackage{url,enumerate,tikz,scalefnt} \usetikzlibrary{math} %needed tikz library \usepackage{comment} \usepackage{bussproofs} \usepackage{unixode} \usepackage{color} \renewcommand{\th}[2]{#1\mathrel{\theta}#2} \newcommand{\infixrel}[3]{#2\mathrel{#1}#3} \newcommand\llb{\ensuremath{\llbracket}} \newcommand\rrb{\ensuremath{\rrbracket}} \newcommand{\defn}[1]{\textbf{#1}} \newcommand{\N}{\ensuremath{\mathbb{N}}} %%//////////////////////////////////////////////////////////////////////////////// %% Theorem styles \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{prop}[theorem]{Prop.} \theoremstyle{definition} \newtheorem{conjecture}{Conjecture} \newtheorem{claim}[theorem]{Claim} \newtheorem{subclaim}{Subclaim} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{notation}[theorem]{Notation} \newtheorem{Fact}[theorem]{Fact} \newtheorem*{fact}{Fact} \newtheorem{example}[theorem]{Example} \newtheorem{examples}[theorem]{Examples} \newtheorem{exercise}{Exercise} \newtheorem*{lem}{Lemma} \newtheorem*{cor}{Corollary} \newtheorem*{remark}{Remark} \newtheorem*{remarks}{Remarks} \newtheorem*{obs}{Observation} \title{Birkhoff's HSP Theorem} \author[C.~Bergman]{Clifford Bergman} %% \email{}\urladdr{} %% \address{University of Colorado\\Mathematics Dept\\Boulder 80309\\USA} \author[W.~DeMeo]{William DeMeo} \email{[email protected]} %% \urladdr{http://williamdemeo.github.io} %% \address{University of Colorado\\Mathematics Dept\\Boulder 80309\\USA} \date{\today} \begin{document} \maketitle \section{Preliminaries} \subsection{Notation} The symbols $\N$, $\omega$, and {\tt nat} are used interchangeably; they all denote the set of natural numbers. A \defn{signature} $S = (F, \rho)$ consists of a set $F$ of operation symbols and a function $\rho \colon F \to \N$. We call $\rho f$ the \defn{arity} of the symbol $f$. If $A$ is a set and $f$ is a $\rho f$-ary operation on $A$, then we sometimes write $f \colon A^{\rho f} \to A$. Since the natural number $\rho f$ denotes the set $\{0, 1, \dots, \rho f -1\}$, a function $g \colon \rho f \to A$ is simply a $\rho f$-tuple of elements from $A$; that is for each $i\in \rho f$, $g i \in A$. By identifying the $\rho f$-th power, $A^{\rho f}$, of the set $A$ with the type $\rho f \to A$ of functions from $\{0, 1, \dots, \rho f -1\}$ to $A$, we thus identify the function type $A^{\rho f} \to A$ with the type $(\rho f \to A) \to A$. To say that $f$ inhabits the function type $A^{\rho f} \to A$ and to write $f \colon A^{\rho f} \to A$ is then equivalent to saying that $f$ inhabits $(\rho f \to A) \to A$ and writing $f \colon (\rho f \to A) \to A$. Fix $m\in \N$. If $a = (a_0, a_1, \dots, a_{m-1})$ is an $m$-tuple of elements from $A$, then (keeping in mind that $m$ is the set $\{0, 1, \dots, m-1\}$) it is useful to understand that this tuple is a function $a : m \to A$, where $a(i) = a_i$, for each $i<m$. If $h \colon A \to A$, then $h\circ a : m \to A$ is the tuple $(h(a_0), h(a_1), \dots, h(a_{m-1}))\in A^m$, whose $i$-th coordinate is $(h\circ a)(i) = h(a(i)) = h(a_i) \in A$. 
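
For a small concrete illustration (not an example from the cited text): take $A = \N$, $m = 3$, and let $a = (5, 7, 9)$, that is, the function $a \colon 3 \to \N$ with $a\, 0 = 5$, $a\, 1 = 7$, $a\, 2 = 9$.
If $h \colon \N \to \N$ is given by $h(x) = x + 1$, then $h \circ a \colon 3 \to \N$ has values $6$, $8$, $10$; viewed as a tuple, $h \circ a = (6, 8, 10) \in \N^3$.
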
On the other hand, if $g \colon A^m \to A$---equivalently, $g \colon (m \to A) \to A$---then $g a$ is the element $g(a_0, a_1, \dots, a_{m-1}) \in A$. If $f \colon (\rho f \to B) \to B$ is a $\rho f$-ary operation on $B$, if $a \colon \rho f \to A$ is a $\rho f$-tuple on $A$, and if $h \colon A \to B$, then $h \circ a \colon \rho f \to B$, so $f (h \circ a) \colon B$. \subsubsection{Generalized composition} Suppose $f \colon (\rho f \to A) \to A$, and suppose $g_i \colon A^m \to A$ for each $i <\rho f$. Let $g \colon \rho f \to (A^m \to A)$ denote the function whose value at $i < \rho f$ is $g(i) = g_i$. We want to define a \emph{generalized composition} of $f$ with $g_0, g_1, \dots, g_{\rho f -1}$. We could obviously do this component-wise, but that makes computing with such compositions unweildy. Observe, \begin{prooftree} \AxiomC{$f \colon (\rho f \to A) \to A$} \AxiomC{$a \colon \rho f \to A$} \BinaryInfC{$f a \colon A$} \end{prooftree} \begin{prooftree} \AxiomC{$g \colon \rho f \to ((m \to A) \to A)$} \AxiomC{$i \colon \rho f$} \BinaryInfC{$g i \colon (m \to A) \to A$} \AxiomC{$b \colon m \to A$} \BinaryInfC{$g i b\colon A$} \end{prooftree} Apparently composition of $f$ with $g$ is impossible without dropping down to coordinates since the types don't line up properly. However, this is easily fixed with an obvious isomorphism. Denote by $\uncurry{g} \colon (\rho f \times (m \to A)) \to A$ the uncurried version of $g$, so that $gib = \uncurry{g}(i,b)$. Swapping the first and second coordinates of $\uncurry{g}$ yields $\swap{\uncurry{g}} \colon ((m\to A) \times \rho f) \to A$; that is $\swap{\uncurry{g}}(b,i) = \uncurry{g} (i,b)$ for all $i \colon \rho f$ and $b \colon m \to A$. Now, if we let $\tilde{g} := \curry{\swap{\uncurry{g}}}$, then the types of $f$ and $\tilde{g}$ are properly aligned for composition. Indeed, we have \begin{prooftree} \AxiomC{$f \colon (\rho f \to A) \to A$} \AxiomC{$\tilde{g} \colon (m \to A) \to (\rho f \to A)$} \AxiomC{$b \colon m \to A$} \BinaryInfC{$\tilde{g} b \colon \rho f \to A$} \BinaryInfC{$f \tilde{g} b \colon A$} \end{prooftree} and for each $b \colon m \to A$, the function $\tilde{g}b \colon \rho f \to A$ is the tuple whose $i$-th coordinate is $\tilde{g}b(i) = g_i(b_0, \dots, b_{m-1})$. Thus, \[ f\tilde{g} b = f(g_0 (b_0, \dots, b_{m-1}), \dots, g_{\rho f -1}(b_0, \dots, b_{m-1})). \] This is called the \defn{generalized composition} of $f$ with $g = (g_0, \dots, g_{\rho f -1})$. \subsection{Elementary facts} \begin{lemma}[\protect{\cite[Ex.~1.16.6]{MR2839398}}] \label{ex:1.16.6} Let $f$ and $g$ be homomorphisms from $\alg{A}$ to $\alg{B}$. Let $E(f,g) = \{ a \in A : f(a) = g(a) \}$ (the \defin{equalizer} of $f$ and $g$). \begin{enumerate} \item $E(f,g)$ is a subuniverse of $\alg{A}$. \item If $X \subseteq A$ and $X$ generates $\alg{A}$ and $\restr{f}{X} = \restr{g}{X}$, then $f = g$. \item If $\alg{A}, \alg{B}$ are finite and $X$ generates $\alg{A}$, then $|\!\Hom{\alg{A},\alg{B}}| \leq |B|^{|X|}$. \end{enumerate} \end{lemma} \begin{proof} Let $\rho$ be the similarity type of $\alg{A}$ and $\alg{B}$, and $p$ a (say, $n$-ary) operation symbol in $\rho$. Then, for every tuple $(a_1, \dots, a_n) \in E(f,g)^n$, \begin{align*} f(p^{\alg{A}}(a_1, \dots, a_n)) &= p^{\alg{B}}(f(a_1), \dots, f(a_n))\\ &= p^{\alg{B}}(g(a_1), \dots, g(a_n)) = g(p^{\alg{A}}(a_1, \dots, a_n)). \end{align*} Therefore, $E(f,g)$ is closed under $p$. Since $p$ was arbitrary, $E(f,g)$ is closed under all operations in $\rho$ and is thus a subuniverse of $\alg{A}$. 
Suppose the subset $X \subseteq A$ generates $\alg{A}$ and suppose $\restr{f}{X} = \restr{g}{X}$.
Fix an arbitrary element $a\in A$. We show $f(a) = g(a)$.
Since $X$ generates $\alg{A}$, there exists a (say, $n$-ary) term $t$ and a tuple $(x_1, \dots, x_n) \in X^n$ such that $a = t^{\alg{A}}(x_1, \dots, x_n)$. Therefore,
\begin{align*}
f(a) = f(t^{\alg{A}}(x_1, \dots, x_n)) &= t^{\alg{B}}(f(x_1), \dots, f(x_n))\\
&= t^{\alg{B}}(g(x_1), \dots, g(x_n)) = g(t^{\alg{A}}(x_1, \dots, x_n)) = g(a).
\end{align*}
In other words, a homomorphism is uniquely determined by its restriction to a generating set.

There are exactly $|B|^{|X|}$ functions from $X$ to $B$, so, assuming $X$ generates $\alg{A}$, we have $|\!\Hom{\alg{A},\alg{B}}| \leq |B|^{|X|}$.
\end{proof}

\begin{lemma}[\protect{\cite[Ex.~1.26.8]{MR2839398}}]
\label{ex:1.26.8}
Suppose $f \in \Hom{\alg{A},\alg{B}}$ and $g \in \Hom{\alg{A},\alg{C}}$, where $f$ is surjective and $\ker f \subseteq \ker g$. Then there exists $h \in \Hom{\alg{B},\alg{C}}$ such that $g = h \compose f$.
% Let $f \colon \alg{A} \to \alg{B}$ and $f \colon \alg{A} \to \alg{C}$ be homomorphisms, with $g$ surjective. Prove that if $\ker g \subseteq \ker f$, then there is a homomorphism
% $h \colon \alg{C} \to \alg{B}$ such that $f = h \compose g$.
\end{lemma}
\begin{proof}
Define $h\colon B \to C$ as follows: for each $b\in B$, choose (by the Axiom of Choice!) $a_0\in f^{-1}\{b\}$ and let $h(b) = g(a_0)$.
(Since $f$ is surjective, such an $a_0$ exists for each $b\in B$.)

Fix $a \in A$. We show $g(a) = h f(a)$.
Let $a_0$ be the element of $f^{-1}\{f(a)\}$ that we chose when defining $h$ at $b = f(a)$. That is, $h(b) = g(a_0)$.
Then, $f(a_0) = b = f(a)$, so $(a_0, a) \in \ker f\subseteq \ker g$, so $g(a) = g(a_0) = h(b) = h f(a)$, as desired.

To see that $h$ is a homomorphism, let $p$ be a (say, $n$-ary) operation symbol.
Let $(b_1, \dots, b_n) \in B^n$, and let $(a_1, \dots, a_n)$ be the respective representatives of the $f$-kernel classes $f^{-1}\{b_i\}$ that we chose when defining $h$. Then,
\begin{align*}
p^{\alg{C}}(h(b_1), \dots, h(b_n)) &= p^{\alg{C}}(h f(a_1), \dots, h f(a_n))\\
&= p^{\alg{C}}(g(a_1), \dots, g(a_n))\\
&= g p^{\alg{A}}(a_1, \dots, a_n)\\
&= h f p^{\alg{A}}(a_1, \dots, a_n)\\
&= h p^{\alg{B}}(f(a_1), \dots, f(a_n))\\
&= h p^{\alg{B}}(b_1, \dots, b_n).
\end{align*}
\end{proof}

\section{Subalgebra generation}

\section{Clones}

\begin{theorem}[\protect{\cite[Thm.~4.3.]{MR2839398}}]
Let $A$ be a set and $S = (F, \rho)$ a signature and suppose each $f\in F$ is a $(\rho f)$-ary operation on $A$. Define
\begin{align*}
F_0 &= \Proj(A);\\
F_{n+1} &= F_n \cup \{ f g \mid f \in F, g \colon \rho f \to (F_n \cap (\rho g \to A)) \}, \text{ for } n < \omega.
\end{align*}
Then $\Clo^A(F) = \bigcup_n F_n$.
\end{theorem}

\section{Terms and Free Algebras}

\begin{theorem}[\protect{\cite[Thm.~4.21]{MR2839398}}]
\label{thm:4.21}
Let $\rho$ be a similarity type.
\begin{enumerate}
\item $\alg{T}_\rho(X)$ is generated by $X$.
\item For every algebra $\alg{A}$ of type $\rho$ and every function $h\colon X \to A$ there is a unique homomorphism $g\colon \alg{T}_\rho(X) \to \alg{A}$ such that $\restr{g}{X} = h$.
\end{enumerate}
\end{theorem}
\begin{proof}
The definition of $\alg{T}_\rho(X)$ exactly parallels the construction in Theorem 1.14. That accounts for (1).

For (2), define $g(t)$ by induction on $|t|$.
Suppose $|t| = 0$. Then $t \in X \cup \class{F}_0$. If $t \in X$ then define $g(t) = h(t)$.
For $t \in \class{F}_0$, $g(t) = t^{\alg{A}}$.
Note that since $\alg{A}$ is an algebra of type $\rho$ and $t$ is a nullary operation symbol, $t^{\alg{A}}$ is defined. For the inductive step, let $|t| = n + 1$. Then $t = f(s_1, \dots, s_k)$ for some $f \in \class{F}_k$ and $s_1, \dots, s_k$ each of height at most $n$. We define $g(t) = f^{\alg{A}}(g(s_1), \dots, g(s_k))$. By its very definition, $g$ is a homomorphism. Finally, the uniqueness of $g$ follows from Lemma~\ref{ex:1.16.6}. \end{proof} \begin{theorem}[\protect{\cite[Thm.~4.32]{MR2839398}}] \label{thm:4.32} Let $\alg{A}$ and $\alg{B}$ be algebras of type $\rho$. \begin{enumerate} \item For every $n$-ary term $t$ and homomorphism $g\colon \alg{A} \to \alg{B}$, $g(t^{\alg{A}}(a_1,\dots, a_n)) = t^{\alg{B}}(g(a_1),\dots, g(a_n))$. \item For every term $t \in T_\rho(X_\omega)$ and every $\theta \in \Con{\alg{A}}$, $\va \equiv_\theta \vb \implies t^{\alg{A}}(\va) \equiv_\theta t^{\alg{A}}(\vb)$. \item For every subset $Y$ of $A$, \[\Sg{\alg{A}}{Y} = \{ t^{\alg{A}}(a_1,\dots, a_n) : t \in T(X_n), a_i \in Y, i \leq n < \omega\}.\] \end{enumerate} \end{theorem} \begin{proof} The first statement is an easy induction on $|t|$. The second statement follows from the first by taking $\alg{B} = \alg{A}/\theta$ and $g$ the canonical homomorphism. For the third statement, again by induction on the height of $t$, every subalgebra must be closed under the action of $t^{\alg{A}}$. Thus the right-hand side is contained in the left. On the other hand, the right-hand side is clearly a subalgebra containing the elements of $Y$ (take $t = x_1$) from which the reverse inclusion follows. \end{proof} \section{Birkhoff's Theorem} \begin{definition} Let $\rho$ be a similarity type. An \defin{identity of type} $\rho$ is an ordered pair of terms, written $p \approx q$, from $T_\rho(X_\omega)$. Let $\alg{A}$ be an algebra of type $\rho$. We say that $\alg{A}$ satisfies $p\approx q$ if $p^{\alg{A}} = q^{\alg{A}}$. In this situation, we write $\alg{A} \models p \approx q$. If $\class{K}$ is a class of algebras of type $\rho$, we write $\class{K} \models p \approx q$ if $\forall \alg{A} \in \class{K}$, $\alg{A} \models p \approx q$. Finally, if $\Sigma$ is a set of equations, we write $\class{K} \models \Sigma$ if every member of $\class{K}$ satisfies every member of $\Sigma$. \end{definition} \begin{definition} Let $\class{K}$ be a class of algebras and $\Sigma$ a set of equations, each of similarity type $\rho$. We define $\Id{\class{K}} = \{p \approx q : \class{K} \models p \approx q\}$ and $\Mod{\Sigma} = \{ \alg{A} : \alg{A} \models \Sigma \}$. Classes of the form $\Mod{\Sigma}$ are called \defin{equational classes}, and $\Sigma$ is called an \defin{equational base} or an \defin{axiomatization} of the class. $\Mod{\Sigma}$ is called the class of \defin{models} of $\Sigma$. Dually, a set of identities of the form $\Id{\class{K}}$ is called an \defin{equational theory}. \end{definition} \begin{lemma}[\protect{\cite[Lem.~4.36]{MR2839398}}] \label{lem:4.36} For every class $\class{K}$, each of the classes $\clop{S}(\class{K})$, $\clop{H}(\class{K})$, $\clop{P}(\class{K})$, and $\clop{V}(\class{K})$ satisfies exactly the same identities as does $\class{K}$. \end{lemma} \begin{proof} (exercise) \end{proof} \begin{lemma}[\protect{\cite[Lem.~4.37]{MR2839398}}] \label{lem:4.37} $\class{K} \models p \approx q$ if and only if for every $\alg{A} \in \class{K}$ and every $h\in \Hom{\alg{T}(X_\omega),\alg{A}}$, we have $h(p) = h(q)$. \end{lemma} \begin{proof} First assume that $\class{K} \models p\approx q$. 
Pick $\alg{A}$ and $h$ as in the lemma.
Then $\alg{A} \models p\approx q \implies p^{\alg{A}} = q^{\alg{A}} \implies p^{\alg{A}}(h(x_1), \dots, h(x_n)) = q^{\alg{A}}(h(x_1), \dots, h(x_n))$.
Since $h$ is a homomorphism, we get $h(p^{\alg{T}(X_\omega)}(x_1, \dots, x_n)) = h(q^{\alg{T}(X_\omega)}(x_1, \dots, x_n))$, i.e., $h(p) = h(q)$.

To prove the converse we must take any $\alg{A} \in \class{K}$ and $a_1, \dots, a_n \in A$ and show that $p^{\alg{A}}(a_1, \dots, a_n) = q^{\alg{A}}(a_1, \dots, a_n)$.
Let $h_0 \colon X_\omega \to A$ be a function with $h_0(x_i) = a_i$ for $i\leq n$.
By Theorem~\ref{thm:4.21}, $h_0$ extends to a homomorphism $h$ from $\alg{T}(X_\omega)$ to $\alg{A}$.
By assumption $h(p) = h(q)$.
Since $h(p) = h(p^{\alg{T}(X_\omega)}(x_1, \dots, x_n)) = p^{\alg{A}}(h(x_1), \dots, h(x_n)) = p^{\alg{A}}(a_1,\dots, a_n)$ (and similarly for $q$) the result follows.
\end{proof}

\begin{theorem}[\protect{\cite[Thm.~4.38]{MR2839398}}]
\label{thm:4.38}
Let $\class{K}$ be a class of algebras and $p \approx q$ an equation.
The following are equivalent.
\begin{enumerate}
\item \label{item:1} $\class{K} \models p\approx q$.
\item \label{item:2} $(p,q)$ belongs to the congruence $\lambda_{\class{K}}$ on $\alg{T}(X_\omega)$.
\item \label{item:3} $\alg{F}_{\class{K}}(X_\omega) \models p\approx q$.
\end{enumerate}
\end{theorem}
\begin{proof}
We shall show (\ref{item:1}) $\implies$ (\ref{item:3}) $\implies$ (\ref{item:2}) $\implies$ (\ref{item:1}).
Throughout the proof we write $\alg{F}$ for $\alg{F}_{\class{K}}(X_\omega)$, $\alg{T}$ for $\alg{T}(X_\omega)$ and $\lambda$ for $\lambda_{\class{K}}$.
Recall that $\alg{F} = \alg{T}/\lambda \in \clop{S}\clop{P}(\class{K})$.

From (\ref{item:1}) and Lemma~\ref{lem:4.36} we get $\clop{S}\clop{P}(\class{K}) \models p \approx q$. Thus (\ref{item:3}) holds.

From (\ref{item:3}), $p^{\alg{F}}(\bar{x}_1,\dots, \bar{x}_n) = q^{\alg{F}}(\bar{x}_1,\dots, \bar{x}_n)$ where $\bar{x}_i = x_i/\lambda$.
From the definition of $\alg{F}$, $p^{\alg{T}}(x_1,\dots, x_n) \equiv_\lambda q^{\alg{T}}(x_1,\dots, x_n)$, from which (\ref{item:2}) follows since $p = p^{\alg{T}}(x_1,\dots, x_n)$ and $q = q^{\alg{T}}(x_1,\dots, x_n)$.

Finally assume (\ref{item:2}). We wish to apply Lemma~\ref{lem:4.37}.
Let $\alg{A} \in \class{K}$ and $h \in \Hom{\alg{T},\alg{A}}$.
Then $\alg{T}/\ker h \in \clop{S}(\alg{A}) \subseteq \clop{S}(\class{K})$, so $\ker h \supseteq \lambda$.
Then (\ref{item:2}) implies that $h(p) = h(q)$, hence (\ref{item:1}) holds.
\end{proof}

Theorem~\ref{thm:4.38} tells us that we can determine whether an identity is true in a variety by consulting a particular algebra, namely $\alg{F}(X_\omega)$.
Sometimes it is convenient to work with algebras free on other generating sets besides $X_\omega$.
The following corollary takes care of that for us.

\begin{corollary}[\protect{\cite[Cor.~4.39]{MR2839398}}]
\label{cor:4.39}
Let $\class{K}$ be a class of algebras, $p$ and $q$ $n$-ary terms, $Y$ a set and $y_1, \dots, y_n$ distinct elements of $Y$.
Then $\class{K} \models p \approx q$ if and only if $p^{\alg{F}_{\class{K}}(Y)}(y_1, \dots, y_n) = q^{\alg{F}_{\class{K}}(Y)}(y_1, \dots, y_n)$.
In particular, $\class{K} \models p \approx q$ if and only if $\alg{F}_{\class{K}}(X_n)\models p \approx q$.
\end{corollary}
\begin{proof}
Since $\alg{F}_{\class{K}}(Y)\in \clop{S}\clop{P}(\class{K})$, the left-to-right direction uses the same argument as in Theorem~\ref{thm:4.38}.

So assume that $p^{\alg{F}_{\class{K}}(Y)}(y_1, \dots, y_n) = q^{\alg{F}_{\class{K}}(Y)}(y_1, \dots, y_n)$.
To show that $\class{K} \models p \approx q$, let $\alg{A} \in \class{K}$ and $a_1$, $\dots$, $a_n \in A$.
We must show $p^{\alg{A}}(a_1, \dots, a_n) = q^{\alg{A}}(a_1, \dots, a_n)$. There is a homomorphism $h\colon \alg{F}_{\class{K}}(Y) \to \alg{A}$ such that $h(y_i) = a_i$ for $i \leq n$. Then \begin{align*} p^{\alg{A}}(a_1, \dots, a_n) &= p^{\alg{A}}(h (y_1), \dots, h (y_n)) = h(p^{\alg{F}_{\class{K}}(Y)}(y_1, \dots, y_n))\\ &= h(q^{\alg{F}_{\class{K}}(Y)}(y_1, \dots, y_n)) = q^{\alg{A}}(h(y_1), \dots, h(y_n))\\ &= q^{\alg{A}}(a_1, \dots, a_n). \end{align*} \end{proof} It follows from Lemma~\ref{lem:4.36} that every equational class is a variety. The converse is Birkhoff's Theorem. \begin{theorem}[\protect{\cite[Thm.~4.41]{MR2839398}}] \label{thm:4.41} Every variety is an equational class. \end{theorem} \begin{proof} Let $\var{W}$ be a variety. We must find a set of equations that axiomatizes $\var{W}$. The obvious choice is to use the set of all equations that hold in $\var{W}$. To this end, take $\Sigma = \Id{\var{W}}$. Let $\close{\var{W}} = \Mod{\Sigma}$. Clearly, $\var{W} \subseteq \close{\var{W}}$. We shall prove the reverse inclusion. Let $\alg{A} \in \close{\var{W}}$ and $Y$ a set of cardinality $\max(|A|, |\omega|)$. Choose a surjection $h_0\colon Y \to A$. By Theorem~\ref{thm:4.21}, $h_0$ extends to a (surjective) homomorphism $h \colon \alg{T}(Y) \to \alg{A}$. Furthermore, since $\alg{F}_{\var{W}}(Y) = \alg{T}(Y)/\Theta_{\var{W}}$, there is a surjective homomorphism $g \colon \alg{T}(Y) \to \alg{F}_{\var{W}}$. We claim that $\ker g \subseteq \ker h$. If the claim is true then by Lemma~\ref{ex:1.26.8} there is a map $f\colon \alg{F}_{\var{W}}(Y) \to \alg{A}$ such that $f \compose g = h$. Since $h$ is surjective, so is $f$. Hence $\alg{A} \in \clop{H}(\alg{F}_{\var{W}}(Y)) \subseteq \var{W}$ completing the proof. Let $u,v \in T(Y)$ and assume that $g(u) = g(v)$. Since $\alg{T}(Y)$ is generated by $Y$, by Theorem~\ref{thm:4.21}, there is an integer $n$, terms $p, q \in T(X_n)$, and $y_1$, $\dots$, $y_n \in Y$ such that $u = p^{\alg{T}(Y)}(y_1,\dots, y_n)$ and $v = q^{\alg{T}(Y)}(y_1,\dots, y_n)$, by Theorem~\ref{thm:4.32}. Applying the homomorphism $g$, \[ p^{\alg{F}_{\var{W}}(Y)}(y_1,\dots, y_n) = g(u) = g(v) = q^{\alg{F}_{\var{W}}(Y)}(y_1,\dots, y_n). \] Then by Corollary~\ref{cor:4.39}, $\var{W} \models p \approx q$, hence $(p \approx q) \in \Sigma$. Since $\alg{A} \in \close{\var{W}} = \Mod{\Sigma}$, we get $\alg{A} \models p \approx q$. Therefore, \[ h(u) = p^{\alg{A}}(h_0(y_1), \dots, h_0(y_n)) = q^{\alg{A}}(h_0(y_1), \dots, h_0(y_n)) = h(v), \] as desired. \end{proof} \bibliographystyle{alphaurl} \bibliography{inputs/refs} \end{document}
{ "alphanum_fraction": 0.6219888069, "avg_line_length": 43.5653710247, "ext": "tex", "hexsha": "6ee7cab4235eeec8a23db941e37182ded2f4b9a8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ab9cbddbb5bdf1eeac4b0d5994bd6cad2a3665d4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "UniversalAlgebra/lean-ualib", "max_forks_repo_path": "doc/birkhoff_hsp.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "ab9cbddbb5bdf1eeac4b0d5994bd6cad2a3665d4", "max_issues_repo_issues_event_max_datetime": "2018-07-27T19:09:49.000Z", "max_issues_repo_issues_event_min_datetime": "2018-07-18T04:43:29.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "UniversalAlgebra/lean-ualib", "max_issues_repo_path": "doc/birkhoff_hsp.tex", "max_line_length": 183, "max_stars_count": 5, "max_stars_repo_head_hexsha": "ab9cbddbb5bdf1eeac4b0d5994bd6cad2a3665d4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "UniversalAlgebra/lean-ualib", "max_stars_repo_path": "doc/birkhoff_hsp.tex", "max_stars_repo_stars_event_max_datetime": "2018-08-06T04:37:22.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-27T19:06:45.000Z", "num_tokens": 9555, "size": 24658 }
\section{Standard Module \sectcode{rfc822}}
\stmodindex{rfc822}

\renewcommand{\indexsubitem}{(in module rfc822)}

This module defines a class, \code{Message}, which represents a
collection of ``email headers'' as defined by the Internet standard RFC
822. It is used in various contexts, usually to read such headers from
a file.

A \code{Message} instance is instantiated with an open file object as a
parameter. Instantiation reads headers from the file up to a blank
line and stores them in the instance; after instantiation, the file is
positioned directly after the blank line that terminates the headers.

Input lines as read from the file may either be terminated by CR-LF or
by a single linefeed; a terminating CR-LF is replaced by a single
linefeed before the line is stored.

All header matching is done independently of upper or lower case;
e.g. \code{m['From']}, \code{m['from']} and \code{m['FROM']} all yield
the same result.

\subsection{Message Objects}

A \code{Message} instance has the following methods:

\begin{funcdesc}{rewindbody}{}
Seek to the start of the message body. This only works if the file
object is seekable.
\end{funcdesc}

\begin{funcdesc}{getallmatchingheaders}{name}
Return a list of lines consisting of all headers matching \var{name},
if any. Each physical line, whether it is a continuation line or not,
is a separate list item. Return the empty list if no header matches
\var{name}.
\end{funcdesc}

\begin{funcdesc}{getfirstmatchingheader}{name}
Return a list of lines comprising the first header matching
\var{name}, and its continuation line(s), if any. Return \code{None}
if there is no header matching \var{name}.
\end{funcdesc}

\begin{funcdesc}{getrawheader}{name}
Return a single string consisting of the text after the colon in the
first header matching \var{name}. This includes leading whitespace,
the trailing linefeed, and internal linefeeds and whitespace if any
continuation line(s) were present. Return \code{None} if there is no
header matching \var{name}.
\end{funcdesc}

\begin{funcdesc}{getheader}{name}
Like \code{getrawheader(\var{name})}, but strip leading and trailing
whitespace (but not internal whitespace).
\end{funcdesc}

\begin{funcdesc}{getaddr}{name}
Return a pair (full name, email address) parsed from the string
returned by \code{getheader(\var{name})}. If no header matching
\var{name} exists, return \code{None, None}; otherwise both the full
name and the address are (possibly empty) strings.

Example: If \code{m}'s first \code{From} header contains the string\\
\code{'[email protected] (Jack Jansen)'}, then \code{m.getaddr('From')}
will yield the pair \code{('Jack Jansen', '[email protected]')}.
If the header contained \code{'Jack Jansen <[email protected]>'}
instead, it would yield the exact same result.
\end{funcdesc}

\begin{funcdesc}{getaddrlist}{name}
This is similar to \code{getaddr(\var{name})}, but parses a header
containing a list of email addresses (e.g. a \code{To} header) and
returns a list of (full name, email address) pairs (even if there was
only one address in the header). If there is no header matching
\var{name}, return an empty list.

XXX The current version of this function is not really correct. It
yields bogus results if a full name contains a comma.
\end{funcdesc}

\begin{funcdesc}{getdate}{name}
Retrieve a header using \code{getheader} and parse it into a 9-tuple
compatible with \code{time.mktime()}. If there is no header matching
\var{name}, or it is unparsable, return \code{None}.

Date parsing appears to be a black art, and not all mailers adhere to the standard. While it has been tested and found correct on a large collection of email from many sources, it is still possible that this function may occasionally yield an incorrect result. \end{funcdesc} \code{Message} instances also support a read-only mapping interface. In particular: \code{m[name]} is the same as \code{m.getheader(name)}; and \code{len(m)}, \code{m.has_key(name)}, \code{m.keys()}, \code{m.values()} and \code{m.items()} act as expected (and consistently). Finally, \code{Message} instances have two public instance variables: \begin{datadesc}{headers} A list containing the entire set of header lines, in the order in which they were read. Each line contains a trailing newline. The blank line terminating the headers is not contained in the list. \end{datadesc} \begin{datadesc}{fp} The file object passed at instantiation time. \end{datadesc}
{ "alphanum_fraction": 0.7629746124, "avg_line_length": 39.389380531, "ext": "tex", "hexsha": "e2d182e89b4ef99a7b7d85359e8daefeb2e32d9e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_forks_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_forks_repo_name": "AtjonTV/Python-1.4", "max_forks_repo_path": "Doc/librfc822.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_issues_repo_name": "AtjonTV/Python-1.4", "max_issues_repo_path": "Doc/librfc822.tex", "max_line_length": 70, "max_stars_count": null, "max_stars_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_stars_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_stars_repo_name": "AtjonTV/Python-1.4", "max_stars_repo_path": "Doc/librfc822.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1156, "size": 4451 }
\chapter{Fenrir} \begin{enumerate} \item Reflect both girls by toggling gambits, then take them out of the party. \item Battle speed up, bring in \balthier, \ashe, \penelo \end{enumerate} \begin{gambit} \begin{itemize} \ashegambit{Bio} \penelogambit{Bio} \end{itemize} \end{gambit} \begin{enumerate} \item \leader{\balthier} \item Immobilize on Reddas, Reflect on the Girls. \ashe\ Decoy \balthier, gambits on \item Put the cursor on sleep \end{enumerate} \begin{battle}{Fenrir} \begin{itemize} \balthierf Run backwards diagonal, spam Sleep \item Keep him away from Reddas \balthierf Traveller \balthierf Attack \balthier\ until he's below 120 HP \end{itemize} \end{battle}
{ "alphanum_fraction": 0.7274011299, "avg_line_length": 27.2307692308, "ext": "tex", "hexsha": "65a4463e559fd7aeecbd074b6d5702bc2358d86d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "da410b753b25531eba75f3af36eaf68b9251f72b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "nightbox69/Final-Fantasy-Speedruns", "max_forks_repo_path": "Final Fantasy XII/Sections/fenrir.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "da410b753b25531eba75f3af36eaf68b9251f72b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "nightbox69/Final-Fantasy-Speedruns", "max_issues_repo_path": "Final Fantasy XII/Sections/fenrir.tex", "max_line_length": 85, "max_stars_count": null, "max_stars_repo_head_hexsha": "da410b753b25531eba75f3af36eaf68b9251f72b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "nightbox69/Final-Fantasy-Speedruns", "max_stars_repo_path": "Final Fantasy XII/Sections/fenrir.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 248, "size": 708 }
\chapter{Remote debugging} \label{remote}
%$ New in 1.1:
With XD, you may debug programs running on another computer to which you can connect over a network. Table \ref{remote:protocols} contains the list of supported network protocols. \begin{table}[htbp] \begin{center} \begin{tabular}{|l|l|l|} \hline \bf Code & \bf Protocol & \bf Remote system identified by \\ \hline \tt TCP & TCP/IP & IP address \\ \hline \end{tabular} \caption{Supported Protocols} \label{remote:protocols} \end{center} \end{table} \section{Preparing for remote debugging} \ifwinnt Copy the following files from the {\tt BIN} subdirectory of your XDS installation to an empty directory on the target system: \begin{verbatim} XD_SRV.EXE XDS24.DLL XD_NB09.DLL XD_DITLS.DLL XD_T_TCP.DLL XD_NB04.DLL XD_UTL.DLL \end{verbatim} \else\ifosii \fi \fi \section{Starting the debug server} To launch the debug server on the target machine, issue the following command: \verb' XD_SRV /R=<protocol code>' Example: \verb' XD_SRV /R=TCP' \section{Starting XD in remote mode} To start XD in remote mode, use the {\tt /R} command line switch: \verb' XD /R=<protocol code>,<remote system id>' For instance, if you want XD to connect via TCP/IP to the debug server running on a machine whose IP address is \verb'server.mycompany.com', issue the following command: \verb' XD /R=TCP,server.mycompany.com'
{ "alphanum_fraction": 0.6803713528, "avg_line_length": 23.9365079365, "ext": "tex", "hexsha": "ef3ad03583e50bf63cf6926c08a5cb80a60f0894", "lang": "TeX", "max_forks_count": 20, "max_forks_repo_forks_event_max_datetime": "2021-10-02T19:46:42.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-10T18:09:16.000Z", "max_forks_repo_head_hexsha": "cfd20e209193c9cfcee94ad2ca30d8c32ead48c9", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "undecidedzogvisvitalispotent8stars360/xds", "max_forks_repo_path": "Sources/Doc/XD/src/remote.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "cfd20e209193c9cfcee94ad2ca30d8c32ead48c9", "max_issues_repo_issues_event_max_datetime": "2021-07-30T07:17:50.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-10T16:06:48.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "undecidedzogvisvitalispotent8stars360/xds", "max_issues_repo_path": "Sources/Doc/XD/src/remote.tex", "max_line_length": 82, "max_stars_count": 53, "max_stars_repo_head_hexsha": "b4a32b9c9c91fe513fa5ff78ec87bb44102a3b72", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "zanud/xds-2.60", "max_stars_repo_path": "misc/Doc/doc/XD/remote.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-28T18:56:00.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-10T18:19:44.000Z", "num_tokens": 411, "size": 1508 }
\subsection{Scan Brats18\_TCIA08\_242\_1 layer 2} In this section we discuss the results when applying Hausdorff Distance Masks to the second extracted layer from the scan "Brats18\_TCIA08\_242\_1". \subsubsection{Results T1} \begin{figure}[H] \centering \begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/21.png} \caption{T1 modality slice} \end{subfigure}\hspace{1cm}% \begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/20.png} \caption{Tumor ground truth} \end{subfigure} \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/23.png} \caption{Regions where the applied masks reduce the accuracy of the segmentation} \end{subfigure}\hspace{1cm}% \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/24.png} \caption{Regions where the applied masks increase the accuracy of the segmentation} \end{subfigure} \caption{Modality T1 analyzed with Hausdorff Distance Masks. The image (c) shows regions which match the tumor, but also shows regions outside of the tumor which influence the segmentation. Image (d) shows regions that increase the accuracy when occluded in almost all areas of the brain.} \label{brats_tcia08_t1} \end{figure} \subsubsection{Results T1 contrast enhanced} \begin{figure}[H] \centering \begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/26.png} \caption{T1 contrast enhanced modality slice} \end{subfigure}\hspace{1cm}% \begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/25.png} \caption{Tumor ground truth} \end{subfigure} \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/28.png} \caption{Regions where the applied masks reduce the accuracy of the segmentation} \end{subfigure}\hspace{1cm}% \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/29.png} \caption{Regions where the applied masks increase the accuracy of the segmentation} \end{subfigure} \caption{Modality T1 contrast enhanced analyzed with Hausdorff Distance Masks. 
The important regions which decrease the accuracy of the segmentation are located in the tumor region, which is clearly visible in this contrast-enhanced scan modality.} \label{brats_tcia08_t1ce} \end{figure} \subsubsection{Results T2} \begin{figure}[H] \centering \begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/31.png} \caption{T2 modality slice} \end{subfigure}\hspace{1cm}%
\begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/30.png} \caption{Tumor ground truth} \end{subfigure} \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/33.png} \caption{Regions where the applied masks reduce the accuracy of the segmentation} \end{subfigure}\hspace{1cm}%
\begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/34.png} \caption{Regions where the applied masks increase the accuracy of the segmentation} \end{subfigure} \caption{Modality T2 analyzed with Hausdorff Distance Masks. The regions decreasing the accuracy when occluded (image (c)) are in the tumor center, which is visible in this modality. Regions increasing the accuracy when occluded (d) are around the tumor borders.} \label{brats_tcia08_t2} \end{figure} \subsubsection{Results FLAIR} \begin{figure}[H] \centering \begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/36.png} \caption{FLAIR modality slice} \end{subfigure}\hspace{1cm}%
\begin{subfigure}[t]{.4\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/35.png} \caption{Tumor ground truth} \end{subfigure} \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/38.png} \caption{Regions where the applied masks reduce the accuracy of the segmentation} \end{subfigure}\hspace{1cm}%
\begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/b_Brats18_TCIA08_242_1_L2/39.png} \caption{Regions where the applied masks increase the accuracy of the segmentation} \end{subfigure} \caption{Modality FLAIR analyzed with Hausdorff Distance Masks. The region marked as decreasing the accuracy when occluded (image (c)) is quite large, as is expected for the FLAIR modality, which shows a large part of the tumor.} \label{brats_tcia08_flair} \end{figure} \subsubsection{Discussion} Figures \ref{brats_tcia08_t1}, \ref{brats_tcia08_t1ce}, \ref{brats_tcia08_t2} and \ref{brats_tcia08_flair} show HDM results for all four modalities. The generated visualizations for regions decreasing the accuracy look very similar, mostly matching the parts of the tumor that are visible on the corresponding modality. The T1 modality seems to have the smallest impact on the generated segmentation: its maximal deviation from the baseline distance is only 0.0025, compared to 0.005 for T1 contrast enhanced and T2. FLAIR is highest, with a deviation of 0.006.
\captionsetup[figure]{font=Large,labelfont=Large} \begin{figure}[H] \centering \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/book/0.png} \caption{\Large{MRI Scan T1ce modality}} \end{subfigure}\hspace{1cm}% \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/book/1.png} \caption{\Large{Tumor segment}} \end{subfigure} \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/book/2.png} \caption{\Large{Important parts of the T1ce modality generated with the HDM method}} \end{subfigure}\hspace{1cm}% \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\linewidth]{chapters/06_hdm/book/3.png} \caption{\Large{Important parts of the FLAIR modality generated with the HDM method}} \end{subfigure} \end{figure}
{ "alphanum_fraction": 0.7250940504, "avg_line_length": 48.1677852349, "ext": "tex", "hexsha": "f9e863c390b11edf56bcaf88f03a141ca38ec202", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a94ecd7cff9f00ecd23ecee319076b78bef79a8e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "andef4/thesis-doc", "max_forks_repo_path": "chapters/06_hdm/06b_brats.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a94ecd7cff9f00ecd23ecee319076b78bef79a8e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "andef4/thesis-doc", "max_issues_repo_path": "chapters/06_hdm/06b_brats.tex", "max_line_length": 565, "max_stars_count": null, "max_stars_repo_head_hexsha": "a94ecd7cff9f00ecd23ecee319076b78bef79a8e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "andef4/thesis-doc", "max_stars_repo_path": "chapters/06_hdm/06b_brats.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2088, "size": 7177 }
\chapter{Method}\label{cha:method} This chapter describes the implementation and application of the theory in more detail. As such, it will build upon the literature in order to more precisely describe how the project was implemented. The chapter will also give a description of how the evaluation of the different algorithms was performed. \section{Implementation} The full source code of the project can be found at \cite{source-code}. The implementation was done in C++ with OpenGL and CUDA. CUDA handled the voxelization, and OpenGL was used to render its results. The project was implemented on Linux, but can of course be ported to other platforms. The implementation details which were not covered in the literature will be explained in the following sections. \subsection{CUDA-OpenGL Interoperability} During development, some form of visual feedback was required in order to verify and debug the results of the voxelization. As such, a CUDA-OpenGL interoperation had to be implemented. This meant creating a 3D texture in OpenGL and binding it to CUDA in order to write to it. A simplified version of the source code for this can be seen in \appref{app:cuda-opengl-inter}, which is adapted from~\cite{cuda-opengl-Interoperability}. This method links the memory of the 3D texture to a CUDA array, meaning no data is duplicated. This is required, as texture sizes of up to 8 GB were used. \vfill \subsection{Floating-Point Voxelization} The first implementation detail that needs to be explained is how the scanline direction, $d_{sl}$, as shown in \figref{fig:scanline-distance}, was calculated. Let us call the three vertices of a triangle $v_1$, $v_2$ and $v_3$, sorted along the most dominant axis (assumed to be the z-axis). The scanline direction was calculated in two different ways, depending on how the vertices were positioned. First, if all vertices were in the same z-slice, any direction would suffice. In the implementation, the edge between $v_1$ and $v_3$ was chosen. If the vertices were not in the same z-slice, the gradient of the triangle with respect to $z$ was used. This was calculated by first solving the plane equation for $z$: $$ D = n_x x + n_y y + n_z z \rightarrow z = \frac{D - n_x x - n_y y}{n_z},$$ where $n$ is the normal of the triangle and $D$ describes the position of the plane. The gradient was then given by the partial derivatives of this equation with respect to $x$ and $y$: $$d = \left(\frac{-n_x}{n_z}, \frac{-n_y}{n_z}\right),$$ which was the resulting scanline direction. In both cases, the direction also needed to be normalized, meaning the final direction was $$d_{sl} = \frac{d}{|d|}.$$ In each iteration of the algorithm, two scanline endpoints were calculated by reverse projecting the scanline direction onto the triangle's edges. The scanline direction was scaled by $l_i$, which was increased by $l$ each iteration. Recall from the theory that $l = |d_{sl,x}| + |d_{sl,y}|$. Using the reverse projection, the endpoints were calculated as \begin{equation} v_p = v_1 + d_e\frac{l_i}{d_e \cdot d_{sl}}, \label{eq:inverse-projection} \end{equation} where $d_e$ is the normalized direction of the edge it projects to. Note that this only works for the edges from $v_1$ to $v_2$ and from $v_1$ to $v_3$, so a special case was needed for the edge from $v_2$ to $v_3$. This was solved by recalculating $v_2$ such that \begin{equation*} \left\{ \begin{aligned} 0 &= (v^\prime_2 - v_1) \cdot d_{sl} \\ v^\prime_2 - v_2 &= (v_3 - v_2)t \end{aligned} \right.
\end{equation*} That is, $v^\prime_2 - v_1$ is perpendicular to $d_{sl}$ and $v^\prime_2 - v_2$ is parallel to $v_3 - v_2$. An example of this can be seen in \figref{fig:triangle-v2}. Solving the equation resulted in $$ v^\prime_2 = v_2 + (v_3 - v_2)\frac{(v_1 - v_2) \cdot d_{sl}}{(v_3 - v_2) \cdot d_{sl}}. $$ This was used instead of $v_1$ in \equref{eq:inverse-projection} when projecting to the edge between $v_2$ and $v_3$. \input{fig/triangle-v2.fig} \subsection{Integer Voxelization}\label{ss:meth-integer-vox} In the integer version, there were two edge cases which created holes in the voxelization. In order to solve these problems, first recall the values $C_{lower}$, $C_{upper}$ and $C_k$ from the theory in \secref{ss:integer-optimal-scanline}. The first problem occurred when a new slice was started; an example is shown to the left in \figref{fig:integer-miss}. More specifically, it occurred when the next voxel was behind the current scanline, meaning it should have been included in it. This was resolved by checking if the value of $C_1$ was on the other side of $C_0$ relative to $C_{end}$, that is, if $C_0 < C_1 < C_{end}$ or $C_{end} < C_1 < C_0$. Here $C_{end}$ is defined as the $C_k$ of the last voxel on the triangle's edge. To test if the two values were on different sides, the following condition was evaluated: $$ (C_1 - C_0) * (C_{end} - C_0) \leq 0. $$ If this condition was met, either $C_{upper}$ or $C_{lower}$ was set to $C_0$, depending on whether $C_{end}$ was greater or less than $C_0$, respectively. This was done in the same step as when the next endpoint for the scanline was calculated. The other case where holes occurred was when $(X_1, Y_1)$ for the two edges were on different sides of the scanline. This sometimes happened for triangles which had a very acute angle at $v_1$. An example of this can be seen to the right in \figref{fig:integer-miss}. This problem was detected similarly to the previous one, by $$ (C^a_1 - C_0) * (C^b_1 - C_0) \leq 0,$$ where $C^a_1$ and $C^b_1$ are $C_1$ for the two edges. When this was the case, $\Delta X$ and $\Delta Y$ were set similarly to how the scanline direction was chosen for the floating-point version. That is, the gradient of the triangle was calculated, but the division by $n_z$ was removed to avoid floating-point operations. However, this resulted in a scanline direction and not a difference between scanline endpoints, so it was also rotated by 90$\degree$, as the scanline should always be perpendicular to the scanline direction. The result was $\Delta X = N_y$ and $\Delta Y = -N_x$, where $N$ is the unnormalized normal of the triangle. These values were then used when determining the next scanline endpoints. \input{fig/integer-voxel-miss.fig} \subsection{Bresenham Algorithm}\label{ss:bresenham-problems} Worth noting is that Bresenham only works when $\Delta X$ is positive and greater than $\Delta Y$ and $\Delta Z$. To allow for negative directions, three changes were required. First, whenever $x$, $y$ or $z$ increased by one, they were instead decreased by one if the corresponding difference was negative. Another change was to set $\Delta X$, $\Delta Y$ and $\Delta Z$ to their absolute values. Finally, the iteration was terminated when the x-position of the voxel equaled $X_1$, where $X_1$ is the last x-position on the line. This would, however, miss the last voxel on the line, which was resolved by setting the last voxel after the loop.
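As a concrete illustration of these three changes, a simplified Python sketch is given below (this is not the implementation listed in \appref{app:3dbresen}; names are illustrative, and the axis-swap and 6-connectivity modifications are discussed next):
\begin{verbatim}
def bresenham3d(p0, p1, set_voxel):
    # Assumes |dx| >= |dy| and |dx| >= |dz|, i.e. x is the driving axis.
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)   # absolute differences
    sx = 1 if x1 >= x else -1                            # per-axis step signs
    sy = 1 if y1 >= y else -1
    sz = 1 if z1 >= z else -1
    err_y = 2 * dy - dx
    err_z = 2 * dz - dx
    while x != x1:                    # stops one voxel short of the end ...
        set_voxel(x, y, z)
        if err_y > 0:
            y += sy
            err_y -= 2 * dx
        if err_z > 0:
            z += sz
            err_z -= 2 * dx
        err_y += 2 * dy
        err_z += 2 * dz
        x += sx
    set_voxel(x1, y1, z1)             # ... so the last voxel is set after the loop
\end{verbatim}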
To solve the problem where $\Delta X$ has to be greater than the other differences, a variable which keeps track of this axis was introduced. This variable is called $A$ and was initialized to \begin{equation*} \left\{ \begin{aligned} A = 0 \hspace{1cm} &, max(\Delta X, \Delta Y, \Delta Z) = \Delta X \\ A = 1 \hspace{1cm} &, max(\Delta X, \Delta Y, \Delta Z) = \Delta Y \\ A = 2 \hspace{1cm} &, max(\Delta X, \Delta Y, \Delta Z) = \Delta Z \\ \end{aligned} \right. . \end{equation*} Then, whenever $\Delta X$ or $x$ was needed, the corresponding array was instead indexed using $A$. For example, $\Delta[A]$ would give the greatest difference. The indices of the other two axes were calculated by increasing $A$ by one and two, respectively, and then taking them modulo 3. Another thing missing from the theory is that Bresenham is not 6-connected. This can be seen in \figref{fig:bresenham-8}. As the optimal scanline requires 6-connected lines in order to fill the interior, the Bresenham algorithm had to be modified to support this. The first intuitive way to do this would be to add a voxel at $(x,y,z)$ whenever $y$ or $z$ increases in the algorithm. This, however, caused the voxelization to not follow the line correctly, as seen to the left in \figref{fig:bresenham-8}. This was solved by checking whether the error in $y$ was greater than $\Delta Y$; if so, the top voxel was voxelized instead of the right one. The result can be seen to the right in \figref{fig:bresenham-8}. The same check was performed on the z-axis. This modification of the algorithm in \appref{app:3dbresen} can be seen in \appref{app:3dbresen-4}. The final problem with the Bresenham algorithm was that the integer version of the scanline algorithm required being able to get the next voxel on the line. The problem was that the 6-connected Bresenham could generate up to three voxels each iteration. One way to solve this would be to run one iteration of Bresenham and store all voxels generated in that iteration in a list; then, the next time a new voxel is needed, an unused voxel is returned from the list. The way it was solved in this thesis, however, was to keep track of where in the iteration the last voxel was returned. Then, the next time a voxel was needed, the iteration resumed where it last left off. \input{fig/bresenham-8.fig} \input{fig/bresenham-4.fig} \subsection{Voxel Rendering}\label{ss:voxel_rendering} In order to test the results of the optimal scanline, some form of rendering was needed. This thesis only considers rendering which results in cubes in a uniform grid. One approach would be to iterate through the 3D texture on the CPU and render a cube for every existing voxel. This would be inefficient for several reasons. First, the data created on the GPU would need to be copied over to the CPU, which creates significant overhead, especially since the data could reach 8 GB in size at the highest resolution. Secondly, rendering all voxels (even occluded ones) would slow down the rendering significantly. Another way would be to write each voxel coordinate to a list on the GPU when voxelizing the model. The data could then be used in a geometry shader, where each coordinate is transformed into a cube. This avoids the copying to the CPU and requires less data to store the voxelization, since only occupied voxels are stored. It can, however, render the same voxel multiple times if the voxelization does not keep track of which voxels are already occupied.
Therefore, rendering the scene using raymarching was deemed both easier and potentially faster. No data had to be transferred between the CPU and GPU, and only the visible voxels were rendered. The raymarching used the RLV algorithm to traverse the voxels. In this case, however, the line was not terminated when it reached the endpoint of the line segment, since no such endpoint exists. Instead, it terminated when the line intersected an existing voxel or when it had exited the voxel grid. \section{Evaluation} When performing the comparisons of the different line algorithms, several variables were considered in the experiments. Firstly, multiple models with varying levels of detail were tested. The models used were the Blender monkey (also called Suzanne)~\cite{blender-monkey}, the Stanford bunny~\cite{stanford-rep} and the Stanford dragon~\cite{stanford-rep}. These can be seen in \figref{fig:models}. The triangle counts of the models were 3936, 69451 and 871414, respectively; Suzanne was subdivided once in Blender~\cite{blender} in order to reach this triangle count. Secondly, the resolution of the voxel grid was varied for each model. The resolution varied between 128 and 2048, increasing by powers of 2. Due to GPU memory limits, resolutions greater than 2048 were not possible. \input{fig/models.fig} \subsection{Performance Analysis} To profile the performance of the algorithms, CUDA events were used~\cite{cuda-profiling}. Two timestamp events were created: one was recorded before the kernel ran, using the CUDA function \texttt{cuEventRecord}, and the other was recorded after the kernel ran. These timestamps were handled entirely by the GPU, meaning the CPU did not wait for them to be recorded. The function \texttt{cuEventSynchronize} was therefore used to wait for the events to complete. The time between two timestamps was then determined by the CUDA call \texttt{cuEventElapsedTime}. \newpage \subsection{Error Analysis} The error was calculated for each of the resolutions, models and combinations of line voxelizations. This means that RLV was compared to both ILV and Bresenham, and ILV was also compared to Bresenham. The comparison was done by first voxelizing with the first algorithm, using a voxel value of 1. Then the voxelization was run again with the other algorithm (into the same 3D texture). This time, the value written depended on the current voxel value at that position: if the value was 1 or 2, the value 2 was written; otherwise, the value 3 was written. The result was a voxelization where the intersection had value 2, the voxels only produced by the first algorithm had value 1, and the voxels only produced by the second had value 3. To calculate the error, a simple CUDA kernel was created which iterated through the whole voxel space and summed up the number of occurrences of each value. With these counts, the error could be calculated using the formulas in \secref{s:error}.
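As an illustration of this counting step, an equivalent computation on the CPU could look as follows (a Python/NumPy sketch standing in for the CUDA reduction kernel described above; names are illustrative):
\begin{verbatim}
import numpy as np

def overlap_counts(vox):
    # vox: 3D array with 1 = only first algorithm, 2 = intersection,
    #      3 = only second algorithm, 0 = empty
    only_first  = int(np.count_nonzero(vox == 1))
    both        = int(np.count_nonzero(vox == 2))
    only_second = int(np.count_nonzero(vox == 3))
    return only_first, both, only_second
\end{verbatim}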
{ "alphanum_fraction": 0.768, "avg_line_length": 65.8536585366, "ext": "tex", "hexsha": "27a60dd48c06d195fa40873964360398cc054f4f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4e4cb94b2a4ee261b2b9974aa4b20f6643eb6595", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Thraix/MasterThesis", "max_forks_repo_path": "Thesis/Latex/method.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4e4cb94b2a4ee261b2b9974aa4b20f6643eb6595", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Thraix/MasterThesis", "max_issues_repo_path": "Thesis/Latex/method.tex", "max_line_length": 182, "max_stars_count": 1, "max_stars_repo_head_hexsha": "4e4cb94b2a4ee261b2b9974aa4b20f6643eb6595", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Thraix/MasterThesis", "max_stars_repo_path": "Thesis/Latex/method.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-16T10:54:38.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-16T10:54:38.000Z", "num_tokens": 3439, "size": 13500 }
\chapter{Introduction}
This is the introduction. It might have some stuff at the beginning.
\lipsum[1-1]
% \input{chapters/Introduction/intro_section}
% \input{chapters/Introduction/second_intro_section}
\snip{intro_section}
\snip{second_intro_section}
{ "alphanum_fraction": 0.8046875, "avg_line_length": 21.3333333333, "ext": "tex", "hexsha": "8a14c6356e7f4ec1d6af081f1115bd0cc680ddb2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d2d154405a907baee31b886015332ab06d1a269e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "probablytom/thesis_template", "max_forks_repo_path": "chapters/Introduction/chap.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d2d154405a907baee31b886015332ab06d1a269e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "probablytom/thesis_template", "max_issues_repo_path": "chapters/Introduction/chap.tex", "max_line_length": 52, "max_stars_count": null, "max_stars_repo_head_hexsha": "d2d154405a907baee31b886015332ab06d1a269e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "probablytom/thesis_template", "max_stars_repo_path": "chapters/Introduction/chap.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 67, "size": 256 }
\chapter{Dynamic Programming} So far we have considered two major strategies in algorithm design: greedy, in which we repeatedly take the locally optimal choice; and divide and conquer, in which we divide the greater problem into similar subproblems and recurse. There may be problems for which these strategies are suboptimal, in which case we have a third strategy which may be of use: dynamic programming. This strategy divides the problem into subproblems, but rather than recursing on each subproblem individually, we identify easy-to-compute base cases from which we can build towards the solution to the larger problem, storing the results of previous computations to use in later computations. This technique differs from a similar technique called ``memoization'', which computes recursively top-down, whereas dynamic programming begins at the base case(s) and works up. Dynamic programming has these important steps: \begin{enumerate} \item Determine the structure of an optimal solution \item Set up recurrences for the optimal solution \item Solve the recurrences bottom-up \item Construct the optimal solution \end{enumerate} The final step is most often neglected, since we can typically add some small amount of information to the process so that we can trivially reconstruct the optimal solution. \section{Matrix Chain Multiplication} We assume knowledge of matrices and how they are multiplied. The problem is to find the way to multiply $n$ matrices $A_1,...,A_n$ with dimensions $p_0,...,p_n$ (i.e. $A_i$ has dimensions $p_{i-1} \cross p_i$) using the least number of calculations. Notice that multiplying $A_iA_{i+1}$ takes $p_{i-1}p_ip_{i+1}$ calculations. We start by determining the structure of the optimal solution. Observe that the optimal solution will necessarily involve splitting $A_1A_2...A_n$ into two subproblems at some optimally chosen $A_kA_{k+1}$, so we multiply $A_1...A_k$, then $A_{k+1}...A_n$, and then multiply the results together. If we define the function $m(i,j)$ to be the minimum cost of multiplying $A_i...A_j$, then the cost of the optimal solution is $m(1,n) = m(1,k) + m(k+1,n) + p_0p_kp_n$. We then define the recurrence $m(i,i) = 0$ and $m(i,j) = min \{ m(i,k) + m(k+1,j) + p_{i-1}p_kp_j \}$ for all $k$ such that $i \leq k < j$. We then compute all $m(i,i)$ for $1 \leq i \leq n$, then all $m(i,i+1)$, $m(i,i+2)$, ... until we have calculated $m(1,n)$. The time complexity of this solution is \[ \summ{l=2}{n}\summ{i=1}{n-l+1}\summ{k=i}{i+l-2}\BigOh{1} = \BigOh{n^3} \] \section{Longest Common Subsequence} For a string $S = (s_1,s_2,\dots,s_n)$, a subsequence of $S$ is a string $S'$ such that every element of $S'$ is in $S$, and if an element $a \in S'$ comes before $b \in S'$, this also holds for $S$. The longest common subsequence of two strings $A=(a_1,a_2,\dots,a_n)$ and $B = (b_1,b_2,\dots,b_m)$ is the longest string $C = (c_1,c_2,\dots,c_k)$ such that $C$ is a subsequence of $A$ and of $B$. Let the substring $(s_i,s_{i+1},\dots,s_j)$ be denoted as $S(i,j)$. Further, let $LCS(i,j)$ be the length of the LCS of $A(1,i)$ and $B(1,j)$. Assume that we have computed $C(1,k)$, and that it is unique. Then if we remove $c_k$ and everything after it from $B$ and $A$, $C(1,k-1)$ is now the LCS of our modified $A$ and $B$. In general $LCS(i,j) =$ \begin{math} \left\{ \begin{array}{l l} 0 & \text{if $i=0$ or $j=0$}\\ LCS(i-1,j-1)+1 & \text{if $a_i=b_j$}\\ max \{ LCS(i-1,j), LCS(i,j-1) \} & \text{otherwise}\\ \end{array} \right.
\end{math} To compute this, we compute $LCS(0,0)$, $LCS(0,1)$,\dots,$LCS(0,m)$; then $LCS(1,0)$,\dots,$LCS(1,m)$; \dots; $LCS(n,0)$,\dots,$LCS(n,m)$. At that point we have our solution, which is $LCS(n,m)$. \section{Optimal Triangulation of a Convex Polygon} % page 91 in John's notes
First some definitions. A \emph{polygon} is a list of vertices $(v_1,...,v_n)$ such that for each $v_i$ with $i < n$ there exists an edge $(v_i,v_{i+1})$, and there also exists an edge $(v_1,v_n)$. A polygon is said to be \emph{convex} if any line passing through the polygon crosses the edges of the polygon at most twice. A \emph{chord} is an edge between two non-adjacent vertices in a polygon. A \emph{triangulation} is a set of chords which divide a polygon into triangles. The problem is to build a triangulation of a given convex polygon which minimizes total edge length. We define the function $w(a,b,c)$ to be the weight of the triangle $(v_a,v_b,v_c)$, which in this case will be the total length of the edges $(v_a,v_b)$, $(v_b,v_c)$, and $(v_c,v_a)$. We also define the function $t(a,b)$ to be the cost of the optimal triangulation of the points $(v_a,...,v_b)$. We would like to compute $t(1,n)$. We start by defining the structure of an optimal solution. Notice that the optimal triangulation contains the triangle $(v_1,v_k,v_n)$ for some $k$. The cost of this triangulation is $t(1,k) + t(k,n) + w(1,k,n)$. We then define the recurrence $t(i,i+1) = 0$ for all $i$, and $t(i,j) = min \{ t(i,k) + t(k,j) + w(i,k,j) \}$ for all $k$ such that $i < k < j$. We then compute all $t(i,i+1)$, then all $t(i,i+2)$, $t(i,i+3)$, ... until we have calculated $t(1,n)$. \hypertarget{sec:floyd_warshall}{\section{All-Pairs Shortest Path (Floyd-Warshall)}} Given a graph $G=(V,E)$ where each edge $e \in E$ has an associated weight $w(e)$, compute the shortest path between $u$ and $v$ for all $u,v \in V$. We start by defining the structure of an optimal solution. Let us decompose the problem into finding the optimal path between $v_i$ and $v_j$. All of the vertices on this path are contained in $\{ v_1,...,v_k \}$ except perhaps $v_i,v_j$. Notice that if $k=0$, then the distance is the weight of the edge $(v_i,v_j)$ if such an edge exists, and infinite otherwise. Otherwise, we take the minimum of either including or excluding $v_k$ on the path. This gives the recurrence: \begin{math} d^k_{i,j} = \left\{ \begin{array}{l l} weight((v_i,v_j)) & \text{if $k=0$ and} \\ & (v_i,v_j) \in E \\ \infty & \text{if $k=0$ and} \\ & (v_i,v_j) \not \in E \\ min \{ d^{k-1}_{i,j},d^{k-1}_{i,k} + d^{k-1}_{k,j} \} & \text{if $k>0$} \end{array} \right. \end{math} We then compute $d^0_{i,j}$ for all $v_i,v_j \in V$, then all $d^1_{i,j}$, $d^2_{i,j}$, ... until we have computed $d^{n}_{i,j}$. Running time: $\BigOh{n^3}$. Analysis is left as an exercise to the reader. \section{Knapsack} Given a knapsack (or bag) which can carry $W$ units of weight, and $n$ items where item $i$ has weight $w_i$ and value $v_i$, what is the most valuable list of items which can fit in the given bag? Notice that this may include multiples of a particular item. We start by defining the structure of an optimal solution. Let $p(w)$ be the value of the optimal packing of a bag which can carry $w$ units of weight, and let $p_i(w)$ be the same but necessarily including item $i$. \begin{align*} p_i(w) &= p(w - w_i) + v_i & \\ p(0) &= 0 & \\ p(w) &= max \{ p(w - w_i) + v_i \} & \text{where $i : w_i \leq w$} \\ \end{align*} We then calculate $p(0), p(1), ..., p(W)$.
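As an illustration, the bottom-up computation just described can be sketched as follows (Python; names are illustrative, and items may be reused, as noted above):
\begin{verbatim}
def knapsack(W, weights, values):
    # p[w] is the value of the optimal packing of a bag of capacity w
    p = [0] * (W + 1)                       # p(0) = 0
    for w in range(1, W + 1):
        best = 0                            # carrying nothing is always allowed
        for wi, vi in zip(weights, values):
            if wi <= w:
                best = max(best, p[w - wi] + vi)
        p[w] = best
    return p[W]
\end{verbatim}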
The running time for this algorithm is $\BigOh{nW}$. Notice that $W$ requires $\BigOh{\log W}$ bits to represent. Since the input to the problem is $\BigOh{n + \log W}$, and $W = 2^{\log W}$, the time complexity is $\BigOh{n2^{\log W}}$, so this solution is exponential with respect to the input. \section{String Edit Distance} We define the edit distance between two strings as the minimum number of edits necessary to transform one string into another, where edits are insertions, deletions, or replacements of a single character. The problem is to compute this edit distance, given strings $X$ of length $m$ and $Y$ of length $n$. We start by defining the structure of an optimal solution. Let us define $E[i,j]$ as the minimal edit distance between $X[1..i]$ and $Y[1..j]$. We would like to know $E[m,n]$. It should be easy to see that $E[0,0] = 0$ and $E[1,1]$ is either $0$ if $X[1] = Y[1]$ or $1$ otherwise. $E[0,j] = j$ and $E[i,0] = i$. This gives the recurrence: \begin{math} E[i,j] = \left\{ \begin{array}{l l} 0 & \text{if } i=0,j=0 \\ i & \text{if } j=0 \\ j & \text{if } i=0 \\ min \{ 1 + E[i-1,j], 1 + E[i,j-1], 1 + E[i-1,j-1] \} & \text{if } X[i] \neq Y[j] \\ min \{ 1 + E[i-1,j], 1 + E[i,j-1], E[i-1,j-1] \} & \text{otherwise} \\ \end{array} \right. \end{math} Then we calculate $E[i,j]$ from $i=0,j=0$ to $i=m,j=n$.
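The same bottom-up pattern applies here; a short illustrative Python sketch:
\begin{verbatim}
def edit_distance(X, Y):
    m, n = len(X), len(Y)
    # E[i][j] is the minimal edit distance between X[1..i] and Y[1..j]
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i
    for j in range(n + 1):
        E[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            same = 0 if X[i - 1] == Y[j - 1] else 1
            E[i][j] = min(1 + E[i - 1][j],          # deletion
                          1 + E[i][j - 1],          # insertion
                          same + E[i - 1][j - 1])   # replacement (or match)
    return E[m][n]
\end{verbatim}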
{ "alphanum_fraction": 0.6668969488, "avg_line_length": 41.9565217391, "ext": "tex", "hexsha": "dfb719442003e9ea22977e4d5986a2910b7de78a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "13a04cc4a6bd8dec5e35c1a42b96680d47a98962", "max_forks_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_forks_repo_name": "SteamedPears/AllTheAlgorithms", "max_forks_repo_path": "dynamic_programming.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "13a04cc4a6bd8dec5e35c1a42b96680d47a98962", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_issues_repo_name": "SteamedPears/AllTheAlgorithms", "max_issues_repo_path": "dynamic_programming.tex", "max_line_length": 394, "max_stars_count": 3, "max_stars_repo_head_hexsha": "13a04cc4a6bd8dec5e35c1a42b96680d47a98962", "max_stars_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_stars_repo_name": "SteamedPears/AllTheAlgorithms", "max_stars_repo_path": "dynamic_programming.tex", "max_stars_repo_stars_event_max_datetime": "2017-05-01T03:13:05.000Z", "max_stars_repo_stars_event_min_datetime": "2016-10-12T19:16:53.000Z", "num_tokens": 2856, "size": 8685 }
\documentclass{amsart}
\usepackage[margin=3cm]{geometry} % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper} % ... or a4paper or a5paper or ...
%\geometry{landscape} % Activate for rotated page geometry
\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{float}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\usepackage{siunitx}
\usepackage{subcaption}
\usepackage{listings}
\usepackage{setspace}
\usepackage{units}
\usepackage{amsmath}
\usepackage{wrapfig}
\usepackage{lscape}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}
\graphicspath{{./img/}}
\title{Swinging Gate Pendulum Seismometer}
\author{Caspar \textsc{Lant}} % Author name
\date{\today} % Date for the report
\begin{document}
\bigskip
\maketitle % Insert the title, author and date
\begin{center}
Intermediate Experimental Physics II\\
\vspace{1.5cm}
\begin{tabular}{l r}
Section: & 002\\ \\
Date Performed: & February 16, 2016 \\ % Date the experiment was performed
Date Due: & February 23, 2016\\ \\
Partner: & Neil Saddeler\\ % Partner names
Professor: & Prof. Andrew Kent\\
Instructor: & David Mykytyn % Instructor/supervisor
\end{tabular}
\vfill
\includegraphics[width=0.7\textwidth]{seismo.png}
\vfill
\end{center}
\pagebreak
\setstretch{1.4}
\paragraph{\textbf{The Objective} of this week's experiment was to use an oscilloscope to further our understanding of damping and driving effects in oscillatory systems.}
\section{Theoretical Background/ Abstract}
Seismometers are of interest because they allow us to detect otherwise unnoticed mechanical vibrations in the world around us. This is very useful in the detection of earthquakes and all sorts of geologic phenomena. Careful analysis of data produced by precise seismometers can allow us to predict the severity of earthquakes well before they happen. It's important to note that, although mechanical waves propagate longitudinally in air (as sound), they are transversely propagating in solids like the earth's mantle and crust. It is for this reason that seismometers detect the transverse movement of the earth, which is to say up and down. The device used in this lab is a ``swinging gate'' seismometer, which is sensitive to vibrations on the order of tens of micrometers in amplitude.\\
In this lab we will measure the damping effects of a magnet placed under our seismometer, take measurements of the device's response to a driving oscillation, and take a noise measurement in which we will observe the passing of the N and R trains.
\vspace{5pt}
\begin{figure}[H] %
\begin{minipage}{.255\textwidth}
\vspace{-5pt}
\centering
\begin{equation*} \begin{split} 0 &= m\ddot x + \Gamma \dot x + kx \\ &\Rightarrow \ddot x + \nicefrac{\Gamma}{m} \dot x + \nicefrac{k}{m} x = 0 \\ &\Rightarrow \ddot x + \gamma \dot x + \omega_0^2 x = 0 \end{split} \end{equation*}
\end{minipage} %
\begin{minipage}{.734\textwidth}
\setstretch{1.4}
The derivation of the general equation of motion for an underdamped vibration is given to the left. We expect the seismometer to be underdamped for most magnet positions, and that the damping constant will increase with increasing distance between the magnet and the pendulum's pivot point.
\end{minipage} \end{figure} \vspace{-15pt} From the ODE above, and some educated guessing, we come to the following value of $x(t)$: \begin{equation} x_u(t) = \frac{1}{2} e^{-\nicefrac{\gamma}{2}t}\left(a e^{i\omega_u t} + a^* e^{-i\omega_u t}\right) \end{equation} where $\omega_u = \sqrt{\omega_0^2 - \left(\nicefrac{\gamma}{2}\right)^2} > 0$, and $a^*$ is $a$'s complex conjugate. The values of these two constants are given by the initial conditions of the system\-- displacement, initial velocity, etc. \section{Experimental Procedure} \setstretch{1.3} \begin{enumerate} \item Level the seismometer by adjusting the thumbscrew on its base. It turns out that the best way to do this is not with the provided bubble level, but by eye. You should adjust the screw to a height such that the device's pendulum finds its equilibrium position at the middle of the seismometer base. \item Turn on DataStudio and set up a display of pendulum position versus time. \item Move the magnet to the non-business end of the device. Blow on the pendulum to displace it slightly. \item Measure the period of the oscillations in DataStudio and dial in the second thumbscrew (located at the end of the pendulum's base) such that the period is between 14 and 16 seconds. \item Once you're satisfied with your leveling, blow on the pendulum again and record about a dozen cycles. Note that if the pendulum's motion dies out before ten cycles, you should be blowing harder. \item Repeat this procedure for various magnet positions. You should see an increasing damping effect. \item Export all this data for future analysis. \item Future analysis: pass each dataset into the provided Python script and fit those curves! Record the values for amplitude, period, phase shift, and the damping factor. \item Next, hook up the mechanical vibrator to DataStudio and pass a square wave through it at a relatively low frequency. \item Position the acrylic rod connected to the vibrator such that it comes into contact with the base of the seismometer. \item Measure the damping response for a sampling of frequencies. \item Remove the device from its inner tubes and repeat the last three steps. \item Now, turn off the vibrator and take a noise sample. Be on the lookout for peaks caused by passing trains. \item Export all this data as well. You're done. \end{enumerate} \section{Damping Effects} \setstretch{1.4} We predict that the damping coefficient for our oscillatory system will increase as the distance between the magnet and the pendulum mass decreases. This has to do with the fact that our pendulum seismometer can be thought of as a lever. When the force produced by the interaction between the magnet and the pendulum is closest to the end of the pendulum, it produces the greatest torque, and has the greatest mechanical advantage. \\ \\ \setstretch{1.3} Figure 1 tracks the oscillation of our measurement device with the magnet removed. The damping effects seen are produced by frictional forces and air resistance.\\ \\ Figure 2 shows the movement of our pendulum with the magnet $\unit[5]{cm}$ from our zero point. It is fitted with a time-damped sine curve.\\ \\ Figure 3 plots the oscillation of the seismometer for three magnet positions. The overdamped case is shown in red.\\ \\ Figure 4 shows the micro oscillations present in the overdamped case, and a fit curve. The parameters of the fit are displayed in the upper right-hand corner of the graph.
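For reference, a fit of this kind can be reproduced with a few lines of Python. The sketch below is not the provided \texttt{ReadDataFitPlot.py} script (which is listed at the end of this report); the data file name and initial guesses are purely illustrative:
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, gamma, omega, phi, offset):
    # underdamped solution: amplitude decays as exp(-gamma*t/2)
    return A * np.exp(-gamma * t / 2) * np.cos(omega * t + phi) + offset

t, x = np.loadtxt('magnet_5cm.txt', unpack=True)     # hypothetical data file
guess = [x.max(), 0.1, 2 * np.pi / 15.0, 0.0, 0.0]   # period near 15 s to start
params, cov = curve_fit(damped_sine, t, x, p0=guess)
A, gamma, omega, phi, offset = params
print('period = %.1f s, damping factor = %.4f' % (2 * np.pi / omega, gamma))
\end{lstlisting}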
\setcounter{figure}{4} \section{Noise Sample - Locomotion Commotion} \setcounter{page}{4} \setstretch{1.4} Although Neil, with his head against the table, was able to hear trains pass, the trains did not produce an amplitude on the seismometer that exceeded our noise threshold. My guess is that, given that the seismometer is more sensitive to amplitude than Neil's ears are, Neil was able to detect the passing of a train not because of an elevated amplitude, but because of the characteristic vibratory envelope of a train passing. You'll see that the peaks labeled ``train'' are each bordered by a ``ramp-up'' in amplitude on the left and a ``ramp-down'' on the right. This is consistent with our intuition surrounding passing trains. My first impulse was to compare our data to MTA train schedules, but it soon became clear that the discrepancies between scheduled and actual train arrival times were often larger than the interval between train arrivals! This is to say that MTA train schedules are totally useless, or at least only good for providing an idea of the frequency of trains at a given time of day. If we were to design a ``train seismometer''\-- one whose sole purpose was to record the vibratory signature of passing trains\-- our first move would be to place the device as close to the source of vibration as possible, and also to tailor the resonant frequency of the device to match the vibrating frequency of our test subjects. I'd probably hire an acoustician to do this for me. It's also important to note that the only trains marked on this graph are those which passed through the station without stopping\-- known as ``express'' trains. Local trains, which stopped in our local station, have slowed down so much by the time they reach us that the amplitude of the vibrations they produce is negligible. You'll notice that one of the largest peaks in amplitude occurs towards the very beginning of our sample span. This is likely an artifact of my setting up the device; perhaps a footstep I placed as I settled back into my chair from DataStudio...
% \vspace{-0.1cm}
\begin{figure}[H] \includegraphics[width=0.95\textwidth]{train2.png} \caption{Noise Sample with Train Markers} \end{figure} \section{Driven Oscillations} Figure \ref{fig:interference} shows the seismometer's response to a mechanical oscillation with a frequency of $\unit[10]{Hz}$. The dark inner bands are an interference pattern produced by the interaction between the driving oscillation and the seismometer's sympathetic resonance. I would guess that the periodicity of this interference pattern is related to the natural resonant frequency of the measurement device. \vfill \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{vib1.png} \caption{Seismometer Response to $\unit[10]{Hz}$ Vibratory Disturbance} \label{fig:interference} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.95\textwidth]{vib2.png} \caption{Driven Oscillation with and without Vibration Isolation} \label{fig:vibrationisolation} \end{figure} \setstretch{1.4} \section{Questions} \textit{Repeat your measurements both with and without the vibration isolation mounts for the wooden table. Do you see a difference? Why or why not?} \begin{quote} The amplitude of the noise increases by a factor of two when the vibration isolation mounts (inner tubes) are removed.
It's also clear that the seismometer was not leveled correctly when taken off its inner tubes\-- the noise graph with larger amplitude centers itself around a voltage not equal to zero. See Figure \ref{fig:vibrationisolation}. \end{quote} \section{Error Analysis} In this experiment, our greatest source of error was certainly the extraneous vibratory disturbances present in a laboratory full of people conducting their own experiments, oblivious to the fact that a pair of their classmates were taking measurements with an extremely precise instrument and were acutely aware of every footstep. This could be mitigated by increasing the efficacy of the vibration isolation mounts, perhaps by inflating them less, or by kicking everyone else out of the classroom. Another frustration was the fact that the pendulum on the seismometer refused to be centered. This produced a slight vertical offset in our noise measurements as well as an additional layer of uncertainty in our measurement of damping effects. This could be rectified with a more precise instrument, and with more time spent calibrating it. Judging from our few measurements of magnet placement, I'd say that the damping factor is exponentially dependent on the distance between the magnet and the pivot point of the seismometer. I figure this because there is a relatively small difference in quality factor between magnet positions one, two, and three, but a rather large one between positions three and four. Not the best analysis of this, but it's consistent with my understanding of torque and damped oscillatory systems, so I'd say that it's enough to go on... \vspace{0.2cm} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{dampedwithfit.png} \end{figure} \newpage \setstretch{1.2} \lstinputlisting[language=Python]{./data/ReadDataFitPlot.py} \end{document}
{ "alphanum_fraction": 0.7665525114, "avg_line_length": 74.3272727273, "ext": "tex", "hexsha": "9bdb3ed0cd8ec09a15b69acb27f5e558f52fb2de", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1b4d45d9e915a84ecb80a39498850463bbc2d3be", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "caspar/PhysicsLab", "max_forks_repo_path": "14_Seismometer/Seismometer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1b4d45d9e915a84ecb80a39498850463bbc2d3be", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "caspar/PhysicsLab", "max_issues_repo_path": "14_Seismometer/Seismometer.tex", "max_line_length": 1978, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1b4d45d9e915a84ecb80a39498850463bbc2d3be", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "caspar/PhysicsLab", "max_stars_repo_path": "14_Seismometer/Seismometer.tex", "max_stars_repo_stars_event_max_datetime": "2016-05-08T19:42:20.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-08T19:42:20.000Z", "num_tokens": 2970, "size": 12264 }
% ----------------------------------------------------------------------------- % HEADER % ----------------------------------------------------------------------------- \documentclass[a4paper, 10pt]{article} \usepackage{jheppub} \usepackage[T1]{fontenc} \usepackage{colortbl,xcolor,float} \definecolor{orange}{rgb}{1,0.5,0} % ----------------------------------------------------------------------------- % COVER PAGE % ----------------------------------------------------------------------------- \title{{\includegraphics[scale=.4]{logo.png}}\ The LaTeX report} \author{Generated by elijahsheridan on 26 March 2020, 03:32:08} \abstract{ This report has been generated automatically by {\sc MadAnalysis} 5.\\$~$\\ Please cite:\\ \begin{quote} \textbf{E.~Conte, B.~Fuks and G.~Serret},\\ \textit{MadAnalysis 5, A User-Friendly Framework for Collider Phenomenology},\\ Comput. Phys. Commun. {\bf 184} (2013) 222-256,\\ arXiv:1206.1599 [hep-ph].\\ \end{quote} To contact us:\\ \begin{quote} \textbf{http://madanalysis.irmp.ucl.ac.be}\\ \textbf{[email protected]}\\ \end{quote} } % ----------------------------------------------------------------------------- % BEGIN DOCUMENT % ----------------------------------------------------------------------------- \begin{document} \maketitle \flushbottom % ----------------------------------------------------------------------------- % SECTION Setup % ----------------------------------------------------------------------------- \newpage \section{ Setup} \subsection{ Command history} \texttt{ma5>\# set directory where running "./\-bin/\-ma5"; set lumi; define the signal significance\\ } \texttt{ }\texttt{ }\texttt{ma5>set main.currentdir = /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno \# need to change this directory path --> exit and type "pwd" to get the path\\ } \texttt{ }\texttt{ }\texttt{ma5>set main.lumi = 40.0\\ } \texttt{ }\texttt{ }\texttt{ma5>set main.SBratio = 'S/\-sqrt(S+B)'\\ } \texttt{ }\texttt{ }\texttt{ma5>\# import samples --> change the path to the LHE file\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-axion\_signal/\-axion\_signal\_gurrola\_cuts\_1MeV.lhe.gz as signal\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_0\_100\_merged.lhe.gz as bg\_vbf\_0\_100\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_100\_200\_merged.lhe.gz as bg\_vbf\_100\_200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_200\_400\_merged.lhe.gz as bg\_vbf\_200\_400\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_400\_600\_merged.lhe.gz as bg\_vbf\_400\_600\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_600\_800\_merged.lhe.gz as bg\_vbf\_600\_800\\ } \texttt{ }\texttt{ }\texttt{ma5>import 
/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_800\_1200\_merged.lhe.gz as bg\_vbf\_800\_1200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1200\_1600\_merged.lhe.gz as bg\_vbf\_1200\_1600\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1600\_inf\_merged.lhe.gz as bg\_vbf\_1600\_inf\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_0\_100\_merged.lhe.gz as bg\_dip\_0\_100\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_100\_200\_merged.lhe.gz as bg\_dip\_100\_200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_200\_400\_merged.lhe.gz as bg\_dip\_200\_400\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_400\_600\_merged.lhe.gz as bg\_dip\_400\_600\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_600\_800\_merged.lhe.gz as bg\_dip\_600\_800\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_800\_1200\_merged.lhe.gz as bg\_dip\_800\_1200\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1200\_1600\_merged.lhe.gz as bg\_dip\_1200\_1600\\ } \texttt{ }\texttt{ }\texttt{ma5>import /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1600\_inf\_merged.lhe.gz as bg\_dip\_1600\_inf\\ } \texttt{ }\texttt{ }\texttt{ma5>\# define bg and signal samples\\ } \texttt{ }\texttt{ }\texttt{ma5>set signal.type = signal\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_0\_100.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_100\_200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_200\_400.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_400\_600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_600\_800.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_800\_1200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1200\_1600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1600\_inf.type = 
background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_0\_100.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_100\_200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_200\_400.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_400\_600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_600\_800.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_800\_1200.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1200\_1600.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1600\_inf.type = background\\ } \texttt{ }\texttt{ }\texttt{ma5>\# define weights for the samples\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set sample\_1.weight = 1\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set sample\_2.weight = 1\\ } \texttt{ }\texttt{ }\texttt{ma5>\# line styles and colors\\ } \texttt{ }\texttt{ }\texttt{ma5>set signal.linecolor = red\\ } \texttt{ }\texttt{ }\texttt{ma5>set signal.linestyle = dashed\\ } \texttt{ }\texttt{ }\texttt{ma5>set signal.linewidth = 3\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_0\_100.linecolor = blue-4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_0\_100.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_0\_100.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_100\_200.linecolor = blue-3\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_100\_200.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_100\_200.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_200\_400.linecolor = blue-2\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_200\_400.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_200\_400.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_400\_600.linecolor = blue-1\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_400\_600.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_400\_600.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_600\_800.linecolor = blue\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_600\_800.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_600\_800.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_800\_1200.linecolor = blue+1\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_800\_1200.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_800\_1200.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1200\_1600.linecolor = blue+2\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1200\_1600.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1200\_1600.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1600\_inf.linecolor = blue+3\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1600\_inf.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_vbf\_1600\_inf.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_0\_100.linecolor = green-4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_0\_100.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_0\_100.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_100\_200.linecolor = green-3\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_100\_200.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_100\_200.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_200\_400.linecolor = green-2\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_200\_400.linestyle = dash-dotted\\ } \texttt{ 
}\texttt{ }\texttt{ma5>set bg\_dip\_200\_400.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_400\_600.linecolor = green-1\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_400\_600.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_400\_600.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_600\_800.linecolor = green\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_600\_800.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_600\_800.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_800\_1200.linecolor = green+1\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_800\_1200.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_800\_1200.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1200\_1600.linecolor = green+2\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1200\_1600.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1200\_1600.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1600\_inf.linecolor = green+3\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1600\_inf.linestyle = dash-dotted\\ } \texttt{ }\texttt{ }\texttt{ma5>set bg\_dip\_1600\_inf.linewidth = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>\# a jet can be from a light quark or b quark\\ } \texttt{ }\texttt{ }\texttt{ma5>define jets = j\\ } \texttt{ }\texttt{ }\texttt{ma5>define e = e+ e-\\ } \texttt{ }\texttt{ }\texttt{ma5>define mu = mu+ mu-\\ } \texttt{ }\texttt{ }\texttt{ma5>define ta = ta+ ta-\\ } \texttt{ }\texttt{ }\texttt{ma5>define lept = e mu ta\\ } \texttt{ }\texttt{ }\texttt{ma5>define Zprime = 32 -32\\ } \texttt{ }\texttt{ }\texttt{ma5>\# reduce contribution from V+Zp ==> jj+Zpz\\ } \texttt{ }\texttt{ }\texttt{ma5>select M(jets[1] jets[2]) > 120\\ } \texttt{ }\texttt{ }\texttt{ma5>\# define which plots to make\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(jets[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot ETA(jets[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PHI(jets[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot ETA(jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PHI(jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot DELTAR(jets[1], jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot M(jets[1] jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot MET\\ } \texttt{ }\texttt{ }\texttt{ma5>plot sdETA(jets[1] jets[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot M(a[1] a[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(a[1])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot PT(a[2])\\ } \texttt{ }\texttt{ }\texttt{ma5>plot THT\\ } \texttt{ }\texttt{ }\texttt{ma5>plot MET\\ } \texttt{ }\texttt{ }\texttt{ma5>plot TET\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set the plot/\-graph parameters\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[2].xmax = 1000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[2].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[2].nbins = 200\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[2].logY = true\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[2].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[2].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[2].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[2].titleX = "p\_\{T\}[j\_\{1\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[3].xmax = 8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[3].xmin = -8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[3].nbins = 160\\ } \texttt{ 
}\texttt{ }\texttt{ma5>set selection[3].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[3].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[3].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[3].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[3].titleX = "\#eta[j\_\{1\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[4].xmax = 3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[4].xmin = -3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[4].nbins = 64\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[4].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[4].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[4].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[4].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[4].titleX = "\#phi[j\_\{1\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].xmax = 500\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].nbins = 100\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].logY = true\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[5].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[5].titleX = "p\_\{T\}[j\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].xmax = 8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].xmin = -8\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].nbins = 160\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[6].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[6].titleX = "\#eta[j\_\{2\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].xmax = 3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].xmin = -3.2\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].nbins = 64\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[7].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[7].titleX = "\#phi[j\_\{2\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].xmax = 15\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].nbins = 75\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[8].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[8].titleX = "\#DeltaR[j\_\{1\},j\_\{2\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].xmax = 8000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].nbins = 160\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set 
selection[9].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[9].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[9].titleX = "M[j\_\{1\},j\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].xmax = 1000\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].nbins = 100\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].logY = true\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[10].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[10].titleX = "\#slash\{E\}\_\{T\} (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[11].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[11].titleX = "\#Delta\#eta(j\_\{1\},j\_\{2\})"\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[12].xmax = 2000\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[12].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].nbins = 400\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].logY = true\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[12].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[12].titleX = "M[a\_\{1\},a\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[13].xmax = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[13].xmin = -4\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].nbins = 80\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[13].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[13].titleX = "p\_\{T\}[a\_\{1\}]"\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[14].xmax = 2000\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[14].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].nbins = 400\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].logY = true\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[14].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[14].titleX = "p\_\{T\}[a\_\{2\}] (GeV)"\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[15].xmax = 4\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[15].xmin = -4\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].nbins = 80\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].logY = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].logX = false\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].rank = PTordering\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[15].stacking\_method = normalize2one\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[15].titleX = "THT"\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[16].xmax = 1000\\ } \texttt{ }\texttt{ }\texttt{ma5>\#set selection[16].xmin = 0\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[16].nbins = 200\\ } \texttt{ }\texttt{ }\texttt{ma5>set selection[16].logY = true\\ } 
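\texttt{ }\texttt{ }\texttt{ma5>\# (editor's note) selection[1] refers to the M(jets[1] jets[2]) > 120 cut defined above;\\ } \texttt{ }\texttt{ }\texttt{ma5>\# selection[2] through selection[17] index the plot commands in the order they were declared\\ }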
\texttt{ }\texttt{ }\texttt{ma5>set selection[16].logX = false\\ }
\texttt{ }\texttt{ }\texttt{ma5>set selection[16].rank = PTordering\\ }
\texttt{ }\texttt{ }\texttt{ma5>\#set selection[16].stacking\_method = normalize2one\\ }
\texttt{ }\texttt{ }\texttt{ma5>set selection[16].titleX = "MET"\\ }
\texttt{ }\texttt{ }\texttt{ma5>\#set selection[17].xmax = 4\\ }
\texttt{ }\texttt{ }\texttt{ma5>\#set selection[17].xmin = -4\\ }
\texttt{ }\texttt{ }\texttt{ma5>set selection[17].nbins = 80\\ }
\texttt{ }\texttt{ }\texttt{ma5>set selection[17].logY = false\\ }
\texttt{ }\texttt{ }\texttt{ma5>set selection[17].logX = false\\ }
\texttt{ }\texttt{ }\texttt{ma5>set selection[17].rank = PTordering\\ }
\texttt{ }\texttt{ }\texttt{ma5>\#set selection[17].stacking\_method = normalize2one\\ }
\texttt{ }\texttt{ }\texttt{ma5>set selection[17].titleX = "TET"\\ }
\texttt{ }\texttt{ }\texttt{ma5>\# apply selections\\ }
\texttt{ }\texttt{ }\texttt{ma5>select (sdETA(jets[1] jets[2]) > 6.1 or sdETA(jets[1] jets[2]) < -6.1) and M(jets[1] jets[2]) > 1000\\ }
\texttt{ }\texttt{ }\texttt{ma5>submit analysis\_deltaeta6.1\_mmjj\_1000\\ }
\texttt{ }\texttt{ }\subsection{ Configuration}
\begin{itemize}
\item MadAnalysis version 1.6.33 (2017/\-11/\-20).
\item Histograms given for an integrated luminosity of \textcolor{blue}{40.0}\textcolor{blue}{ fb}$^{\textcolor{blue}{-1}}$\textcolor{blue}{.} \textcolor{blue}{}
\end{itemize}
% -----------------------------------------------------------------------------
% SECTION Datasets
% -----------------------------------------------------------------------------
\newpage \section{ Datasets}
\subsection{ signal}
\begin{itemize}
\item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} .
\item Sample consisting of: \textcolor{blue}{signal} events.
\item Generated events: \textcolor{blue}{1000000 } events.
\item Normalization to the luminosity: \textcolor{blue}{4094}\textcolor{blue}{ +/\-- }\textcolor{blue}{2 } events.
\item Ratio (event weight): \textcolor{blue}{0.0041 } .
\end{itemize}
For reference, the normalization quoted here is the sample cross section times the integrated luminosity ($0.102~\mathrm{pb} \times 40~\mathrm{fb}^{-1} \approx 4100$ events), and the event weight is this expected yield divided by the number of generated events ($4094/10^{6} \approx 0.0041$); the same convention applies to the background samples below.
\begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-axion\_signal/\-axion\_signal\_gurrola\_cuts\_1MeV.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.102 @ 0.028\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table}
\subsection{ bg\_vbf\_0\_100}
\begin{itemize}
\item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} .
\item Sample consisting of: \textcolor{blue}{background} events.
\item Generated events: \textcolor{blue}{1000000 } events.
\item Normalization to the luminosity: \textcolor{blue}{12150}\textcolor{blue}{ +/\-- }\textcolor{blue}{24 } events.
\item Ratio (event weight): \textcolor{blue}{0.012 } .
\end{itemize}
\begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr.
of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_0\_100\_merged.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.304 @ 0.19\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_100\_200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{965662 } events. \item Normalization to the luminosity: \textcolor{blue}{9695}\textcolor{blue}{ +/\-- }\textcolor{blue}{17 } events. \item Ratio (event weight): \textcolor{blue}{0.01 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_100\_200\_merged.lhe.gz}& {\cellcolor{white} 965662}& {\cellcolor{white} 0.242 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_200\_400} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{984165 } events. \item Normalization to the luminosity: \textcolor{blue}{5413}\textcolor{blue}{ +/\-- }\textcolor{blue}{11 } events. \item Ratio (event weight): \textcolor{blue}{0.0055 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_200\_400\_merged.lhe.gz}& {\cellcolor{white} 984165}& {\cellcolor{white} 0.135 @ 0.2\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_400\_600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1000000 } events. \item Normalization to the luminosity: \textcolor{blue}{986}\textcolor{blue}{ +/\-- }\textcolor{blue}{2 } events. \item Ratio (event weight): \textcolor{blue}{0.00099 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. 
of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_400\_600\_merged.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.0247 @ 0.14\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_600\_800} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1000000 } events. \item Normalization to the luminosity: \textcolor{blue}{252}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{0.00025 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_600\_800\_merged.lhe.gz}& {\cellcolor{white} 1000000}& {\cellcolor{white} 0.0063 @ 0.13\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_800\_1200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{400839 } events. \item Normalization to the luminosity: \textcolor{blue}{114}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{0.00028 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_800\_1200\_merged.lhe.gz}& {\cellcolor{white} 400839}& {\cellcolor{white} 0.00287 @ 0.16\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_1200\_1600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{953803 } events. \item Normalization to the luminosity: \textcolor{blue}{20}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{2.1e-05 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. 
of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1200\_1600\_merged.lhe.gz}& {\cellcolor{white} 953803}& {\cellcolor{white} 0.000515 @ 0.16\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_vbf\_1600\_inf} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{270148 } events. \item Normalization to the luminosity: \textcolor{blue}{7}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{2.6e-05 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-vbf\_diphoton\_background\_data/\-merged\_lhe/\-vbf\_diphoton\_background\_ht\_1600\_inf\_merged.lhe.gz}& {\cellcolor{white} 270148}& {\cellcolor{white} 0.000191 @ 0.11\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_0\_100} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{2710847}\textcolor{blue}{ +/\-- }\textcolor{blue}{4614 } events. \item\textcolor{red}{Ratio (event weight): }\textcolor{red}{2.6 }\textcolor{red}{ - warning: please generate more events (weight larger than 1)!} \textcolor{red}{} \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_0\_100\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 67.8 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_100\_200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{1095362}\textcolor{blue}{ +/\-- }\textcolor{blue}{1528 } events. 
\item\textcolor{red}{Ratio (event weight): }\textcolor{red}{1.1 }\textcolor{red}{ - warning: please generate more events (weight larger than 1)!} \textcolor{red}{} \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_100\_200\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 27.4 @ 0.14\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_200\_400} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{239548}\textcolor{blue}{ +/\-- }\textcolor{blue}{414 } events. \item Ratio (event weight): \textcolor{blue}{0.23 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_200\_400\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 5.99 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_400\_600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{28798}\textcolor{blue}{ +/\-- }\textcolor{blue}{53 } events. \item Ratio (event weight): \textcolor{blue}{0.028 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_400\_600\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 0.72 @ 0.18\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_600\_800} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{662009 } events. \item Normalization to the luminosity: \textcolor{blue}{6674}\textcolor{blue}{ +/\-- }\textcolor{blue}{28 } events. \item Ratio (event weight): \textcolor{blue}{0.01 } . 
\end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_600\_800\_merged.lhe.gz}& {\cellcolor{white} 662009}& {\cellcolor{white} 0.167 @ 0.41\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_800\_1200} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{2942}\textcolor{blue}{ +/\-- }\textcolor{blue}{6 } events. \item Ratio (event weight): \textcolor{blue}{0.0028 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_800\_1200\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 0.0736 @ 0.17\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_1200\_1600} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{337115 } events. \item Normalization to the luminosity: \textcolor{blue}{513}\textcolor{blue}{ +/\-- }\textcolor{blue}{3 } events. \item Ratio (event weight): \textcolor{blue}{0.0015 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1200\_1600\_merged.lhe.gz}& {\cellcolor{white} 337115}& {\cellcolor{white} 0.0128 @ 0.51\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \subsection{ bg\_dip\_1600\_inf} \begin{itemize} \item Samples stored in the directory: \textcolor{blue}{/\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-optimization} . \item Sample consisting of: \textcolor{blue}{background} events. \item Generated events: \textcolor{blue}{1040000 } events. \item Normalization to the luminosity: \textcolor{blue}{187}\textcolor{blue}{ +/\-- }\textcolor{blue}{1 } events. \item Ratio (event weight): \textcolor{blue}{0.00018 } . \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{55.0mm}|m{25.0mm}|m{30.0mm}|m{30.0mm}|} \hline {\cellcolor{yellow} Path to the event file}& {\cellcolor{yellow} Nr. 
of events}& {\cellcolor{yellow} Cross section (pb)}& {\cellcolor{yellow} Negative wgts (\%)}\\ \hline {\cellcolor{white} /\-Users/\-elijahsheridan/\-MG5\_aMC\_v2\_6\_5/\-axion\_pheno/\-madgraph\_data/\-diphoton\_double\_isr\_background\_data/\-merged\_lhe/\-diphoton\_double\_isr\_background\_ht\_1600\_inf\_merged.lhe.gz}& {\cellcolor{white} 1040000}& {\cellcolor{white} 0.00469 @ 0.15\%}& {\cellcolor{white} 0.0}\\ \hline \end{tabular} \end{center} \end{table} % ----------------------------------------------------------------------------- % SECTION Histos and cuts % ----------------------------------------------------------------------------- \newpage \section{ Histos and cuts} \subsection{Cut 1} \textbf{* Cut: select M ( jets[1] jets[2] ) > 120.0}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{20.0mm}|m{27.0mm}|m{27.0mm}|m{33.0mm}|m{32.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Events kept: K}& {\cellcolor{yellow} Rejected events: R}& {\cellcolor{yellow} Efficiency: K /\- (K + R)}& {\cellcolor{yellow} Cumul. efficiency: K /\- Initial}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094.04 +/\-- 1.14}& {\cellcolor{white} 0.04 +/\-- 0.20}& {\cellcolor{white} 1.00e+00 +/\-- 4.89e-05}& {\cellcolor{white} 1.00e+00 +/\-- 4.89e-05}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934.7 +/\-- 52.1}& {\cellcolor{white} 8215.6 +/\-- 53.9}& {\cellcolor{white} 0.32384 +/\-- 0.00425}& {\cellcolor{white} 0.32384 +/\-- 0.00425}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8757.0 +/\-- 32.7}& {\cellcolor{white} 938.4 +/\-- 29.2}& {\cellcolor{white} 0.903 +/\-- 0.003}& {\cellcolor{white} 0.903 +/\-- 0.003}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358.8 +/\-- 13.1}& {\cellcolor{white} 54.50 +/\-- 7.35}& {\cellcolor{white} 0.98993 +/\-- 0.00136}& {\cellcolor{white} 0.98993 +/\-- 0.00136}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983.78 +/\-- 2.22}& {\cellcolor{white} 3.07 +/\-- 1.75}& {\cellcolor{white} 0.99689 +/\-- 0.00177}& {\cellcolor{white} 0.99689 +/\-- 0.00177}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251.851 +/\-- 0.571}& {\cellcolor{white} 0.226 +/\-- 0.475}& {\cellcolor{white} 0.99910 +/\-- 0.00189}& {\cellcolor{white} 0.99910 +/\-- 0.00189}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114.64 +/\-- 0.39}& {\cellcolor{white} 0.12 +/\-- 0.35}& {\cellcolor{white} 0.99895 +/\-- 0.00302}& {\cellcolor{white} 0.99895 +/\-- 0.00302}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.553 +/\-- 0.209}& {\cellcolor{white} 0.0426 +/\-- 0.2061}& {\cellcolor{white} 1.00 +/\-- 0.01}& {\cellcolor{white} 1.00 +/\-- 0.01}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.513 +/\-- 0.378}& {\cellcolor{white} 0.146 +/\-- 0.378}& {\cellcolor{white} 0.9810 +/\-- 0.0494}& {\cellcolor{white} 0.9810 +/\-- 0.0494}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714692 +/\-- 1416}& {\cellcolor{white} 1996154 +/\-- 3473}& {\cellcolor{white} 0.263642 +/\-- 0.000268}& {\cellcolor{white} 0.263642 +/\-- 0.000268}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479 +/\-- 1268}& {\cellcolor{white} 239882 +/\-- 546}& {\cellcolor{white} 0.781001 +/\-- 0.000395}& {\cellcolor{white} 0.781001 +/\-- 0.000395}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627 +/\-- 411}& {\cellcolor{white} 4920.9 +/\-- 69.9}& {\cellcolor{white} 0.97946 +/\-- 0.00029}& 
{\cellcolor{white} 0.97946 +/\-- 0.00029}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616.2 +/\-- 53.6}& {\cellcolor{white} 182.4 +/\-- 13.5}& {\cellcolor{white} 0.993665 +/\-- 0.000468}& {\cellcolor{white} 0.993665 +/\-- 0.000468}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658.9 +/\-- 27.8}& {\cellcolor{white} 15.43 +/\-- 3.92}& {\cellcolor{white} 0.997689 +/\-- 0.000588}& {\cellcolor{white} 0.997689 +/\-- 0.000588}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939.96 +/\-- 5.28}& {\cellcolor{white} 2.38 +/\-- 1.54}& {\cellcolor{white} 0.999192 +/\-- 0.000524}& {\cellcolor{white} 0.999192 +/\-- 0.000524}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513.34 +/\-- 2.66}& {\cellcolor{white} 0.17 +/\-- 0.41}& {\cellcolor{white} 0.999669 +/\-- 0.000803}& {\cellcolor{white} 0.999669 +/\-- 0.000803}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187.742 +/\-- 0.345}& {\cellcolor{white} 0.0414 +/\-- 0.2034}& {\cellcolor{white} 0.99978 +/\-- 0.00108}& {\cellcolor{white} 0.99978 +/\-- 0.00108}\\ \hline \end{tabular} \end{center} \end{table} \newpage \subsection{ Histogram 1} \textbf{* Plot: PT ( jets[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 445.82}& {\cellcolor{white} 317.0}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 6.28}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 46.784}& {\cellcolor{white} 11.01}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 87.0579}& {\cellcolor{white} 20.21}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 159.249}& {\cellcolor{white} 38.06}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 274.434}& {\cellcolor{white} 50.77}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 386.332}& {\cellcolor{white} 64.57}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 524.594}& {\cellcolor{white} 93.61}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.1816}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 738.34}& {\cellcolor{white} 109.5}& {\cellcolor{green} 0.0}& {\cellcolor{green} 3.383}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1048.57}& {\cellcolor{white} 221.9}& {\cellcolor{red} 0.0}& {\cellcolor{red} 46.27}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 44.2861}& 
{\cellcolor{white} 11.5}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 84.0902}& {\cellcolor{white} 19.87}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 155.663}& {\cellcolor{white} 38.1}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 272.939}& {\cellcolor{white} 53.09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 382.903}& {\cellcolor{white} 65.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 517.996}& {\cellcolor{white} 90.51}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.2342}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 728.639}& {\cellcolor{white} 100.1}& {\cellcolor{green} 0.0}& {\cellcolor{green} 2.272}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1036.29}& {\cellcolor{white} 211.5}& {\cellcolor{red} 0.0}& {\cellcolor{red} 43.96}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_0.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 2} \textbf{* Plot: ETA ( jets[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0023996}& {\cellcolor{white} 1.616}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00133609}& {\cellcolor{white} 2.635}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0043695}& {\cellcolor{white} 2.247}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00194377}& {\cellcolor{white} 1.965}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000999715}& {\cellcolor{white} 1.682}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000513382}& {\cellcolor{white} 1.499}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00310292}& {\cellcolor{white} 1.329}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline 
{\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000169046}& {\cellcolor{white} 1.134}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00127081}& {\cellcolor{white} 0.9541}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000508343}& {\cellcolor{white} 2.224}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00260979}& {\cellcolor{white} 1.71}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0010006}& {\cellcolor{white} 1.468}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00170173}& {\cellcolor{white} 1.279}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0049065}& {\cellcolor{white} 1.157}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00133618}& {\cellcolor{white} 1.052}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00486624}& {\cellcolor{white} 0.9226}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00107396}& {\cellcolor{white} 0.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_1.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 3} \textbf{* Plot: PHI ( jets[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00102738}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00419195}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00148977}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00192437}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& 
{\cellcolor{white} 1.0}& {\cellcolor{white} -0.00356173}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000882503}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00348627}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00205129}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00218185}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00108067}& {\cellcolor{white} 1.816}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00213448}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00158683}& {\cellcolor{white} 1.812}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00239958}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00121432}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000396081}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 6.26217e-05}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0014267}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_2.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 4} \textbf{* Plot: PT ( jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 161.87}& {\cellcolor{white} 136.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 3.02}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 31.2504}& 
{\cellcolor{white} 7.217}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 57.5348}& {\cellcolor{white} 16.74}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 111.536}& {\cellcolor{white} 32.69}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 201.545}& {\cellcolor{white} 47.5}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 293.986}& {\cellcolor{white} 62.63}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 415.077}& {\cellcolor{white} 90.48}& {\cellcolor{red} 0.0}& {\cellcolor{red} 15.1}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 613.849}& {\cellcolor{white} 108.4}& {\cellcolor{red} 0.0}& {\cellcolor{red} 90.47}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 917.972}& {\cellcolor{white} 221.8}& {\cellcolor{red} 0.0}& {\cellcolor{red} 98.03}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 29.8777}& {\cellcolor{white} 7.014}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 54.688}& {\cellcolor{white} 15.89}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 106.13}& {\cellcolor{white} 33.69}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 201.059}& {\cellcolor{white} 51.01}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 296.97}& {\cellcolor{white} 64.64}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 421.627}& {\cellcolor{white} 88.69}& {\cellcolor{red} 0.0}& {\cellcolor{red} 16.13}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 623.471}& {\cellcolor{white} 99.67}& {\cellcolor{red} 0.0}& {\cellcolor{red} 93.34}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 926.302}& {\cellcolor{white} 210.5}& {\cellcolor{red} 0.0}& {\cellcolor{red} 98.92}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_3.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 5} \textbf{* Plot: ETA ( jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& 
{\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00500696}& {\cellcolor{white} 2.329}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0019532}& {\cellcolor{white} 2.674}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0066}& {\cellcolor{white} 2.371}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000228824}& {\cellcolor{white} 2.132}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000744275}& {\cellcolor{white} 1.863}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00168321}& {\cellcolor{white} 1.667}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000452317}& {\cellcolor{white} 1.473}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000595874}& {\cellcolor{white} 1.238}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00207042}& {\cellcolor{white} 1.017}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.01009e-05}& {\cellcolor{white} 2.147}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00108977}& {\cellcolor{white} 1.645}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00265008}& {\cellcolor{white} 1.446}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000387923}& {\cellcolor{white} 1.29}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000303381}& {\cellcolor{white} 1.182}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00119018}& {\cellcolor{white} 1.078}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000424092}& {\cellcolor{white} 0.9457}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 
187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000907805}& {\cellcolor{white} 0.8179}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_4.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 6} \textbf{* Plot: PHI ( jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00274458}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00276363}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} -3.00875e-05}& {\cellcolor{white} 1.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00153572}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00312538}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00050346}& {\cellcolor{white} 1.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000132195}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0034209}& {\cellcolor{white} 1.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00282812}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00324102}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00116298}& {\cellcolor{white} 1.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000793162}& {\cellcolor{white} 1.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} -1.20627e-05}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 
-0.00250468}& {\cellcolor{white} 1.815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000669513}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00203746}& {\cellcolor{white} 1.813}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00234294}& {\cellcolor{white} 1.814}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_5.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 7} \textbf{* Plot: DELTAR ( jets[1] , jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.02835}& {\cellcolor{white} 1.056}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.21509}& {\cellcolor{white} 1.267}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.68485}& {\cellcolor{white} 1.264}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.4049}& {\cellcolor{white} 1.096}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.11552}& {\cellcolor{white} 0.8948}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.92644}& {\cellcolor{white} 0.7722}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.75826}& {\cellcolor{white} 0.6584}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.58482}& {\cellcolor{white} 0.5257}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.44779}& {\cellcolor{white} 0.4108}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.20916}& {\cellcolor{white} 0.7369}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.45656}& {\cellcolor{white} 0.6833}& {\cellcolor{green} 0.0}& {\cellcolor{green} 
0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.29993}& {\cellcolor{white} 0.6389}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.28686}& {\cellcolor{white} 0.5815}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.28486}& {\cellcolor{white} 0.5273}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.28621}& {\cellcolor{white} 0.4662}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.28098}& {\cellcolor{white} 0.3842}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.26771}& {\cellcolor{white} 0.3022}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_6.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 8} \textbf{* Plot: M ( jets[1] jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1376.2}& {\cellcolor{white} 772.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 469.964}& {\cellcolor{white} 412.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 609.152}& {\cellcolor{white} 529.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 888.626}& {\cellcolor{white} 671.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0003082}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1211.77}& {\cellcolor{white} 761.1}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0006018}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1465.23}& {\cellcolor{white} 805.1}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.001401}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1732.47}& {\cellcolor{white} 822.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.002495}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2125.32}& {\cellcolor{white} 815.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.002832}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& 
{\cellcolor{white} 1.0}& {\cellcolor{white} 2691.74}& {\cellcolor{white} 857.1}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.01037}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 202.321}& {\cellcolor{white} 105.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 222.194}& {\cellcolor{white} 129.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 364.432}& {\cellcolor{white} 195.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 626.063}& {\cellcolor{white} 277.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 872.986}& {\cellcolor{white} 337.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1178.48}& {\cellcolor{white} 408.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1647.88}& {\cellcolor{white} 468.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2311.56}& {\cellcolor{white} 635.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0001923}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_7.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 9} \textbf{* Plot: MET}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 8.33083e-09}& {\cellcolor{white} 1.078e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.97855e-10}& {\cellcolor{white} 4.233e-10}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 9.55638e-10}& {\cellcolor{white} 1.098e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.23057e-09}& {\cellcolor{white} 2.219e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.52485e-09}& {\cellcolor{white} 2.611e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.90226e-09}& {\cellcolor{white} 
2.72e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.15622e-09}& {\cellcolor{white} 2.979e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.81971e-09}& {\cellcolor{white} 5.339e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.30635e-08}& {\cellcolor{white} 1.639e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.87329e-10}& {\cellcolor{white} 4.134e-10}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 8.95295e-10}& {\cellcolor{white} 1.036e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.11256e-09}& {\cellcolor{white} 2.186e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.43451e-09}& {\cellcolor{white} 2.58e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.80208e-09}& {\cellcolor{white} 2.678e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.06231e-09}& {\cellcolor{white} 3.026e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.58908e-09}& {\cellcolor{white} 4.823e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.25075e-08}& {\cellcolor{white} 1.605e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_8.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 10} \textbf{* Plot: sdETA ( jets[1] jets[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00740656}& {\cellcolor{white} 3.704}& {\cellcolor{green} 0.2057}& {\cellcolor{green} 0.2022}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000617109}& {\cellcolor{white} 4.726}& {\cellcolor{green} 0.5803}& {\cellcolor{green} 0.6083}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.0109695}& 
{\cellcolor{white} 4.015}& {\cellcolor{green} 0.1029}& {\cellcolor{green} 0.1176}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00171495}& {\cellcolor{white} 3.569}& {\cellcolor{green} 0.004928}& {\cellcolor{green} 0.003696}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.000255441}& {\cellcolor{white} 3.09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00219659}& {\cellcolor{white} 2.754}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.0026506}& {\cellcolor{white} 2.428}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00076492}& {\cellcolor{white} 2.046}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00334122}& {\cellcolor{white} 1.694}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000498242}& {\cellcolor{white} 3.363}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0007299}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00369956}& {\cellcolor{white} 2.156}& {\cellcolor{green} 0.0001229}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.00164949}& {\cellcolor{white} 1.795}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00131381}& {\cellcolor{white} 1.639}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00460312}& {\cellcolor{white} 1.539}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 0.000145998}& {\cellcolor{white} 1.448}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00444215}& {\cellcolor{white} 1.327}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} -0.00198176}& {\cellcolor{white} 1.196}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_9.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 11} \textbf{* Plot: M ( a[1] a[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& 
{\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 950.206}& {\cellcolor{white} 725.5}& {\cellcolor{red} 0.0}& {\cellcolor{red} 36.84}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 59.2471}& {\cellcolor{white} 49.28}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.002778}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 70.2822}& {\cellcolor{white} 64.56}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.01548}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 92.2723}& {\cellcolor{white} 92.82}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0622}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 117.071}& {\cellcolor{white} 124.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.1777}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 132.524}& {\cellcolor{white} 146.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.3347}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 143.801}& {\cellcolor{white} 162.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.5078}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 153.519}& {\cellcolor{white} 177.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.6926}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 159.525}& {\cellcolor{white} 184.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.7897}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 48.4767}& {\cellcolor{white} 39.08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.002553}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 55.4245}& {\cellcolor{white} 50.97}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.004558}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 74.3979}& {\cellcolor{white} 76.83}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0267}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 95.1626}& {\cellcolor{white} 107.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.1108}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 108.836}& {\cellcolor{white} 127.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.2203}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 119.749}& {\cellcolor{white} 143.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.3484}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 131.534}& {\cellcolor{white} 157.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.4736}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 143.672}& {\cellcolor{white} 167.2}& {\cellcolor{green} 
0.0}& {\cellcolor{green} 0.5656}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_10.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 12} \textbf{* Plot: PT ( a[1] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 588.092}& {\cellcolor{white} 368.7}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 12.98}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 33.0393}& {\cellcolor{white} 18.87}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 46.4939}& {\cellcolor{white} 31.54}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 72.005}& {\cellcolor{white} 60.42}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.000308}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 106.886}& {\cellcolor{white} 103.6}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.002407}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 132.381}& {\cellcolor{white} 141.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.009208}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 154.148}& {\cellcolor{white} 182.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.3163}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 172.887}& {\cellcolor{white} 223.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 1.906}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 181.168}& {\cellcolor{white} 246.2}& {\cellcolor{green} 0.0}& {\cellcolor{green} 2.06}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 29.8019}& {\cellcolor{white} 18.74}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 42.2098}& {\cellcolor{white} 31.98}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0001231}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 67.1266}& {\cellcolor{white} 63.03}& {\cellcolor{green} 0.0}& {\cellcolor{green} 9.834e-05}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 95.4197}& {\cellcolor{white} 106.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.001258}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 113.382}& {\cellcolor{white} 139.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.00999}\\ \hline {\cellcolor{white} 
bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 126.721}& {\cellcolor{white} 168.3}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.3807}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 138.701}& {\cellcolor{white} 193.4}& {\cellcolor{green} 0.0}& {\cellcolor{green} 1.426}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 146.238}& {\cellcolor{white} 198.9}& {\cellcolor{green} 0.0}& {\cellcolor{green} 1.065}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_11.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 13} \textbf{* Plot: PT ( a[2] ) }\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 334.941}& {\cellcolor{white} 290.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 3.619}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 18.2641}& {\cellcolor{white} 11.16}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 21.2596}& {\cellcolor{white} 15.54}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 25.9605}& {\cellcolor{white} 23.17}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0002055}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 31.2333}& {\cellcolor{white} 32.38}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0009032}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 34.5946}& {\cellcolor{white} 38.91}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0001002}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 37.1093}& {\cellcolor{white} 44.62}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0009971}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 39.4356}& {\cellcolor{white} 49.91}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.002623}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 40.8098}& {\cellcolor{white} 52.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.004823}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 16.5201}& {\cellcolor{white} 9.637}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 18.8185}& {\cellcolor{white} 12.99}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& 
{\cellcolor{white} 22.8735}& {\cellcolor{white} 19.58}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 26.8872}& {\cellcolor{white} 27.11}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0001937}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 29.3925}& {\cellcolor{white} 32.02}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0003031}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 31.3973}& {\cellcolor{white} 35.97}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0007697}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 33.6461}& {\cellcolor{white} 39.79}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.001184}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 35.6015}& {\cellcolor{white} 42.32}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.002116}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_12.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 14} \textbf{* Plot: THT}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 607.69}& {\cellcolor{white} 391.1}& {\cellcolor{red} 0.0}& {\cellcolor{red} 15.4}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 78.0345}& {\cellcolor{white} 14.32}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 144.593}& {\cellcolor{white} 27.95}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 270.786}& {\cellcolor{white} 53.37}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 475.979}& {\cellcolor{white} 55.0}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 680.318}& {\cellcolor{white} 55.83}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 939.672}& {\cellcolor{white} 106.7}& {\cellcolor{red} 0.0}& {\cellcolor{red} 27.89}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1352.19}& {\cellcolor{white} 109.7}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1966.54}& {\cellcolor{white} 391.6}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline 
{\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 74.1637}& {\cellcolor{white} 15.18}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 138.778}& {\cellcolor{white} 26.45}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 261.793}& {\cellcolor{white} 50.89}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 473.998}& {\cellcolor{white} 54.59}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 679.873}& {\cellcolor{white} 55.79}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 939.623}& {\cellcolor{white} 106.7}& {\cellcolor{red} 0.0}& {\cellcolor{red} 27.8}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1352.11}& {\cellcolor{white} 109.8}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1962.59}& {\cellcolor{white} 386.0}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_13.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 15} \textbf{* Plot: MET}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 8.33083e-09}& {\cellcolor{white} 1.078e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.97855e-10}& {\cellcolor{white} 4.233e-10}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 9.55638e-10}& {\cellcolor{white} 1.098e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.23057e-09}& {\cellcolor{white} 2.219e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.52485e-09}& {\cellcolor{white} 2.611e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.90226e-09}& {\cellcolor{white} 2.72e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& 
{\cellcolor{white} 5.15622e-09}& {\cellcolor{white} 2.979e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.81971e-09}& {\cellcolor{white} 5.339e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.30635e-08}& {\cellcolor{white} 1.639e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.87329e-10}& {\cellcolor{white} 4.134e-10}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 8.95295e-10}& {\cellcolor{white} 1.036e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 3.11256e-09}& {\cellcolor{white} 2.186e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.43451e-09}& {\cellcolor{white} 2.58e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 4.80208e-09}& {\cellcolor{white} 2.678e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.06231e-09}& {\cellcolor{white} 3.026e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 5.58908e-09}& {\cellcolor{white} 4.823e-09}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1.25075e-08}& {\cellcolor{white} 1.605e-08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_14.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{ Histogram 16} \textbf{* Plot: TET}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{23.0mm}|m{23.0mm}|m{18.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|m{19.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Integral}& {\cellcolor{yellow} Entries per event}& {\cellcolor{yellow} Mean}& {\cellcolor{yellow} RMS}& {\cellcolor{yellow} \% underflow}& {\cellcolor{yellow} \% overflow}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 4094}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1530.72}& {\cellcolor{white} 825.3}& {\cellcolor{red} 0.0}& {\cellcolor{red} 70.59}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 3934}& {\cellcolor{white} 1.0}& {\cellcolor{white} 129.338}& {\cellcolor{white} 32.52}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.001235}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 8756}& {\cellcolor{white} 1.0}& {\cellcolor{white} 212.346}& {\cellcolor{white} 54.5}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.00344}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 5358}& {\cellcolor{white} 1.0}& {\cellcolor{white} 368.751}& 
{\cellcolor{white} 98.32}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.04948}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 983}& {\cellcolor{white} 1.0}& {\cellcolor{white} 614.098}& {\cellcolor{white} 136.7}& {\cellcolor{green} 0.0}& {\cellcolor{green} 1.955}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 251}& {\cellcolor{white} 1.0}& {\cellcolor{white} 847.293}& {\cellcolor{white} 172.9}& {\cellcolor{red} 0.0}& {\cellcolor{red} 15.03}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 114}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1130.93}& {\cellcolor{white} 233.4}& {\cellcolor{red} 0.0}& {\cellcolor{red} 66.91}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 20.6}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1564.51}& {\cellcolor{white} 271.2}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 7.66}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2188.52}& {\cellcolor{white} 475.9}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 714691}& {\cellcolor{white} 1.0}& {\cellcolor{white} 120.486}& {\cellcolor{white} 31.72}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 855479}& {\cellcolor{white} 1.0}& {\cellcolor{white} 199.807}& {\cellcolor{white} 52.08}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.0009859}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 234627}& {\cellcolor{white} 1.0}& {\cellcolor{white} 351.793}& {\cellcolor{white} 94.87}& {\cellcolor{green} 0.0}& {\cellcolor{green} 0.02749}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 28616}& {\cellcolor{white} 1.0}& {\cellcolor{white} 596.305}& {\cellcolor{white} 134.8}& {\cellcolor{green} 0.0}& {\cellcolor{green} 1.993}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 6658}& {\cellcolor{white} 1.0}& {\cellcolor{white} 822.647}& {\cellcolor{white} 165.9}& {\cellcolor{orange} 0.0}& {\cellcolor{orange} 10.92}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 2939}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1097.74}& {\cellcolor{white} 215.8}& {\cellcolor{red} 0.0}& {\cellcolor{red} 61.64}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 513}& {\cellcolor{white} 1.0}& {\cellcolor{white} 1524.46}& {\cellcolor{white} 238.7}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 187}& {\cellcolor{white} 1.0}& {\cellcolor{white} 2144.43}& {\cellcolor{white} 445.2}& {\cellcolor{red} 0.0}& {\cellcolor{red} 100.0}\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[H] \begin{center} \includegraphics[scale=0.45]{selection_15.png}\\ \caption{ } \end{center} \end{figure} \newpage \subsection{Cut 2} \textbf{* Cut: select ( sdETA ( jets[1] jets[2] ) > 6.1 or sdETA ( jets[1] jets[2] ) < -6.1 ) and M ( jets[1] jets[2] ) > 1000.0}\\ \begin{table}[H] \begin{center} \begin{tabular}{|m{20.0mm}|m{27.0mm}|m{27.0mm}|m{33.0mm}|m{32.0mm}|} \hline {\cellcolor{yellow} Dataset}& {\cellcolor{yellow} Events kept: K}& {\cellcolor{yellow} Rejected events: R}& {\cellcolor{yellow} Efficiency: K /\- (K + R)}& {\cellcolor{yellow} Cumul. 
efficiency: K /\- Initial}\\ \hline {\cellcolor{white} signal}& {\cellcolor{white} 150.5 +/\-- 12.0}& {\cellcolor{white} 3943.5 +/\-- 12.1}& {\cellcolor{white} 0.03677 +/\-- 0.00294}& {\cellcolor{white} 0.03677 +/\-- 0.00294}\\ \hline {\cellcolor{white} bg\_vbf\_0\_100}& {\cellcolor{white} 362.3 +/\-- 18.8}& {\cellcolor{white} 3572.4 +/\-- 50.7}& {\cellcolor{white} 0.09208 +/\-- 0.00461}& {\cellcolor{white} 0.02982 +/\-- 0.00154}\\ \hline {\cellcolor{white} bg\_vbf\_100\_200}& {\cellcolor{white} 669.8 +/\-- 25.0}& {\cellcolor{white} 8087.1 +/\-- 39.2}& {\cellcolor{white} 0.07649 +/\-- 0.00284}& {\cellcolor{white} 0.06909 +/\-- 0.00258}\\ \hline {\cellcolor{white} bg\_vbf\_200\_400}& {\cellcolor{white} 145.6 +/\-- 11.9}& {\cellcolor{white} 5213.2 +/\-- 17.4}& {\cellcolor{white} 0.02717 +/\-- 0.00222}& {\cellcolor{white} 0.0269 +/\-- 0.0022}\\ \hline {\cellcolor{white} bg\_vbf\_400\_600}& {\cellcolor{white} 3.15 +/\-- 1.77}& {\cellcolor{white} 980.63 +/\-- 2.84}& {\cellcolor{white} 0.0032 +/\-- 0.0018}& {\cellcolor{white} 0.0032 +/\-- 0.0018}\\ \hline {\cellcolor{white} bg\_vbf\_600\_800}& {\cellcolor{white} 0.0761 +/\-- 0.2758}& {\cellcolor{white} 251.775 +/\-- 0.634}& {\cellcolor{white} 0.000302 +/\-- 0.001095}& {\cellcolor{white} 0.000302 +/\-- 0.001094}\\ \hline {\cellcolor{white} bg\_vbf\_800\_1200}& {\cellcolor{white} 0.00285 +/\-- 0.05343}& {\cellcolor{white} 114.640 +/\-- 0.394}& {\cellcolor{white} 2.49e-05 +/\-- 4.66e-04}& {\cellcolor{white} 2.49e-05 +/\-- 4.66e-04}\\ \hline {\cellcolor{white} bg\_vbf\_1200\_1600}& {\cellcolor{white} 6.47e-05 +/\-- 8.04e-03}& {\cellcolor{white} 20.553 +/\-- 0.209}& {\cellcolor{white} 3.15e-06 +/\-- 3.91e-04}& {\cellcolor{white} 3.14e-06 +/\-- 3.90e-04}\\ \hline {\cellcolor{white} bg\_vbf\_1600\_inf}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 7.513 +/\-- 0.378}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.0 +/\-- 0.0}\\ \hline {\cellcolor{white} bg\_dip\_0\_100}& {\cellcolor{white} 805.3 +/\-- 28.4}& {\cellcolor{white} 713887 +/\-- 1414}& {\cellcolor{white} 1.13e-03 +/\-- 3.97e-05}& {\cellcolor{white} 2.97e-04 +/\-- 1.05e-05}\\ \hline {\cellcolor{white} bg\_dip\_100\_200}& {\cellcolor{white} 695.2 +/\-- 26.4}& {\cellcolor{white} 854784 +/\-- 1268}& {\cellcolor{white} 8.13e-04 +/\-- 3.08e-05}& {\cellcolor{white} 6.35e-04 +/\-- 2.41e-05}\\ \hline {\cellcolor{white} bg\_dip\_200\_400}& {\cellcolor{white} 32.94 +/\-- 5.74}& {\cellcolor{white} 234594 +/\-- 411}& {\cellcolor{white} 1.40e-04 +/\-- 2.45e-05}& {\cellcolor{white} 1.38e-04 +/\-- 2.40e-05}\\ \hline {\cellcolor{white} bg\_dip\_400\_600}& {\cellcolor{white} 0.609 +/\-- 0.781}& {\cellcolor{white} 28615.6 +/\-- 53.6}& {\cellcolor{white} 2.13e-05 +/\-- 2.73e-05}& {\cellcolor{white} 2.12e-05 +/\-- 2.71e-05}\\ \hline {\cellcolor{white} bg\_dip\_600\_800}& {\cellcolor{white} 0.0504 +/\-- 0.2246}& {\cellcolor{white} 6658.9 +/\-- 27.8}& {\cellcolor{white} 7.57e-06 +/\-- 3.37e-05}& {\cellcolor{white} 7.56e-06 +/\-- 3.36e-05}\\ \hline {\cellcolor{white} bg\_dip\_800\_1200}& {\cellcolor{white} 0.00849 +/\-- 0.09214}& {\cellcolor{white} 2939.95 +/\-- 5.28}& {\cellcolor{white} 2.89e-06 +/\-- 3.13e-05}& {\cellcolor{white} 2.89e-06 +/\-- 3.13e-05}\\ \hline {\cellcolor{white} bg\_dip\_1200\_1600}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 513.34 +/\-- 2.66}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 0.0 +/\-- 0.0}\\ \hline {\cellcolor{white} bg\_dip\_1600\_inf}& {\cellcolor{white} 0.0 +/\-- 0.0}& {\cellcolor{white} 187.742 +/\-- 0.345}& {\cellcolor{white} 0.0 +/\-- 
0.0}& {\cellcolor{white} 0.0 +/\-- 0.0}\\ \hline \end{tabular} \end{center} \end{table} % ----------------------------------------------------------------------------- % SECTION Summary % ----------------------------------------------------------------------------- \newpage \section{ Summary} \subsection{Cut-flow charts} \begin{itemize} \item How to compare signal (S) and background (B): \textcolor{blue}{S/\-sqrt(S+B)} . \item Object definition selections are indicated in cyan. \item Reject and select are indicated by 'REJ' and 'SEL' respectively \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|m{36.0mm}|m{36.0mm}|m{36.0mm}|m{33.0mm}|} \hline {\cellcolor{yellow} Cuts}& {\cellcolor{yellow} Signal (S)}& {\cellcolor{yellow} Background (B)}& {\cellcolor{yellow} S vs B}\\ \hline {\cellcolor{white} Initial (no cut)}& {\cellcolor{white} 4094.08 +/\-- 1.13}& {\cellcolor{white} 4113516 +/\-- 4877}& {\cellcolor{white} 2.01760 +/\-- 0.00132}\\ \hline {\cellcolor{white} SEL: M ( jets[1] jets[2] ) > 120.0}& {\cellcolor{white} 4094.04 +/\-- 1.14}& {\cellcolor{white} 1863145 +/\-- 1947}& {\cellcolor{white} 2.99607 +/\-- 0.00177}\\ \hline {\cellcolor{white} SEL: ( sdETA ( jets[1] jets[2] ) > 6.1 or sdETA ( }& {\cellcolor{white} 150.5 +/\-- 12.0}& {\cellcolor{white} 2715.0 +/\-- 51.6}& {\cellcolor{white} 2.81 +/\-- 0.22}\\ \hline \end{tabular} \end{center} \end{table} \end{document}
{ "alphanum_fraction": 0.5534557577, "avg_line_length": 73.5936853002, "ext": "tex", "hexsha": "fee7c5c4602552aa9bff7ae33485a0a01df83a2e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7d3fc08f5ae5b17a3500eba19a2e43f87f076ce5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sheride/axion_pheno", "max_forks_repo_path": "optimization/first_sdEta_mjj_optimization/analyses/analysis_deltaeta6.1_mmjj_1000/Output/PDF/MadAnalysis5job_0/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7d3fc08f5ae5b17a3500eba19a2e43f87f076ce5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sheride/axion_pheno", "max_issues_repo_path": "optimization/first_sdEta_mjj_optimization/analyses/analysis_deltaeta6.1_mmjj_1000/Output/PDF/MadAnalysis5job_0/main.tex", "max_line_length": 356, "max_stars_count": null, "max_stars_repo_head_hexsha": "7d3fc08f5ae5b17a3500eba19a2e43f87f076ce5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sheride/axion_pheno", "max_stars_repo_path": "optimization/first_sdEta_mjj_optimization/analyses/analysis_deltaeta6.1_mmjj_1000/Output/PDF/MadAnalysis5job_0/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 51300, "size": 142183 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % UMB-CS240-2016S: Programming in C % Copyright 2016 Pejman Ghorbanzade <[email protected]> % Creative Commons Attribution-ShareAlike 4.0 International License % More info: https://github.com/ghorbanzade/UMB-CS240-2016S %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Question 2} Write a program \texttt{reverse.c} that takes an integer number as a command-line argument and prints its reverse. Your program is expected to function as shown below. \begin{terminal} $ gcc reverse.c -o reverse $ ./reverse error: missing command line argument $ ./reverse 12345 Reverse: 54321 $ ./reverse -54321 Reverse: -12345 $ ./reverse hello error: not a number $ ./reverse 1234 5678 hello Reverse: 4321 \end{terminal}
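One way the expected behaviour can be implemented is sketched below. This is a minimal illustrative sketch, not the only accepted solution: it assumes a standard C toolchain, uses a plain \texttt{verbatim} block (since no code-listing environment for C sources is defined in this fragment), and the helper name \texttt{is\_number} is purely illustrative.

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

/* Returns 1 if str is a (possibly signed) decimal integer, 0 otherwise. */
static int is_number(const char *str)
{
    int i = 0;
    if (str[i] == '-' || str[i] == '+')
        i++;
    if (str[i] == '\0')
        return 0;
    for (; str[i] != '\0'; i++)
        if (!isdigit((unsigned char) str[i]))
            return 0;
    return 1;
}

int main(int argc, char *argv[])
{
    long num, reversed = 0;

    if (argc < 2) {
        printf("error: missing command line argument\n");
        return EXIT_FAILURE;
    }
    if (!is_number(argv[1])) {
        printf("error: not a number\n");
        return EXIT_FAILURE;
    }

    /* Assumes the input fits in a long; overflow is not handled here. */
    num = atol(argv[1]);

    /* Peel off the last digit and append it to the reversed value.
       With truncating division, the sign carries through automatically. */
    while (num != 0) {
        reversed = reversed * 10 + num % 10;
        num /= 10;
    }

    printf("Reverse: %ld\n", reversed);
    return EXIT_SUCCESS;
}
\end{verbatim}

Note that only the first command-line argument is examined, which matches the last example in the expected output above.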
{ "alphanum_fraction": 0.635250918, "avg_line_length": 30.2592592593, "ext": "tex", "hexsha": "20ece145fc2af9b210d0a2e4f845094898188d86", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_forks_repo_path": "src/tex/main/hw03/hw03q02.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_issues_repo_issues_event_max_datetime": "2016-06-20T03:04:35.000Z", "max_issues_repo_issues_event_min_datetime": "2016-05-16T23:55:39.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_issues_repo_path": "src/tex/main/hw03/hw03q02.tex", "max_line_length": 114, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_stars_repo_path": "src/tex/main/hw03/hw03q02.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:24.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:24.000Z", "num_tokens": 210, "size": 817 }
\subsubsection{Overdamped ($\Delta > 0$)} This is the simplest case to deal with because our two roots, $r_1$ and $r_2$, are real and distinct. So, our solution is \begin{equation*} y = C_1e^{r_1 t} + C_2e^{r_2 t} \end{equation*} \begin{center} \includegraphics[width=0.5\textwidth]{./higherOrder/freeVibrs/overdamped.png} \end{center} We know that $r_1, r_2 < 0$, so \begin{equation*} \lim\limits_{t \to \infty}\left(C_1e^{r_1 t} + C_2e^{r_2 t}\right) = 0 \end{equation*} meaning the mass's displacement decays back to equilibrium over time without oscillating.
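For a concrete (purely illustrative) example, not tied to the figure above, suppose the characteristic equation of the system is
\begin{equation*}
r^2 + 3r + 2 = 0
\end{equation*}
Then $\Delta = 3^2 - 4(1)(2) = 1 > 0$, the roots are $r_1 = -1$ and $r_2 = -2$, and the solution is
\begin{equation*}
y = C_1e^{-t} + C_2e^{-2t}
\end{equation*}
Both terms decay, so $y \to 0$ as $t \to \infty$ for any choice of $C_1$ and $C_2$: the mass creeps back to equilibrium without ever oscillating.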
{ "alphanum_fraction": 0.6936090226, "avg_line_length": 40.9230769231, "ext": "tex", "hexsha": "e49acd49d0d607b003ed942a675b42a3a4b3f95f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_path": "diffEq/higherOrder/freeVibrs/overdamped.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_path": "diffEq/higherOrder/freeVibrs/overdamped.tex", "max_line_length": 134, "max_stars_count": null, "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_path": "diffEq/higherOrder/freeVibrs/overdamped.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 205, "size": 532 }
\documentclass[12pt]{article} \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage[T1]{fontenc} \usepackage{parskip} \usepackage{lipsum} \usepackage[a4paper, total={6in, 8in}]{geometry} \usepackage{setspace} \usepackage[superscript]{cite} \usepackage{xcolor} \usepackage{hyperref} \usepackage{enumitem} \usepackage{listings} \usepackage{amsmath} \usepackage{amssymb} \usepackage{ifsym} \usepackage{amssymb} \usepackage{tikz} \input{./commons.tex} % change margins within the geometry package and eventually the font size \begin{document} \begin{flushright} \today \end{flushright} {\Large \textbf{Assignment 4}} {\large Query Optimization} \textsc{Ilaria Battiston - 03723403} \\ \textsc{Mareva Zenelaj - 03736071} \rule{\linewidth}{0.5pt} \section{First exercise} \subsection{Query Graph} This query graph assumes four relations $R_1$, $R_2$, $R_3$, $R_4$, each of them with cardinality 10. We are considering the following query graph: \begin{tikzpicture} \node (R0) at (0,0) {$R_1$}; \node (R1) [right of=R0] {$R_2$}; \node (R2) [below of=R0] {$R_3$}; \node (R3) [right of=R2] {$R_4$}; \draw (R0) -- (R1); \draw (R0) -- (R2); \draw (R1) -- (R3); \draw (R2) -- (R3); \end{tikzpicture} The selectivities of the query graph are: \begin{itemize} \item $R_1 \bowtie R_2 = 0.4$; \item $R_1 \bowtie R_3 = 0.5$; \item $R_2 \bowtie R_4 = 0.49$; \item $R_3 \bowtie R_4 = 0.6$. \end{itemize} The algorithm GreedyOperatorOrdering will perform the following steps: \begin{enumerate} \item Join $R_1$ and $R_2$; \item Join $R_3$ and $R_4$; \item Join the two results. \end{enumerate} $$ C_{out}((R_1 \bowtie R_2) \bowtie (R_3 \bowtie R_4)) = 40 + 60 + 588 = 688$$ However, there exists another join ordering giving a better result: $$ C_{out}((R_1 \bowtie R_3) \bowtie (R_2 \bowtie R_4)) = 50 + 49 + 588 = 687$$ \subsection{Join Tree} Execution of GreedyOperatorOrdering: \begin{tikzpicture}[sibling distance=3cm, level 2/.style={sibling distance=1.5cm}] \node (j1) {$\bowtie$} { child { node (j2) {$\bowtie$} child { node (j3) {$R1$} } child { node (j6) {$R2$} } } child { node (j7) {$\bowtie$} child { node (j8) {$R3$} } child { node (j8) {$R4$} } } }; \end{tikzpicture} Optimal join tree: \begin{tikzpicture}[sibling distance=3cm, level 2/.style={sibling distance=1.5cm}] \node (j1) {$\bowtie$} { child { node (j2) {$\bowtie$} child { node (j3) {$R1$} } child { node (j6) {$R3$} } } child { node (j7) {$\bowtie$} child { node (j8) {$R2$} } child { node (j8) {$R4$} } } }; \end{tikzpicture} \subsection{GreedyJoinOrdering-1} In this example the proposed weight function corresponds to the cardinality. 
However, all relations have the same cardinality, therefore it is assumed that the chosen ordering is the natural one:
\begin{tikzpicture}[sibling distance=2.5cm] \node (j1) {$\bowtie$} { child { node (j2) {$\bowtie$} child { node (j3) {$\bowtie$} child { node (j5) {$R_1$} } child { node (j6) {$R_2$} } } child { node (j6) {$R_3$} } } child { node (j7) {$R_4$} } }; \end{tikzpicture}
$$C_{out}(((R_1 \bowtie R_2) \bowtie R_3) \bowtie R_4) = 40 + 200 + 588 = 828$$
(In every complete plan the last term, $588 = 10^4 \cdot 0.4 \cdot 0.5 \cdot 0.49 \cdot 0.6$, is the cardinality of the final join of all four relations.)
\newpage
\subsection{GreedyJoinOrdering-2} The algorithm joins according to selectivity:
\begin{tikzpicture}[sibling distance=2.5cm] \node (j1) {$\bowtie$} { child { node (j2) {$\bowtie$} child { node (j3) {$\bowtie$} child { node (j5) {$R_1$} } child { node (j6) {$R_2$} } } child { node (j6) {$R_4$} } } child { node (j7) {$R_3$} } }; \end{tikzpicture}
$$ C_{out}(((R_1 \bowtie R_2) \bowtie R_4) \bowtie R_3) = 40 + 196 + 588 = 824$$
\subsection{GreedyJoinOrdering-3} In this example we report the $C_{out}$ obtained from each possible starting relation and show only the optimal join tree:
\begin{itemize} \item Starting from $R_1$: $ C_{out}(((R_1 \bowtie R_2) \bowtie R_4) \bowtie R_3) = 40 + 196 + 588 = 824$; \item Starting from $R_2$: $ C_{out}(((R_2 \bowtie R_1) \bowtie R_4) \bowtie R_3) = 40 + 196 + 588 = 824$; \item Starting from $R_3$: $ C_{out}(((R_3 \bowtie R_1) \bowtie R_2) \bowtie R_4) = 50 + 200 + 588 = 838$; \item Starting from $R_4$: $ C_{out}(((R_4 \bowtie R_2) \bowtie R_1) \bowtie R_3) = 49 + 196 + 588 = 833$. \end{itemize}
Clearly, the minimum value is achieved starting from $R_1$ or $R_2$, hence the optimal tree is:
\begin{tikzpicture}[sibling distance=2.5cm] \node (j1) {$\bowtie$} { child { node (j2) {$\bowtie$} child { node (j3) {$\bowtie$} child { node (j5) {$R_1$} } child { node (j6) {$R_2$} } } child { node (j6) {$R_4$} } } child { node (j7) {$R_3$} } }; \end{tikzpicture}
\end{document}
{ "alphanum_fraction": 0.620997921, "avg_line_length": 22.2685185185, "ext": "tex", "hexsha": "de195153794618a491aad2ae73c12cde6df28253", "lang": "TeX", "max_forks_count": 69, "max_forks_repo_forks_event_max_datetime": "2022-03-17T19:27:50.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-02T21:46:57.000Z", "max_forks_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "mrahtapot/TUM", "max_forks_repo_path": "Query Optimization/assignments/Assignment 4.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_issues_repo_issues_event_max_datetime": "2021-07-31T19:35:57.000Z", "max_issues_repo_issues_event_min_datetime": "2021-02-16T12:22:43.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "mrahtapot/TUM", "max_issues_repo_path": "Query Optimization/assignments/Assignment 4.tex", "max_line_length": 195, "max_stars_count": 225, "max_stars_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "mrahtapot/TUM", "max_stars_repo_path": "Query Optimization/assignments/Assignment 4.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T22:25:38.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-02T10:49:41.000Z", "num_tokens": 1917, "size": 4810 }
\documentclass[]{article} \usepackage{lmodern} \usepackage{setspace} \setstretch{2} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{hyperref} \hypersetup{unicode=true, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \newcommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{} \pretitle{\vspace{\droptitle}} \posttitle{} \author{} \preauthor{}\postauthor{} \date{} \predate{}\postdate{} \fontsize{8}{20} \pagenumbering{gobble} \begin{document} \subsection{Figure legends}\label{figure-legends} Fig. 1. Map of study area showing location and source of samples. The bold region in the inset denotes the area where \emph{Carcharhinus limbatus} were sampled from in this study. The grey-shaded regions denote sampling areas for \emph{Carcharhinus tilstoni} in previous studies in Queensland (Qld) (Harry \emph{et al.} 2013) and the Northern Territory (NT) (Stevens and Wiley 1986). NSW, New South Wales. Fig. 2. Length structure and source of \emph{C. limbatus} samples used in the present study. Fig. 3. Age and growth of \emph{C. limbatus}. Panels (a) and (b) show length at age. Line and shaded area are the fitted growth model with 95\% confidence and prediction intervals. Panel (c) shows length of neonate sharks (known age 0) used to jointly estimate length-at-birth in the growth model. Data from Qld \emph{C. tilstoni} (Harry \emph{et al.} 2013) are provided for comparison in panels (a), (b) and (c). Panel (d) shows growth model residuals. 
Panel (e) compares growth model estimates of mean length at age for \emph{C. limbatus} and two \emph{C. tilstoni} populations. Panel (f) compares the growth model from (a) and (b) with the length structure of neonates and juveniles from Moreton Bay. Colours denote possible cohorts. Fig. 4. Weight at length of \emph{C. limbatus}. Panel (a) shows mean weight at length with 95\% confidence and prediction intervals estimated using log-linear regression. Panel (b) compares log-transformed weight at length with QLD \emph{C. tilstoni} (Harry \emph{et al.} 2013). Fig. 5. Maturity at length and age of \emph{C. limbatus}. Panels (a) and (b) show logistic regression models with 95\% confidence intervals used to estimate length and age at maturity. Points are empirical proportion of individuals mature at length and age. Panels (c) and (d) compare length and age at 50\% maturity and maternity of \emph{C. limbatus} with two populations of \emph{C. tilstoni} (Stevens and Wiley 1986; Davenport and Stevens 1988; Harry \emph{et al.} 2013). Fig. 6. Comparative demography of \emph{C. limbatus} and \emph{C. tilstoni}. Panel (a) is a density plot of intrinsic rate of population increase, \emph{r}, based on 1000 Monte Carlo simulations. Panel (b) shows the mean and 95\% quantiles of \emph{r} and values assumed for natural mortality, \emph{M}, from Monte Carlo simulation. Solid lines show how \emph{r} varies as a function of \emph{M} when all other variables are held equal, illustrating the range of plausible values for both quantities. Separate lines show the change when biennial or triennial reproduction is assumed for \emph{C. limbatus}. Panels (c) and (d) are biomass-weighted stable age distributions as a function of age and length. Darker shaded regions show the proportion of mature biomass for males, and mature and maternal biomass for females. Females in maternal condition are those that would have contributed to recruitment within a given year. \newpage \subsection{Figures}\label{figures} \includegraphics{/Users/alharry/Documents/Manuscripts/limbatus/reports/figures_files/figure-latex/fig1-1.pdf}\\ Fig. 1 \newpage \includegraphics{/Users/alharry/Documents/Manuscripts/limbatus/reports/figures_files/figure-latex/fig2-1.pdf}\\ Fig 2. \newpage \includegraphics{/Users/alharry/Documents/Manuscripts/limbatus/reports/figures_files/figure-latex/fig3-1.pdf} Fig 3. \newpage \includegraphics{/Users/alharry/Documents/Manuscripts/limbatus/reports/figures_files/figure-latex/fig4-1.pdf}\\ Fig 4. \newpage \includegraphics{/Users/alharry/Documents/Manuscripts/limbatus/reports/figures_files/figure-latex/fig5-1.pdf}\\ Fig 5. \newpage \includegraphics{/Users/alharry/Documents/Manuscripts/limbatus/reports/figures_files/figure-latex/fig6-1.pdf} Fig 6. \end{document}
{ "alphanum_fraction": 0.7678904792, "avg_line_length": 36.9425287356, "ext": "tex", "hexsha": "530d7747d36c14bd9afa02c776834bb1a166c74e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c8e8ef3c6964e7a7c316684c843fa137013358c2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alharry/limbatus", "max_forks_repo_path": "reports/figures.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c8e8ef3c6964e7a7c316684c843fa137013358c2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alharry/limbatus", "max_issues_repo_path": "reports/figures.tex", "max_line_length": 111, "max_stars_count": null, "max_stars_repo_head_hexsha": "c8e8ef3c6964e7a7c316684c843fa137013358c2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alharry/limbatus", "max_stars_repo_path": "reports/figures.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1958, "size": 6428 }
\chapter{Background}\label{chap:background}

\section{JavaScript}
\label{sec:background-javascript}
JavaScript (abbreviated JS) is a programming language that is compliant with the ECMAScript Language Specification \citep{ecma-script}. It is one of the key technologies for developing client-side web applications, alongside HTML and CSS. It is an interpreted, dynamically typed language that allows programmers to write small scripts very quickly. Its low learning curve, the possibility of fast prototyping and its simple interaction with the contents of a web page through the DOM (Document Object Model) made it the ideal programming language for introducing dynamic behavior to web applications.

Numerous JavaScript frameworks that aid the development of web applications have been created over the years. \mintinline{text}{jQuery} is used by 74.2\% of the 10 million most popular websites, making it the most popular JavaScript framework on the market \citep{jquery}\citep{w3techs-javascript-libraries-statistics}. Other frameworks like \mintinline{text}{AngularJS}, \mintinline{text}{React} and \mintinline{text}{Vue.js} appeared as elegant alternatives for building complex and scalable web applications \citep{angularjs}\citep{react}\citep{vuejs}. A very simple example of how to use \mintinline{text}{jQuery} for adding dynamic behavior to a web page is shown in \coderef{code:background-jquery-example}.

\begin{code}
\htmlcode{code/background/javascript/jquery-example.html}
\captionsetup{aboveskip=0pt, belowskip=10pt}
\caption[Example using JavaScript and HTML]{\textbf{Example using JavaScript and HTML} - Very simple example that uses the jQuery library to hide an element from the DOM when it gets clicked. The example was extracted from the W3Schools website \citep{w3schools}.}
\label{code:background-jquery-example}
\end{code}

All browsers have built-in interpreters that support running JavaScript code. The language is also used to develop backend applications that run in server-side runtime environments like NodeJS \citep{nodejs}. \figref{fig:background-survey-programming-languages} shows that it is the most popular programming language among professional developers. Additionally, 5 of the 12 most popular frameworks used for web development are JavaScript frameworks. Similarly, it can be seen in \figref{fig:background-programming-languages-evolution} that JavaScript's popularity has remained stable over the last three years and that it has been the most popular programming language for seven consecutive years.

\input{figures/background/stackoverflowSurvey-ProgrammingLanguages2019.tex}
\input{figures/background/stackoverflowSurvey-ProgrammingLanguagesEvolution.tex}

\subsection{Types}
Section 8 of the ECMAScript Language Specification presents the allowed types for a variable \citep{ecma-script}. These are: \mintinline{text}{undefined}, \mintinline{text}{null}, \mintinline{text}{boolean}, \mintinline{text}{string}, \mintinline{text}{number}, \mintinline{text}{object} and, since ES6/ECMAScript 2015, \mintinline{text}{symbol}. All of them except for \mintinline{text}{object} are called primitives.

It must be noted that functions and arrays are not built-in types. A Function is considered to be a `member of the Object type that is an instance of the standard built-in \mintinline{text}{Function} constructor and that may be invoked as a subroutine' \citep{ecma-script}. The \mintinline{text}{Function} object is a built-in object with a \mintinline{text}{[[Call]]} internal property, which means that it can be invoked.
Similarly, arrays are instances of the built-in \mintinline{text}{Array} object. Both concepts are shown in \coderef{code:background-functions-and-arrays}. \begin{code} \jscode{code/background/javascript/functions-and-arrays.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Functions and arrays are built-in objects in JS]{\textbf{Functions and arrays are built-in objects in JS} - A function can be created using the \mintinline{text}{function} keyword or using the \mintinline{text}{Function} built-in constructor. Similarly, arrays can be created using brackets or the \mintinline{text}{Array} built-in object.} \label{code:background-functions-and-arrays} \end{code} Finally, the \mintinline{text}{typeof} operator is not an exact match with the built-in types. It surprisingly returns \mintinline{text}{'object'} for \mintinline{text}{null}, \mintinline{text}{'function'} for a \mintinline{text}{function} and \mintinline{text}{'object'} for an \mintinline{text}{array}. \coderef{code:background-javascript-typeof} gives some examples about this operator. \begin{code} \jscode{code/background/javascript/javascript-typeof.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[typeof JavaScript operator]{\textbf{\mintinline{text}{typeof} JavaScript operator} - Examples of the \mintinline{text}{typeof} operator for \mintinline{text}{null}, \mintinline{text}{function} and \mintinline{text}{array}.} \label{code:background-javascript-typeof} \end{code} \subsection{Wrapper Objects for Primitives} JavaScript also includes the built-in objects \mintinline{text}{Boolean}, \mintinline{text}{String} and \mintinline{text}{Number}. Whenever a method or a property of a primitive value of type \mintinline{text}{boolean}, \mintinline{text}{string} or \mintinline{text}{number} is accessed, JavaScript will convert the primitive value into its corresponding built-in object before accessing the property or invoking the method, as illustrated in \coderef{code:background-javascript-wrapper-objects}. It is however considered a bad practice to explicitly use the built-in objects instead of the primitive values. Several JavaScript linters like JSHint will consider this an error \citep{jshint-wrapper-objects-error}. \begin{code} \jscode{code/background/javascript/javascript-wrapper-objects.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Wrapper Objects for primitives]{\textbf{Wrapper Objects for primitives} - When a method is invoked on a primitive value like \mintinline{text}{number} or \mintinline{text}{string}, an intermediate wrapper object is created.} \label{code:background-javascript-wrapper-objects} \end{code} \subsection{TypeCoercion} \label{sec:background-js-type-coercion} Type coercion is the process of converting one type into another one. In JavaScript, a variable can be converted into different types depending on the operator and the value of the other operands. The specific definition can be found in the ECMAScript Language Specification \citep{ecma-script}. The following section will cover the basics about JavaScript's type coercion. A detailed explanation of how JavaScript converts a variable type into another one for each operator can be seen in the ECMAScript Language Specification \citep{ecma-script}. It is however intended to give insight about this kind of behavior to highlight its importance for performing type inference in JavaScript. Therefore, specific operator related type coercion is explained only for operator \mintinline{text}{+}. 
There are excellent references regarding JavaScript's Type Coercion, like Kyle Simpson's book `You Don't Know JS: Types \& Grammar' \citep{you-dont-know-js}. \subsubsection{Types of conversion} \label{types_of_conversion} There are two types of type coercion: explicit and implicit \citep{you-dont-know-js}. Explicit type coercion is simply when the developer chooses to convert one type into another one. The developer achieves this by writing a piece of code that explicitly shows her intention, as shown in \coderef{code:background-explicit-type-coercion}. This type of coercion is also called type casting \citep{you-dont-know-js}. \begin{code} \jscode{code/background/javascript/type-coercion/explicit-type-coercion.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Explicit JavaScript Type Coercion]{\textbf{Explicit JavaScript Type Coercion} - The developer explicitly transforms a type into another one. The return values of \mintinline{text}{String()}, \mintinline{text}{Number()} and \mintinline{text}{Boolean()} are always of type \mintinline{text}{string}, \mintinline{text}{number} and \mintinline{text}{boolean}, respectively.} \label{code:background-explicit-type-coercion} \end{code} On the other hand, depending on the operator and the context, JavaScript will perform implicit type transformations automatically. If the developer is not aware of this, she may face some unexpected behavior, as shown in \coderef{code:background-implicit-type-coercion}. \begin{code} \jscode{code/background/javascript/type-coercion/implicit-type-coercion.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Implicit JavaScript Type Coercion]{\textbf{Implicit JavaScript Type Coercion} - Examples given by Douglas Crockford in his talk `JavaScript: The Good Parts' at Google \citep{js-the-good-parts}.} \label{code:background-implicit-type-coercion} \end{code} In any case, JavaScript will always convert any value into a primitive value \citep{ecma-script}, regardless of explicit or implicit type coercion. There is no coercion mechanism that results in an \mintinline{text}{object} or a \mintinline{text}{function}. It is fundamental to understand how a value gets coerced into one of these types. JavaScript mainly performs this by applying one of three so called \mintinline{text}{Abstract Operations} \citep{ecma-script}: \mintinline{text}{ToString()}, \mintinline{text}{ToNumber()} and \mintinline{text}{ToBoolean()}. The operation \mintinline{text}{ToPrimitive()} is also called within the operators. However, each operator will call either \mintinline{text}{ToString()}, \mintinline{text}{ToNumber()} or \mintinline{text}{ToBoolean()} after calling \mintinline{text}{ToPrimitive()}. This operation will be explained in an independent section. \subsubsection{ToString} JavaScript performs this conversion in a very intuitive way, as explained in \coderef{code:background-to-string-operation}. Every primitive gets converted as expected. Objects are converted into strings by calling \mintinline{text}{ToPrimitive()} with \mintinline{text}{hint = string}, as presented in \coderef{code:background-to-string-implementation}. 
\begin{code} \jscode{code/background/javascript/type-coercion/to-string.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[ToString implementation]{\textbf{ToString implementation} - If the input argument has type \mintinline{text}{object}, it will call \mintinline{text}{ToPrimitive()} with \mintinline{text}{hint = string} and then recursively call itself again.} \label{code:background-to-string-implementation} \end{code} \begin{code} \jscode{code/background/javascript/type-coercion/string-conversion.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[ToString operation examples]{\textbf{ToString operation examples} - Numbers are converted into string in an expected way. The values \mintinline{text}{true, false, null, undefined, NaN}, however, are converted into a \mintinline{text}{string} containing the value's name. This is one of the main reasons why it is common to see the words 'undefined', 'null' or 'NaN' in web applications.} \label{code:background-to-string-operation} \end{code} \subsubsection{ToNumber} \coderef{code:background-to-number-operation} presents this type of conversion. Strings and boolean values get converted in the expected way. Curiously, \mintinline{text}{undefined} will be converted into \mintinline{text}{NaN} and \mintinline{text}{null} into \mintinline{text}{0}. Similarly, objects are converted into numbers by calling \mintinline{text}{ToPrimitive()} with \mintinline{text}{hint = number}, as presented in \coderef{code:background-to-number-implementation}. \begin{code} \jscode{code/background/javascript/type-coercion/to-number.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[ToNumber implementation]{\textbf{ToNumber implementation}} \label{code:background-to-number-implementation} \end{code} \begin{code} \jscode{code/background/javascript/type-coercion/number-conversion.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[ToNumber operation examples]{\textbf{ToNumber operation examples} - Every non-printable character is removed from the string before converting it into a number.} \label{code:background-to-number-operation} \end{code} \subsubsection{ToBoolean} Only the values presented in \coderef{code:background-to-boolean-operation} are converted into \mintinline{text}{false}. Everything else is coerced to \mintinline{text}{true}. That means, that every non-empty \mintinline{text}{string}, every non-zero \mintinline{text}{number} and every \mintinline{text}{object} gets coerced to \mintinline{text}{true}. Values that are converted into \mintinline{text}{false}, like \mintinline{text}{null} or \mintinline{text}{undefined} are called \textit{falsy} values \citep{you-dont-know-js}. \begin{code} \jscode{code/background/javascript/type-coercion/boolean-conversion.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[ToBoolean operation examples]{\textbf{ToBoolean operation examples} - Values that are coerced to false are called \textit{falsy} values.} \label{code:background-to-boolean-operation} \end{code} \subsubsection{ToPrimitive} The signature of this function is \mintinline{text}{ToPrimitive(value, preferredType)}. The allowed values for \mintinline{text}{preferredType} are \mintinline{text}{string} or \mintinline{text}{number}. Every JavaScript operator will try to convert an object into a primitive by invoking \mintinline{text}{ToPrimitive} with a different value for \mintinline{text}{preferredType}. 
If no value is passed, \mintinline{text}{number} is considered as the default, except for JavaScript native \mintinline{text}{Date} objects, where \mintinline{text}{string} is used as default. An implementation of this behavior is shown in \coderef{code:background-to-primitive-operation}. \begin{code} \jscode{code/background/javascript/type-coercion/to-primitive/to-primitive.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[ToPrimitive operation]{\textbf{ToPrimitive operation}} \label{code:background-to-primitive-operation} \end{code} An important aspect that should be considered, is that the type of the return value of \mintinline{text}{ToPrimitive()} does not necessarily have to match the chosen \mintinline{text}{preferredType}. This means concretely that \mintinline{text}{ToPrimitive(a, number)} can be called and return a \mintinline{text}{string} as a result, for example. Of course, when performing an explicit type conversion, JS will eventually convert whatever \mintinline{text}{ToPrimitive} returns into the correct type. But some operators will perform different implicit type conversions \textit{after} calling \mintinline{text}{ToPrimitive()}. The specific type conversion will then be dependent on the type of the return value of \mintinline{text}{ToPrimitive()} and not on the chosen \mintinline{text}{preferredType}. All objects that are converted to \mintinline{text}{boolean} will be coerced to \mintinline{text}{true}. The following paragraphs explain how objects get converted into strings or numbers. \subsubsection{Object to String} The object is first converted into a primitive value calling the \mintinline{text}{ToPrimitive()} method, using \mintinline{text}{string} as \mintinline{text}{preferredType}. The obtained primitive value is then normally converted into a \mintinline{text}{string}, as explained before in \coderef{code:background-to-string-implementation}. \coderef{code:background-object-into-string} shows a straightforward example. \coderef{code:background-object-into-string-not-string-return-value} provides an example where \mintinline{text}{ToPrimitive(val, String)} does not return a \mintinline{text}{string}, either because simply \mintinline{text}{toString()} does not return a \mintinline{text}{string} or because \mintinline{text}{valueOf()} gets being called instead of \mintinline{text}{toString()}. \begin{code} \jscode{code/background/javascript/type-coercion/to-primitive/normal-object-to-string.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Object into string conversion]{\textbf{Object into string conversion} - An object has a \mintinline{text}{toString()} method that returns a string.} \label{code:background-object-into-string} \end{code} \begin{code} \jscode{code/background/javascript/type-coercion/to-primitive/object-to-string-returning-not-a-string.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Object into string conversion]{\textbf{Object into string conversion} - An object that does not return a \mintinline{text}{string} even though \mintinline{text}{ToPrimitive()} is called with \mintinline{text}{hint = string}.} \label{code:background-object-into-string-not-string-return-value} \end{code} \subsubsection{Object to Number} The object gets converted into a primitive by calling \mintinline{text}{ToPrimitive(val, Number)}. Normal number conversion is then applied to this primitive value. \coderef{code:background-object-into-number} provides a basic example. 
Similarly, \coderef{code:background-object-into-string-not-number-return-value} shows how \mintinline{text}{ToPrimitive()} might return something that is not a \mintinline{text}{number}. \begin{code} \jscode{code/background/javascript/type-coercion/to-primitive/normal-object-to-number.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Object into number conversion]{\textbf{Object into number conversion} - An object has a \mintinline{text}{valueOf()} method that returns a \mintinline{text}{number}.} \label{code:background-object-into-number} \end{code} \begin{code} \jscode{code/background/javascript/type-coercion/to-primitive/object-to-number-returning-not-a-number.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[Object into number conversion]{\textbf{Object into number conversion} - An object that does not return a \mintinline{text}{number} even though \mintinline{text}{ToPrimitive()} is called with \mintinline{text}{hint = number}.} \label{code:background-object-into-string-not-number-return-value} \end{code} \subsubsection{The \mintinline{text}{+} operator} If any of the operands is an object, it will first convert it into a primitive by passing no \mintinline{text}{preferredType} to the \mintinline{text}{ToPrimitive()} method. Afterwards, if any value is a \mintinline{text}{string}, this operator will implicitly convert both values into a \mintinline{text}{string} and perform a normal string concatenation between them. Else, it will implicitly convert each value into a \mintinline{text}{number} and perform a normal arithmetic addition. An implementation is presented in \coderef{code:background-plus-operator-implementation}. \coderef{code:background-plus-operator-simple-examples} shows some basic examples. As explained before and shown in \coderef{code:background-plus-operator-object-example}, \mintinline{text}{ToPrimitive()} may return a \mintinline{text}{string}, even when it is called with no \mintinline{text}{preferredType}. This can happen because: \begin{enumerate} \item \mintinline{text}{valueOf()} is returning a String. \item \mintinline{text}{valueOf()} is not returning a primitive, which would make \mintinline{text}{toString()} being called instead. \item One of the operands is the native JavaScript \mintinline{text}{Date} Object. \end{enumerate} \begin{code} \jscode{code/background/javascript/type-coercion/plus-operator-implementation.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[JavaScript + operator implementation]{\textbf{JavaScript \mintinline{text}{+} operator implementation}} \label{code:background-plus-operator-implementation} \end{code} \begin{code} \jscode{code/background/javascript/type-coercion/plus-operator-simple-examples.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[JavaScript + operator examples]{\textbf{JavaScript \mintinline{text}{+} operator examples}} \label{code:background-plus-operator-simple-examples} \end{code} \begin{code} \jscode{code/background/javascript/type-coercion/plus-operator-object.js} \captionsetup{aboveskip=0pt, belowskip=10pt} \caption[JavaScript + operator with an object]{\textbf{JavaScript \mintinline{text}{+} operator with an object}} \label{code:background-plus-operator-object-example} \end{code} \subsection{NPM Packages} \label{sec:background-npm-packages} The Node Package Manager (NPM) is the default package manager for NodeJS, a JavaScript runtime environment built on Google Chrome's V8 JavaScript engine \citep{nodejs}. 
It offers a public registry where JavaScript libraries can be uploaded to be used by the developer community. Dependencies can be installed either globally or as local dependencies of specific JavaScript projects. NPM installs the dependencies described in the \mintinline{text}{package.json} file, and local dependencies are saved in the \mintinline{text}{node_modules} directory in the project's root. An example of NPM's usage is shown in \coderef{code:background-npm-usage-example}. The registry currently contains more than 1 million modules and serves more than 11 billion downloads per week.

\begin{code}
\begin{bashinline}
$ cat index.js
const abs = require('abs');
console.log(abs('/foo'));
$ node index.js
Error: Cannot find module 'abs'
$ npm i abs
$ node index.js
/foo
\end{bashinline}
\caption[NPM usage example]{\textbf{NPM usage example} - Requiring the module before installing the dependencies will fail since the library is not available. After installing the dependencies locally under the \mintinline{text}{node_modules} directory, the \mintinline{text}{abs} module can be imported using the \mintinline{text}{require} function.}
\label{code:background-npm-usage-example}
\end{code}

\section{TypeScript}
\label{sec:background-typescript}
TypeScript is a programming language developed by Microsoft that compiles to plain JavaScript. Its syntax is a superset of JavaScript, which means in practice that `every JavaScript program is also a TypeScript program' \citep{typescript}. It provides optional type annotations that enable features like auto-completion and static checking. Type annotations are erased and variable names are preserved after compiling, which produces a JavaScript output that is very similar to the TypeScript source code. The extension for TypeScript files is \mintinline{text}{.ts}. An example of some of the basic TypeScript features is provided in \coderef{code:background-typescript-and-javascript}. A complete specification of the language can be found in the TypeScript Language Specification document \citep{typescript}. This section will only cover the most relevant aspects of the language.

\begin{code}
\begin{bashinline}
$ cat index.js
function f(a, b) { return a + b; }
f(1, 2);
$ npm i -g typescript
$ mv index.js index.ts
$ tsc index.ts
$ cat index.js
function f(a, b) { return a + b; }
f(1, 2);
$ cat index-with-type-annotations.ts
function f(a: number, b: number) : number { return a + b; }
f(1, 2);
$ tsc index-with-type-annotations.ts
$ cat index-with-type-annotations.js
function f(a, b) { return a + b; }
f(1, 2);
\end{bashinline}
\caption[TypeScript compilation example]{\textbf{TypeScript compilation example} - A plain JavaScript file is renamed into a TypeScript file. The TypeScript compiler compiles it into plain JavaScript again. Secondly, a \mintinline{text}{.ts} file with type annotations gets compiled into a common JavaScript file with no annotations.}
\label{code:background-typescript-and-javascript}
\end{code}

\subsection{Types}
The type analysis performed by TypeScript only happens at compile-time, which means that the compiled JavaScript has no additional overhead at run-time.

\subsubsection{Any Type}
The \mintinline{text}{any} type closely resembles an ordinary JavaScript value. It has been introduced to support values with no type annotations. It can be used to represent any JavaScript value.
Minimal static checking is performed on values of the type \mintinline{text}{any}: properties with any name can be accessed, methods with any name and any argument list can be invoked, and the value can be invoked as a function.

\subsubsection{Primitive Types}
As stated in the specification, `The primitive types are the \mintinline{text}{Number}, \mintinline{text}{Boolean}, \mintinline{text}{String}, \mintinline{text}{Symbol}, \mintinline{text}{Void}, \mintinline{text}{Null} and \mintinline{text}{Undefined} types and all user defined enum types.' \citep{typescript}. The types \mintinline{text}{Number}, \mintinline{text}{Boolean}, \mintinline{text}{String}, \mintinline{text}{Symbol}, \mintinline{text}{Null} and \mintinline{text}{Undefined} closely match the equivalent JavaScript types. The \mintinline{text}{Void} type stands for the absence of a value, for example when a function returns no value.

\subsubsection{Object Types}
The following types are considered to be object types: class and interface type references, array types, tuple types, function types, and constructor types. Array types can be written as \mintinline{text}{string[]} or as \mintinline{text}{Array<string>}.

The signature of a function can be represented with the \mintinline{text}{Function} type. It is useful for the very common case where a callback is an argument of another function. For example, the following function has two arguments, a \mintinline{text}{string} and a callback that receives a \mintinline{text}{number} and returns a \mintinline{text}{boolean}: \mintinline{text}{function f(a: string, c: (x: number) => boolean) { // ... }}.

Interfaces can be used to parametrize object types by defining the types and signatures of properties and methods of a value. It is important to mention that interfaces are not exported in any way to the compiled JavaScript. They are only useful for static type checking at compile-time and for the code intelligence features provided by the IDE.

Classes behave in the way expected from other object-oriented programming languages. \figref{fig:background-typescript-class-example} shows an example of a TypeScript class implementation and its corresponding JavaScript file after compilation.

\input{figures/background/typescript/classes/class-example.tex}

\subsubsection{Union \& Intersection Types}
A Union Type represents a value that may have only one of multiple possible types. It is represented as \mintinline{text}{let a: string | number}, meaning that \mintinline{text}{a} is either of type \mintinline{text}{string} or \mintinline{text}{number}. On the other hand, the Intersection Type stands for a value that has two types simultaneously. It can be useful for defining a value that implements two interfaces at the same time.

\subsection{Declaration Files}
\label{sec:declaration-files-background}
Declaration files contain a typed description of an external JavaScript library's API. They allow the IDE to use code intelligence features like auto-completion and help avoid misusing the library by declaring the specific interfaces and types required by the library. They are necessary for integrating new TypeScript projects with existing JavaScript libraries that are still developed in plain JavaScript. Declaration files have the extension \mintinline{text}{.d.ts}. \figref{fig:background-declaration-files-calculator-example} provides an example where a new TypeScript program uses the declaration file of an external JavaScript library called \mintinline{text}{calculator}.
The example library \mintinline{text}{calculator} is not part of the TypeScript project; it is imported at run-time, after compilation, when the JavaScript code is executed.

\input{figures/background/typescript/declaration-files/calculator-example.tex}

It is important to mention that the compiler and the IDE will blindly trust the declaration file. No run-time checks are performed on the declaration file and the JavaScript code itself. TypeScript is not even aware whether the JavaScript library corresponding to the declaration file exists. \figref{fig:background-declaration-files-calculator-error-example} shows a scenario where a declaration file does not match the JavaScript implementation. The declaration file states that the types \mintinline{text}{number} and \mintinline{text}{string} are allowed for the \mintinline{text}{sum} method, but the JavaScript code explicitly throws an error when the arguments are not of type \mintinline{text}{number}. The IDE, however, will provide suggestions based on the inaccurate declaration file. The compiler itself will use the declaration file to perform type checking. Therefore, the code is compiled without errors and the error is only encountered at run-time.

\input{figures/background/typescript/declaration-files/calculator-example-error.tex}

It is clear that inaccurate declaration files will cause errors that are tedious and difficult to track and debug. Discrepancies with the actual JavaScript implementation are going to frustrate developers, stopping them from using TypeScript as an alternative to JavaScript. Hence the importance of accurate declaration files that match the JavaScript implementation.

There are two possibilities for importing declaration files of NPM packages:
\begin{itemize}
    \item If the JavaScript library is written in TypeScript, then the declaration file will be generated automatically by the compiler using the \mintinline{text}{--declaration} flag. The generated declaration file is bundled with the JavaScript distribution files and uploaded to the NPM registry as a part of the package. The \mintinline{text}{types} field in the \mintinline{text}{package.json} file should point to the declaration file. Consequently, the declaration file and the JavaScript code will always be synchronized.
    \item If the library is not written in TypeScript, the declaration files have to be manually created and uploaded to the DefinitelyTyped repository \citep{definitely-typed-repository}. After a successful pull request, the files will be automatically published to the \mintinline{text}{@types} organization on NPM \citep{types-organization-npm}.
\end{itemize}

\subsubsection{Templates}
TypeScript provides different templates for writing declaration files \citep{typescript-declaration-files-templates}. The relevant templates are the following:
\begin{itemize}
    \item \mintinline{text}{module.d.ts}: It is used when the exported module is a JavaScript object containing several properties and methods.
    \item \mintinline{text}{module-class.d.ts}: This template is used when the exported function is supposed to be used as a constructor with the \mintinline{text}{new} operator.
    \item \mintinline{text}{module-function.d.ts}: It is used when the module exports only a function.
\end{itemize}

\subsection{Definitely Typed}
The declaration files for NPM packages that do not already bundle them are uploaded to the DefinitelyTyped repository \citep{definitely-typed-repository}. It currently contains declaration files for more than 6000 modules.
The developers community adds new declaration files or updates existing ones by creating a pull request with the changes. If it is approved, the changes will be automatically pushed to the NPM registry through Microsoft's \mintinline{text}{types-publisher} tool \citep{types-publisher}. \subsubsection{Consumption} As shown in \coderef{code:background-definitely-typed-consumption}, the declaration files can be simply imported by running \mintinline{text}{npm install --save-dev @types/THE-MODULE}. NPM will install the files under the \mintinline{text}{node_modules/@types} directory and the compiler will include the declaration files automatically. \begin{code} \begin{bashinline} $ npm install --save-dev @types/abs $ cat node_modules/@types/abs/index.d.ts // Type definitions for abs 1.3 // Project: https://github.com/ionicabizau/abs // Definitions by: Aya Morisawa <https://github.com/AyaMorisawa> // Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped /** * Compute the absolute path of an input. * @param input The input path. */ declare function Abs(input: string): string; export = Abs; \end{bashinline} \caption[DefinitelyTyped files consumption]{\textbf{DefinitelyTyped files consumption} - The declaration files are installed as a common NPM package under the \mintinline{text}{@types} directory.} \label{code:background-definitely-typed-consumption} \end{code} \section{Jalangi} \label{sec:jalangi} Jalangi is a framework for performing dynamic analysis in JavaScript with configurable modules \citep{DBLP:conf/sigsoft/SenKBG13}. It first instruments the code and then executes the instrumented code adding the analysis module. Jalangi offers different callbacks that match virtually every JavaScript operations. The relevant callbacks for this work are listed below with short descriptions that have been directly copied from Jalangi's documentation \citep{jalangi-docs}: \begin{itemize} \item \textbf{binaryPre()}: `This callback is called before a binary operation.' \item \textbf{declare()}: `This callback is triggered at the beginning of a scope for every local variable declared in the scope, for every formal parameter, for every function defined using a function statement, for arguments variable, and for the formal parameter passed in a catch statement.' \item \textbf{getField()}: `This callback is called after a property of an object is accessed.' \item \textbf{putFieldPre()}: `This callback is called before a property of an object is written.' \item \textbf{functionEnter()}: `This callback is called before the execution of a function body starts.' \item \textbf{functionExit()}: `This callback is called when the execution of a function body completes' \item \textbf{invokeFun()}: `This callback is called after a function, method, or constructor invocation.' \item \textbf{invokeFunPre()}: `This callback is called before a function, method, or constructor invocation.' \item \textbf{unaryPre()}: `This callback is called before a unary operation.' \item \textbf{write()}: `This callback is called before a variable is written.' \end{itemize} \figref{fig:background-jalangi} provides an example for the usage of Jalangi where invocations of a function are detected and printed by stdout. \input{figures/background/typescript/jalangi/jalangi-example.tex}
{ "alphanum_fraction": 0.7965239018, "avg_line_length": 90.9973684211, "ext": "tex", "hexsha": "89838f31ca3de9474a470d54c0f8ae85e3e84528", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-31T18:47:19.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-31T18:47:19.000Z", "max_forks_repo_head_hexsha": "972335ae7cdf6778e62feec260579374bc2c4bc0", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "proglang/tsd-generation-report", "max_forks_repo_path": "chapters/3-background.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "972335ae7cdf6778e62feec260579374bc2c4bc0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "proglang/tsd-generation-report", "max_issues_repo_path": "chapters/3-background.tex", "max_line_length": 970, "max_stars_count": null, "max_stars_repo_head_hexsha": "972335ae7cdf6778e62feec260579374bc2c4bc0", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "proglang/tsd-generation-report", "max_stars_repo_path": "chapters/3-background.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7866, "size": 34579 }
\section{Conclusions}

A framework for up-scaling DEM simulations is presented in this article. Up-scaling is achieved by matching homogenized stress-strain curves from REV-scale DEM simulations to single-element continuum models using PSO and LMA optimization algorithms. A Drucker-Prager plasticity model with ductile damage is implemented in the CDM model to empirically capture the effect of the degradation (damage) of the NFR as deformation takes place.

The Drucker-Prager model with ductile damage is shown to be a reasonable CDM approach to represent the NFR in a continuum context, including the effects of pressure-dependent yield and a triaxiality-based damage initiation criterion. Compared to a full DEM simulation, the CDM model shows a good fit pre-damage, but is unable to emulate the subtle post-yield oscillations arising from non-continuous yielding in the NFR. Most importantly, with this up-scaling framework, results very comparable to full DEM solutions ($<5\%$ error) can be obtained with the CDM method, but with two to three orders of magnitude less computational time.

The implementation of this up-scaling framework is presented in an illustrative context only. This specific formulation is restricted to small strains, is purely mechanical (no thermo-hydro-mechanical coupling), and considers only 2D. However, the proposed framework can accommodate the large-strain case, the hydro-mechanically coupled case, and the 3-D case by using appropriately designed modules. We suspect that in these more complex cases, the up-scaling from DEM to CDM will yield even greater increases in computational efficiency.
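As a purely illustrative sketch of the curve-matching step (ours, not from the article: the bilinear stress-strain model, the parameter values and the synthetic ``DEM'' curve below are stand-ins, and only a Levenberg-Marquardt fit in the spirit of the LMA stage is shown), the continuum parameters can be matched to a homogenized stress-strain curve with a least-squares routine:

\begin{verbatim}
# Fit a toy elastic / perfectly-plastic model to a (synthetic) DEM curve.
import numpy as np
from scipy.optimize import least_squares

def cdm_stress(params, strain):
    E, sigma_y = params                      # Young's modulus, yield stress
    return np.minimum(E * strain, sigma_y)   # stand-in for the CDM response

def residuals(params, strain, dem_stress):
    return cdm_stress(params, strain) - dem_stress

strain = np.linspace(0.0, 0.01, 50)              # assumed strain range
dem_stress = np.minimum(2.0e9 * strain, 1.2e7)   # stand-in for DEM output
fit = least_squares(residuals, x0=[1.0e9, 1.0e7],
                    args=(strain, dem_stress), method="lm")
print(fit.x)   # recovered (E, sigma_y)
\end{verbatim}

\clearpage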
{ "alphanum_fraction": 0.8166172107, "avg_line_length": 84.25, "ext": "tex", "hexsha": "1ffa57d1b95b06209255c6a4056f0341c47343fa", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "yetisir/up-scaling-dem-simulations", "max_forks_repo_path": "section_conclusions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "yetisir/up-scaling-dem-simulations", "max_issues_repo_path": "section_conclusions.tex", "max_line_length": 567, "max_stars_count": null, "max_stars_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "yetisir/up-scaling-dem-simulations", "max_stars_repo_path": "section_conclusions.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 343, "size": 1685 }
% !TeX root = ../phd-1st-year-presentation.tex
% !TeX encoding = UTF-8
% !TeX spellcheck = en_GB

\section{The LINFA project}

\begin{frame}{The LINFA project}
    Smart drug restocking for hospital wards
    \begin{itemize}
        \item minimise the overall cost of ordering and stocking drugs
        \item predict drug usage and possible shortages
    \end{itemize}
    \vspace{2em}
    The idea:
    \begin{itemize}
        \item Build and solve a Markov Decision Process (MDP) model of the ward
        \begin{itemize}
            \item instantiated at runtime with the current state of the ward
        \end{itemize}
        \item suggest the optimal strategy to the user
    \end{itemize}
\end{frame}

\subsection{Solution architecture}
\begin{frame}{Solution architecture}
    At the end of the current day
    \begin{itemize}
        \item a new PRISM\footnote{Kwiatkowska, M., Norman, G., \& Parker, D. (2011). PRISM 4.0: Verification of probabilistic real-time systems. In Computer aided verification (pp. 585-591). Springer Berlin/Heidelberg.} MDP model is instantiated
        \begin{itemize}
            \item through a Java module, with the ward's current state
        \end{itemize}
    \end{itemize}
    \vspace{0.5em}
    The MDP model captures:
    \begin{itemize}
        \item the stochastic evolution of the ward during each day
        \item non-deterministic choices (i.e. drug orders)
    \end{itemize}
    \begin{center}\scalebox{0.8}{\input{img/mdp_model_structure}}\end{center}
    Evaluate the optimal choice for the current day
    \begin{itemize}
        \item i.e. the choice that, \textit{on average},\\ minimises the \textit{overall cost} after three days
    \end{itemize}
\end{frame}

\subsection{Specifications and restrictions}
\begin{frame}{Specifications and restrictions}
    Ward
    \begin{itemize}
        \item one ward with fixed posology
        \item fixed ward capacity
        \item fixed drug storage capacity
    \end{itemize}
    Drug
    \begin{itemize}
        \item only one kind of drug
    \end{itemize}
    Stochastic characterisation
    \begin{itemize}
        \item arriving patients (scheduled/emergency)
        \item leaving patients
        \item drug consumption for each patient
    \end{itemize}
    Non-deterministic choices
    \begin{itemize}
        \item whether and how many drug units to reorder $\{0, 10, 20, 30, 40\}$
    \end{itemize}
    Cost function
    \begin{itemize}
        \item cost of reordering each drug unit
        \item stocking cost for each drug unit
        \item cost for emergency reorders
    \end{itemize}
\end{frame}
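\begin{frame}[fragile]{Illustrative sketch of the decision criterion}
    A toy Python sketch of the three-day expected-cost minimisation (our illustration only: the usage distribution, cost parameters and capacity are made-up numbers; the real model is the PRISM MDP described above):
    \scriptsize
\begin{verbatim}
import functools

ORDERS = [0, 10, 20, 30, 40]                 # non-deterministic order choices
USAGE = {5: 0.2, 10: 0.5, 15: 0.3}           # assumed daily usage distribution
UNIT, STOCKING, EMERGENCY = 1.0, 0.1, 50.0   # assumed cost parameters
CAPACITY = 40

@functools.lru_cache(maxsize=None)
def expected_cost(stock, days_left):
    if days_left == 0:
        return 0.0
    best = float("inf")
    for order in ORDERS:                 # evaluate every choice ...
        cost = UNIT * order
        for used, p in USAGE.items():    # ... against the stochastic usage
            level = min(stock + order, CAPACITY) - used
            day = STOCKING * max(level, 0)
            if level < 0:                # shortage -> emergency reorder
                day, level = day + EMERGENCY, 0
            cost += p * (day + expected_cost(level, days_left - 1))
        best = min(best, cost)           # keep the cheapest strategy
    return best

print(expected_cost(20, 3))
\end{verbatim}
\end{frame}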
{ "alphanum_fraction": 0.6484171322, "avg_line_length": 34.4230769231, "ext": "tex", "hexsha": "e7342ec7fb26ab1e2c524e89e7655f44e96867d1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "oddlord/uni", "max_forks_repo_path": "phd/committee/first-year/presentation/body/linfa.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "oddlord/uni", "max_issues_repo_path": "phd/committee/first-year/presentation/body/linfa.tex", "max_line_length": 246, "max_stars_count": null, "max_stars_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "oddlord/uni", "max_stars_repo_path": "phd/committee/first-year/presentation/body/linfa.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 719, "size": 2685 }
% jam 2004-09-05
\section{Barycentric coordinates}
\label{sec:barycentric-coordinates}

In this section, I describe how to compute the barycentric coordinates of a point $\q$ with respect to an $n$-simplex in a finite dimensional inner product space $\Vspace$ (with $\dimension (\Vspace ) = m \ge n$) --- and the derivatives with respect to the vertex positions.

Let $S$ be the geometric $n$-simplex with vertex positions $\{\p_0 \ldots \p_n\}$ in $\Vspace$. The {\it span} of $S$, $\affine_span ( S )$, is the affine span of its vertex positions $\affine_span \{\p_0 \ldots \p_n\}$, that is, the set of points $\q$ such that $\q = \sum_{j=0}^{n} b_j \p_j $ where $1 = \sum_{j=0}^{n} b_j $. The $b_j$ are {\it barycentric coordinates} of $\q$ with respect to $S$. Barycentric coordinates of an arbitrary point $\q \in \Vspace$ are the barycentric coordinates of the closest point in $\affine_span(S)$.

If $S$ is non-degenerate (the dimension of its span is $n$), then the barycentric coordinates of any point are unique. Assume in what follows that $S$ is non-degenerate. When faced with a degenerate simplex, there are several options. One can use the minimum norm $\b$. It is often possible to use the barycentric coordinates with respect to the 'largest' non-degenerate face instead. We may also want to choose $\b$ to be as close to convex, $0 \le \b_j \le 1$, as possible. {\it (How?)}

Let $\{ \v_0 \ldots \v_{n-1} \} = \{ (\p_0 - \p_n) \ldots (\p_{n-1} - \p_n)\},$ and let $\Vmap = \sum_{j=0}^{n-1} ( \v_j \otimes \e_j^{\Reals^n} )$, the linear map from $\Reals^n$ to $\Vspace$ whose 'columns' are the $\v_j$. Let $\b = \left( \b_0 \ldots \b_{n-1} \right) \in \Reals^n$ be the unconstrained vector of the first $n$ barycentric coordinates. For $\q \in \affine_span ( S )$, $\q = \Vmap \b + \p_n$ and $\b = \Vmap^{-1} \left ( \q - \p_n \right)$. For arbitrary $\q \in \Vspace$, $\b = \Vmap^{-} (\q - \p_n)$ (see \autoref{sec:Inverses-and-pseudo-inverses}).
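As a concrete illustration of the last formula, the following sketch (ours, not part of the original text; it assumes \texttt{numpy} and coordinate representations of the vertices) computes $\b$ with a pseudo-inverse and appends the last coordinate:

\begin{verbatim}
# Barycentric coordinates of q with respect to the simplex {p_0, ..., p_n}:
# b = pinv(V) (q - p_n), with the last coordinate b_n = 1 - sum(b).
import numpy as np

def barycentric(q, vertices):
    p = np.asarray(vertices, dtype=float)   # shape (n + 1, m)
    q = np.asarray(q, dtype=float)
    V = (p[:-1] - p[-1]).T                  # columns v_j = p_j - p_n, shape (m, n)
    b = np.linalg.pinv(V) @ (q - p[-1])     # least-squares / minimum-norm solution
    return np.append(b, 1.0 - b.sum())

# Example: centroid of a triangle in the plane -> (1/3, 1/3, 1/3).
print(barycentric((1/3, 1/3), [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))
\end{verbatim}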
{ "alphanum_fraction": 0.6695214106, "avg_line_length": 33.0833333333, "ext": "tex", "hexsha": "75baf9f018a8ed13291034d44fc25f4feb33ee77", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "palisades-lakes/les-elemens", "max_forks_repo_path": "doc/old/fosm/barycentric-coordinates.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "palisades-lakes/les-elemens", "max_issues_repo_path": "doc/old/fosm/barycentric-coordinates.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "palisades-lakes/les-elemens", "max_stars_repo_path": "doc/old/fosm/barycentric-coordinates.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 683, "size": 1985 }
\chapter{Image Feature Extraction}

We defined a subset of features with a high discriminative capacity to be extracted from the set of cutouts. All selected features describe texture information of the images and are computed without any segmentation preprocessing.

\input{R/average-intensity.tex}

\begin{itemize}
    \item Maximum Intensity Value;
    \item Minimum Intensity Value.
\end{itemize}

\section{Local Binary Pattern}

The Local Binary Pattern (LBP) is a derivation of the Texture Unit defined by \cite{wang_texture_1990}. From the spatial domain, we can define the LBP feature vector $l$ for each pixel $a$ by comparing the pixel to each of its 8 neighbors. If the neighbor's value is greater than the center pixel's value, we assign $1$ to that position in the vector; otherwise, we write $0$. For example, consider that we have

\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$a_{i - 1, j + 1} = 194$ & $a_{i, j + 1} = 144$ & $a_{i + 1, j + 1} = 132$ \\
\hline
$a_{i - 1, j} = 180$ & $a_{i, j} = 136$ & $a_{i + 1, j} = 136$ \\
\hline
$a_{i - 1, j - 1} = 167$ & $a_{i, j - 1} = 125$ & $a_{i + 1, j - 1} = 129$ \\
\hline
\end{tabular}
\end{center}

The feature vector will be

\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$l_{i - 1, j + 1} = 1$ & $l_{i, j + 1} = 1$ & $l_{i + 1, j + 1} = 0$ \\
\hline
$l_{i - 1, j} = 1$ & & $l_{i + 1, j} = 0$ \\
\hline
$l_{i - 1, j - 1} = 1$ & $l_{i, j - 1} = 0$ & $l_{i + 1, j - 1} = 0$ \\
\hline
\end{tabular}
\end{center}

We represent it as a matrix with the center position $l_{i, j}$ left empty. We can store the matrix as a linear vector by following the 8 neighbors in one direction. For example, if we start with $l_{i, j + 1}$ and go clockwise, we will have

\begin{center}
\begin{tabular}{|c|}
\hline
$l_{i, j + 1} = 1$ \\
\hline
$l_{i + 1, j + 1} = 0$ \\
\hline
$l_{i + 1, j} = 0$ \\
\hline
$l_{i + 1, j - 1} = 0$ \\
\hline
$l_{i, j - 1} = 0$ \\
\hline
$l_{i - 1, j - 1} = 1$ \\
\hline
$l_{i - 1, j} = 1$ \\
\hline
$l_{i - 1, j + 1} = 1$ \\
\hline
\end{tabular}
\end{center}

In total, we have $2^8 = 256$ possible values for the LBP vector $l$. The histogram of LBP codes in an image should reveal its texture information. A minimal computational sketch of this procedure is given at the end of this chapter.

\input{R/local-binary-pattern.tex}

\section{Histogram of Oriented Gradients (HOG)}

\section{Gray-Level Co-Occurrence Matrix (GLCM)}

\begin{itemize}
    \item 7 Hu Moments;
    \item Haralick's Texture Features (Haralick, 1979):
    \begin{itemize}
        \item Angular Second Moment (Energy);
        \item Contrast;
        \item Correlation;
        \item Variance;
        \item Entropy;
        \item Maximal Correlation Coefficient;
        \item Inverse Difference Moment (Homogeneity);
        \item Sum Average, Sum Variance and Sum Entropy;
        \item Difference Variance and Difference Entropy;
        \item Information Measure of Correlation I and II.
    \end{itemize}
\end{itemize}

\input{R/range.tex}
\input{R/variance.tex}
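As announced in the Local Binary Pattern section, the following minimal sketch (ours; the clockwise scanning order starting at the top neighbor, the function names and the use of \texttt{numpy} are assumptions, and border pixels are simply skipped) computes the LBP code of every interior pixel and the 256-bin histogram used as the texture descriptor:

\begin{verbatim}
# Minimal LBP sketch: 8-neighbor comparison, clockwise from the top neighbor.
import numpy as np

OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
           (1, 0), (1, -1), (0, -1), (-1, -1)]

def lbp_image(img):
    img = np.asarray(img)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(OFFSETS):
                if img[i + di, j + dj] > img[i, j]:   # neighbor > center -> 1
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

def lbp_histogram(img):
    # 256-bin histogram of LBP codes: the texture descriptor of the image.
    return np.bincount(lbp_image(img).ravel(), minlength=256)
\end{verbatim}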
{ "alphanum_fraction": 0.6397355858, "avg_line_length": 22.5041322314, "ext": "tex", "hexsha": "6f189129b68337428eb34db804cb3b922b2ab162", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6a673025a8f2c9dfc72e950db4ca0ca185df0f4b", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "rgaiacs/ufop_masters_dissertation", "max_forks_repo_path": "texture.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6a673025a8f2c9dfc72e950db4ca0ca185df0f4b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "rgaiacs/ufop_masters_dissertation", "max_issues_repo_path": "texture.tex", "max_line_length": 79, "max_stars_count": 1, "max_stars_repo_head_hexsha": "6a673025a8f2c9dfc72e950db4ca0ca185df0f4b", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "rgaiacs/ufop_masters_dissertation", "max_stars_repo_path": "texture.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-02T16:09:01.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-02T16:09:01.000Z", "num_tokens": 1009, "size": 2723 }
\section{Fat Tails}

Roughly speaking, the consequence of ``fat tails'' is that rare events tend to play a disproportionately large role in determining the statistical properties of a sample. This is in contrast to ``thin tails'', where, as the sample size increases, single observations quickly cease to modify the statistical properties of the sample. For ``thin tails'', extreme observations tend to result from the combination of several very unlikely events. For ``fat tails'', an extreme observation tends to result from a single very unlikely event.

\subparagraph{Thin Tails} Two people are randomly selected, and their combined height is 4 meters. Most likely this resulted from the selection of two people who are two meters tall, rather than one person that is 10\,cm and another that is 3.90\,m tall.

\subparagraph{Fat Tails} Two people are randomly selected, and their combined net worth is \$40M. Having selected two people with a net worth of \$20M each is less likely than having selected one person with a net worth of \$200k and another person with a net worth of \$39.8M.

Fat tails do not imply that rare events are more frequent, but that they have a greater impact when they do happen. In fact, ``fattening'' the tails of a Gaussian distribution results in a larger number of observations within one standard deviation.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{fattails01.png}
\caption{Probability densities of two independent thin-tailed and thick-tailed random variables (brazenly copied from Taleb, 2020). Compare to the plot of $L^p$ norms. For the thin-tailed random variables, the observation of a particular sum is most likely to result from a balanced contribution from both random variables. For the fat-tailed random variables, the observation of a particular sum is most likely to result from the contribution of one of the variables.}
\label{fig:fattails01}
\end{figure}

\subsection{Consequences of Fat Tails}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{fattailstudentt02.png}
\caption{Behavior of the sample mean as a function of sample size for standard Student-t distributions with different degrees of freedom.}
\label{fig:fattails02}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{fattailsstudentt01.png}
\caption{Behavior of the variance of the sample mean as a function of sample size for standard Student-t distributions with different degrees of freedom.}
\label{fig:fattails03}
\end{figure}

\subparagraph{The Law of Large Numbers works too Slowly}

\subparagraph{The Sample Mean will rarely correspond to the Distribution Mean} For example, for an 80/20 power law, 92\% of the observations will fall below the distribution mean. The sample mean will tend to underestimate the distribution mean, because the distribution mean is heavily influenced by rare observations that will tend to be underrepresented in the sample.

\subparagraph{Metrics such as Sample Mean and Sample Variance will be Unusable}

\subparagraph{In finance, metrics like the Sharpe ratio are unusable}

\subparagraph{Gauss-Markov Theorem fails} Therefore, linear least squares regressions do not work.

\subparagraph{Maximum Likelihood Methods can still work} For example, in the case of a Pareto distribution, it is possible to fit the tail exponent using maximum likelihood methods and to estimate the mean from there. Direct observation of the mean would be misleading. The tail exponent intelligently extrapolates the fat tails of the distribution.
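To illustrate this last point, the following Python sketch fits the tail exponent of simulated Pareto data by maximum likelihood and derives the mean from it; the parameter values and sample size are arbitrary choices for illustration only.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha_true, x_min, n = 1.2, 1.0, 10_000      # heavy tail: alpha close to 1

# Pareto samples via inverse-transform sampling
x = x_min * (1.0 - rng.random(n)) ** (-1.0 / alpha_true)

# naive plug-in estimate of the mean
mean_naive = x.mean()

# maximum-likelihood estimate of the tail exponent, and the implied mean
alpha_hat = n / np.sum(np.log(x / x_min))
mean_mle = alpha_hat * x_min / (alpha_hat - 1.0)

true_mean = alpha_true * x_min / (alpha_true - 1.0)   # equals 6 here
print(true_mean, mean_naive, mean_mle)
# the sample mean typically falls short of 6 and fluctuates wildly between
# runs, while the tail-exponent-based estimate is far more stable
\end{verbatim}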
\subparagraph{Absence of Evidence $\neq$ Evidence of Absence}

\subparagraph{PCA will produce spurious factors and loadings}

\subparagraph{Method of Moments does not work} Approximating a distribution by matching its moments does not work when higher moments are undefined or cannot be reliably estimated.

\subparagraph{There is no ``typical'' large deviation}

\subsection{Maximum to Sum}
The ``maximum to sum'' (MS) plot allows one to see how the ratio of the observed maximum to the sum, for a particular moment, behaves as the number of observations increases.

\subsection{Maximum Domain of Attraction}
The ``maximum domain of attraction'' is, so to speak, the ``right endpoint of the distribution'':
\begin{equation}
x^* = \sup\{x: F(x) < 1 \}
\end{equation}

\subsection{Hidden Tail, Problems in Estimating Moments}
The Glivenko-Cantelli theorem guarantees uniform convergence of the empirical CDF to the true CDF; however, the empirical distribution is necessarily bounded by the values of the minimum and maximum observations. This results in an unobserved contribution to the moments of order $p>0$ that does not necessarily have to be negligible. To illustrate, take $K_n$ to be the maximum observed value:
\begin{equation}
\mathbb{E}(X^p) =
\underbrace{\int_{L}^{K_n} x^p \phi(x)\,\mathrm{d}x}_{\text{observed}} +
\underbrace{\int_{K_n}^{\infty} x^p \phi(x)\,\mathrm{d}x}_{\text{unobserved}}
\end{equation}
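The maximum-to-sum diagnostic described above can be sketched in a few lines of Python; the distribution choices and sample size are arbitrary, and in practice the running ratio would be plotted against the sample size rather than printed.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, p = 100_000, 2          # sample size and moment under study

def ms_ratio(samples, p):
    """Running ratio max(|x|^p) / sum(|x|^p) as the sample grows."""
    xp = np.abs(samples) ** p
    return np.maximum.accumulate(xp) / np.cumsum(xp)

thin = rng.normal(size=n)            # thin tails: ratio decays towards zero
fat = rng.standard_t(df=2, size=n)   # df = 2: the second moment is infinite,
                                     # so the largest observation keeps
                                     # dominating the sum
print(ms_ratio(thin, p)[-1], ms_ratio(fat, p)[-1])
\end{verbatim}

For the fat-tailed sample the ratio refuses to die out, which is another way of saying that the unobserved contribution above the sample maximum $K_n$ cannot be neglected.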
{ "alphanum_fraction": 0.7877690606, "avg_line_length": 60.6219512195, "ext": "tex", "hexsha": "8c92807306cfc09afe23032949b8b56cf6d18e6c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jpbm/probabilism", "max_forks_repo_path": "notes/chapters/sections/stats_fattails.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jpbm/probabilism", "max_issues_repo_path": "notes/chapters/sections/stats_fattails.tex", "max_line_length": 468, "max_stars_count": null, "max_stars_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jpbm/probabilism", "max_stars_repo_path": "notes/chapters/sections/stats_fattails.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1151, "size": 4971 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CS624: Analysis of Algorithms
% Copyright 2015 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/beacon
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Question 8}

Assume you had some kind of super-hardware that, when given two lists of length $n$ that are sorted, merges them into one sorted list, and takes only $n^c$ steps where $c \geq 0$.
\begin{enumerate}[label=(\alph*)]
\item Write down a recursive algorithm that uses this hardware to sort lists of length $n$.
\item Write down a recurrence to describe the run time.
\item For what values of $c$ does this algorithm perform substantially better than $\mathcal{O}(n \log n)$? Why is it highly implausible that this kind of super-hardware could exist for these values of $c$?
\end{enumerate}

\subsection*{Solution}

\begin{enumerate}[label=(\alph*)]
\item Inspired by \textsc{MergeSort}, the proposed algorithm \textsc{Super-Merger-Sort} is given as Algorithm 3. The call to the super-hardware is performed by calling the \textsc{Super-Merge} method. Using this algorithm, we can sort an array of length $n$ by a top-level call of \textsc{Super-Merger-Sort}($A$, $1$, $n$).
\begin{algorithm}[H]
\caption{\textsc{Super-Merger-Sort}($A$, $p$, $r$)}
\begin{algorithmic}[1]
\If {$p < r$}
\State $q \leftarrow \lfloor \frac{p+r}{2} \rfloor$
\State \textsc{Super-Merger-Sort}($A$, $p$, $q$)
\State \textsc{Super-Merger-Sort}($A$, $q+1$, $r$)
\State \textsc{Super-Merge}($A$,$p$,$q$,$r$)
\EndIf
\end{algorithmic}
\end{algorithm}
\item As the original \textsc{MergeSort} algorithm has hardly been modified, the recursion tree will be the same as the recursion tree for \textsc{MergeSort} (Figure 3 of Lecture note 2), the only difference being that the runtime attributed to each level of the tree would this time be $n^c$.
\item Based on this fact, the total runtime of the proposed \textsc{Super-Merger-Sort} algorithm will be $\mathcal{O}(n^{c}\log n)$ for sorting an array of length $n$. The proposed algorithm would therefore perform substantially better than $\mathcal{O}(n \log n)$ when $c < 1$. This is of course highly implausible, because any such super-hardware would have to read all $2n$ elements of the two input lists at least once to merge them, and that alone is already $\mathcal{O}(n)$.
\end{enumerate}
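For completeness, a direct Python transcription of Algorithm 3 is sketched below; since no $n^c$ hardware exists, \textsc{Super-Merge} is mocked here by an ordinary linear-time two-way merge, so the sketch only demonstrates correctness, not the hypothetical running time.

\begin{verbatim}
def super_merge(a, p, q, r):
    # stand-in for the hypothetical n^c hardware: an ordinary O(n) merge of
    # the sorted sub-arrays a[p..q] and a[q+1..r] (1-indexed as in the text)
    left, right = a[p - 1:q], a[q:r]
    i = j = 0
    for k in range(p - 1, r):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

def super_merger_sort(a, p, r):
    if p < r:
        q = (p + r) // 2
        super_merger_sort(a, p, q)
        super_merger_sort(a, q + 1, r)
        super_merge(a, p, q, r)

data = [5, 2, 9, 1, 7, 3]
super_merger_sort(data, 1, len(data))
print(data)  # [1, 2, 3, 5, 7, 9]
\end{verbatim}

The recurrence describing the run time asked for in part (b) is $T(n) = 2\,T(n/2) + n^{c}$.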
{ "alphanum_fraction": 0.6965829559, "avg_line_length": 52.8043478261, "ext": "tex", "hexsha": "5b340bbcda0f59e5c205203393872469cae48f0d", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_path": "umb-cs624-2015s/src/tex/hw02/hw02q08.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_path": "umb-cs624-2015s/src/tex/hw02/hw02q08.tex", "max_line_length": 291, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_path": "umb-cs624-2015s/src/tex/hw02/hw02q08.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "num_tokens": 680, "size": 2429 }
\section{Case Study: Lorenz System}

The Lorenz system is defined by
\begin{align*}
\dot x &= \sigma (y-x)\\
\dot y &= x(\rho-z)-y\\
\dot z &= xy-\beta z \text,
\end{align*}
where $\sigma = 10$, $\beta = \frac 8 3$ and $\rho$ is the variable parameter. As a reduction operator for the creation of the bifurcation diagram
\[
\mathbf y \mapsto \{||\mathbf y(t) - \mathbf a||\ |\ t \in [0,2\pi),\ \mathbf y_3(t) = \rho + 7, \ \mathbf y'_3(t) > 0\}
\]
is used, where $\mathbf a = (\sqrt{\beta(\rho-1)}, \sqrt{\beta(\rho-1)}, \rho-1)^T$ is one of the fixed points of the system. It maps a solution to the distances between its intersections with the $z = \rho+7$ plane and the fixed point $\mathbf a$. This particular choice was made empirically, as most other choices result in changing numbers of intersections for single solution branches, in too many intersections, or in many lines overlapping one another.

The system features stable periodic solutions for $\rho \in \{99.65, 100.5, 160, 350\}$. These values are used as starting points for the creation of the bifurcation diagram. The general process of
\begin{itemize}
\item finding a stable solution
\item tracing it to a bifurcation point or until otherwise satisfied
\item doubling the period to find period doubling bifurcations
\item switching to the double period branch using perturbation
\end{itemize}
is the same as for the Rössler system.
%Moreover, this is possible using the same parameters. %distracting?
See Figures \ref{fig:lorenzfull} and \ref{fig:lorenzcut} for visual results. The tracing of the solutions for $\rho \to 0$ had to be suspended because it became too slow to be practical. See the discussion in \autoref{sec:outro} for details.

\newgeometry{top=0cm}
\begin{figure}
\centering\makebox[0pt]{\rotatebox{90}{\includegraphics{img/lorenzoverview.png}}}
\caption{
An overview of the Lorenz bifurcation diagram.
}
\label{fig:lorenzfull}
\end{figure}
\begin{figure}
\centering\makebox[0pt]{\rotatebox{90}{\includegraphics{img/lorenzcut80.png}}}
\caption{
The periodic orbits for $\rho=80$, together with an underlying trajectory obtained through forward integration (grey) in phase space. The colors of the periodic solutions match the bifurcation diagram.
}
\label{fig:lorenzcut}
\end{figure}
\restoregeometry
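For reference, the reduction operator can be evaluated numerically along the following lines. This Python/SciPy sketch integrates the system forward and collects the plane crossings, so it is applied to a forward-integrated trajectory rather than to the periodic orbits found by the continuation method, and all tolerances, initial conditions and the value of $\rho$ are illustrative choices only.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

sigma, beta, rho = 10.0, 8.0 / 3.0, 80.0
a = np.array([np.sqrt(beta * (rho - 1)), np.sqrt(beta * (rho - 1)), rho - 1])

def lorenz(t, y):
    x, yy, z = y
    return [sigma * (yy - x), x * (rho - z) - yy, x * yy - beta * z]

def crossing(t, y):
    # upward crossing of the plane z = rho + 7 (i.e. y_3' > 0)
    return y[2] - (rho + 7)
crossing.direction = 1

sol = solve_ivp(lorenz, (0, 200), [1.0, 1.0, rho + 6],
                events=crossing, rtol=1e-9, atol=1e-9)

pts = sol.y_events[0][sol.t_events[0] > 50]   # drop a transient
distances = np.linalg.norm(pts - a, axis=1)   # the reduced values
print(distances[:10])
\end{verbatim}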
{ "alphanum_fraction": 0.7404347826, "avg_line_length": 41.0714285714, "ext": "tex", "hexsha": "d7d33bd630b504b54a89be1ff94e8644b9821b35", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fcf289c7ef5f8500ebcb238e36c6a7ee9e054147", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "285714/ncm", "max_forks_repo_path": "doctheory/lorenz.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fcf289c7ef5f8500ebcb238e36c6a7ee9e054147", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "285714/ncm", "max_issues_repo_path": "doctheory/lorenz.tex", "max_line_length": 212, "max_stars_count": null, "max_stars_repo_head_hexsha": "fcf289c7ef5f8500ebcb238e36c6a7ee9e054147", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "285714/ncm", "max_stars_repo_path": "doctheory/lorenz.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 673, "size": 2300 }
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{listings} \title{EDA132 Project1 - Search} \author{Lewis Belcher (900429-3552), Axel Larsson (920515-0395)} \date{} \usepackage{natbib} \usepackage{graphicx} \begin{document} \maketitle \section{Introduction} \subsection{Outline} In this project we use a simple MiniMax implementation to play the game of Reversi with Othello rules. Figure \ref{fig:othello-start} shows how the game starts with Othello rules. Player black moves first and the turns alternate between players while moves are available (play is given back to the opposition if no moves are available to the current player). The game is won by the player with the most tiles of their colour on the board when there are no more legal moves for either player. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{othello-start.png} \caption{Reversi with Othello rules starting position.} \label{fig:othello-start} \end{figure} \subsection{Programming Language} Python 3.4 was used as the programming language for this project. \section{Implementation} The board, game, human players and AI were implemented in an object oriented way with a lot of abstraction between classes. The board is first created along with two players which are then used to create a game instance. This game instance has a method \texttt{play} which is then used to play the game. The code is divided into two modules, both residing in the package \texttt{othello}; \texttt{game} and \texttt{players}. The \texttt{game} module has the implementations for the board representation, given by the class \texttt{Board} and the \texttt{Game} class. This is also where the main method resides to actually play the game. The \texttt{players} module has classes representing both the AI player, running the MiniMax algorithm, and the \texttt{Human} player which prompts the user for which move to make. There is another package containing tests for the respective modules, to be found in the directory \texttt{/h/d5/f/dat11al1/othello/tests}. \subsection{Board} The board was based largely on a \emph{numpy} array. Free spaces are represented by 0's and taken spaces are represented by the integer value of each player (each of which has their own implemented value, see the following subsection). An ASCII representation of the board was implemented in this class where white is given by the ASCII letter \texttt{'O'} and black by \texttt{'*'}. An example of this is shown in Figure \ref{fig:ascii-board}. The file \texttt{/h/d5/f/dat11al1/othello/othello/game.py} contains the relevant code. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{ascii-board.png} \caption{ASCII representation of the Board class.} \label{fig:ascii-board} \end{figure} \subsection{Players} The player colours \textit{white} and \textit{black} have integer representations of 1 and -1 respectively. In doing this it becomes trivial to sum up across all values on the board to determine who has the advantage. We make use of this implementation in the MiniMax algorithm in that the sum multiplied by the integer representation of the player is what we want to maximize (the converse for the minimizing step). The code for these classes can be found in \texttt{/h/d5/f/dat11al1/othello/othello/players.py}. \subsection{Game} The class \texttt{Game} is what runs and monitors the entire game. It is passed a board on which to play and two players. It contains various methods including checking for legal moves and determining which tiles must be flipped for a given move. 
The board is updated upon every move and the game keeps track of whose turn it is (including handing over play if there are no legal moves) and the outcome of the game. The code for this class can be found in \texttt{/h/d5/f/dat11al1/othello/othello/game.py}. \subsection{MiniMax Algorithm} The MiniMax algorithm used was adapted from the book \textit{Artificial Intelligence: A Modern Approach} \cite{Russell:2003:AIM:773294}. The implementation can be found under the class \texttt{MiniMaxAI} in \texttt{/h/d5/f/dat11al1/othello/othello/players.py}. The notion is to maximise the utility for the colour given at instantiation (as the first argument) under the assumption that the opposing player will move optimally. In our implementation we use a simple heuristic that a higher number of flipped tiles is favorable (this is a very naive heuristic but time did not permit us to implement more advanced techniques such as favorable positions or killer moves). \subsection{Timing} The time limit for the search was implemented in a very straightforward way: \begin{enumerate} \item A time limit is set when creating an AI instance \item At the start of each search an attribute is set within the AI class which contains information of the current time \item After every step in the search \texttt{current time - time at start of the search} is checked to see if it exceeds the time limit set at instantiation, if it does then the search terminates \end{enumerate} \section{Usage} The path to the program root is \texttt{/h/d5/f/dat11al1/othello}. While standing in that directory in the terminal, execute \texttt{python3 othello -h} to get a useful help text as such: \begin{lstlisting} usage: othello [-h] [-v] [-t TIME] Play othello vs an AI. optional arguments: -h, --help show this help message and exit -v, --visualise visualise game board, default is to output only AI moves -t TIME, --time TIME time limit in seconds for each ply, default is 10s \end{lstlisting} The \texttt{-t} option is used to indicate the amount of allotted time in seconds for the AI before making a decision on each ply. The \texttt{-v} flag can be used to get a nice ASCII visualisation of the game board. \subsection{Requirements} The minimum requirements to execute the game is: \begin{itemize} \item Python3 \item numpy \end{itemize} If \texttt{virtualenv}, \texttt{setuptools} and \texttt{pip} is available on the system (which, unfortunately, they are not on the student computer system), these can be used to install the program in a virtual environment, but this is entirely optional. \subsection{Playing} While standing with the terminal in \texttt{/h/d5/f/dat11al1/othello}, type \texttt{python3 othello -v} to start the game with the ASCII visualisation. Add the \texttt{-t} option as explained in the previous section if a time limit other than 10 seconds is desired. If more machine-friendly output is required (i.e. no ASCII grid), the \texttt{-v} flag can be omitted. While playing, the black player always starts and is assumed to be a human typing in the coordinates on the command line. The white player is the AI and will respond with its move. \subsection{Testing} While standing with the terminal in \texttt{/h/d5/f/dat11al1/othello/tests}, type \texttt{python3 game\_test.py} and \texttt{python3 players\_test.py} to unit test the modules \texttt{game} and \texttt{players} respectively. \bibliographystyle{plain} \bibliography{references} \end{document}
{ "alphanum_fraction": 0.7736429771, "avg_line_length": 61.6206896552, "ext": "tex", "hexsha": "3406704e7a9a8ac8ec66f25525d8e6d59b34c6ea", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e10a15d3a70636ca6c94d778994463ee50f8bffa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AxelTLarsson/othello", "max_forks_repo_path": "doc/src/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e10a15d3a70636ca6c94d778994463ee50f8bffa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AxelTLarsson/othello", "max_issues_repo_path": "doc/src/main.tex", "max_line_length": 669, "max_stars_count": null, "max_stars_repo_head_hexsha": "e10a15d3a70636ca6c94d778994463ee50f8bffa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AxelTLarsson/othello", "max_stars_repo_path": "doc/src/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1747, "size": 7148 }
\chapter{Measuring \texorpdfstring{\ARaw}{ARaw}}
\label{chap:cpv:araw}

In order to calculate \dACP, \ARaw\ must first be measured for each mode.
This is the asymmetry in the \PLambdac\ and \APLambdac\ yields, as in
\cref{eqn:cpv:introduction:araw}.
The yields are measured with \chisq\ fits to the \PLambdac\ and \APLambdac\
mass spectra, in a similar manner to the fit to the charge-combined sample
discussed in \cref{chap:cpv:prelim_fits}.
The weights from the kinematic weighting, from
\cref{chap:cpv:kinematic_weighting}, and the phase space efficiency
correction, from \cref{chap:cpv:phsp}, are used in the fits such that the bin
contents $b_{i}$, to which the fit model is compared, are
\begin{equation}
  b_{i} = \sum_{j = 1}^{N_{i}} \frac{%
    w_{j}(\pT^{\PLambdac}, \Eta^{\PLambdac}, \pT^{\Pproton}, \Eta^{\Pproton})
  }{%
    \eff_{j}(\msqphm, \msqhh, \thetap, \phip, \phihh)
  },
  \label{eqn:cpv:araw:total_weights}
\end{equation}
where $N_{i}$ is the number of candidate decays in the $i$th \phh\ mass bin,
$w_{j}$ is the per-candidate kinematic weight for the $j$th candidate in the
$i$th bin, and $\eff_{j}$ is the efficiency.
Each weight is scaled by a factor $B = \sum_{i} b_{i}/\sum_{i} b_{i}^{2}$ for
the reasons given in \cref{chap:cpv:kinematic_weighting:stats}.
The variance on the bin contents $\unc{b_{i}}$ is given by the effective
number of entries, as in \cref{eqn:cpv:kinematic_weighting:neff}
\begin{equation}
  \unc{b_{i}}^{2} = \neff = B\sum_{i}^{N} b_{i}.
\end{equation}
For \ppipi\ the kinematic weights are those described in
\cref{chap:cpv:kinematic_weighting}, whilst for \pKK\ they are unity.

In an attempt to reduce the effects of experimenter bias, the fitted central
values of \ARaw\ are blinded.
To be able to assess the quality of the fit results without unblinding the
result, only the fit to the \PLambdac\ data is inspected.
For the \APLambdac\ data, only the pull distribution of the fit with respect
to the data is inspected, to indicate the fit quality.
The specific method used to blind the central value is discussed in
\cref{chap:cpv:araw:blinding}.

\section{Simultaneous fit}
\label{chap:cpv:araw:simultaneous_fit}

Fits are performed simultaneously on the charge-separated \PLambdac\ and
\APLambdac\ samples, measuring \ARaw\ by including it as a parameter in the
fit.
This is advantageous as it increases the statistical sensitivity of the
measurement, as all the data are considered at once, and the error analysis is
taken care of by the fitter.

The fit model for each charge is created as a clone of a `master' \ac{PDF},
whose signal and background components are identical to those described in
\cref{chap:cpv:prelim_fits}.
Before creating the two models, one per charge, it is specified which
parameters of the master \ac{PDF} are to become charge-specific.
As each \PLambdac\ decay mode may interact differently with the detector than
the corresponding \APLambdac\ mode, the matter and antimatter states may have
different mass resolutions.
This motivates the splitting of all of the signal model parameters.
Likewise, the background composition can be different for the positively- and
negatively-charged final states, and so the background model parameter
$a_{0}$ is also split.
The signal \ac{PDF}, $f_{+}$ for \PLambdac\ and $f_{-}$ for \APLambdac, is
then
\begin{equation}
  f_{\pm}(x; \mu_{\pm}, w_{\pm}, \alpha_{\pm}) =
    \alpha_{\pm}{}G(x; \mu_{\pm}, \alpha_{\pm}w_{\pm}) +
    (1 - \alpha_{\pm})G(x; \mu_{\pm}, \frac{\alpha_{\pm}}{w_{\pm}}),
\end{equation}
and the corresponding background \ac{PDF} is
\begin{equation}
  g_{\pm}(x) = 1 + a_{\pm,0}x,
\end{equation}
as in \cref{eqn:cpv:prelim_fits:sig_model}
and~\cref{eqn:cpv:prelim_fits:bkg_model}.
The width parameters $\sigma_{\pm, 1}$ and $\sigma_{\pm, 2}$ of the signal
model are defined as in \cref{eqn:cpv:prelim_fits:sigma_def}.
The total \ac{PDF} for each charge is
\begin{equation}
  h_{\pm}(x) = \nsigpm f_{\pm}(x) + \nbkgpm g_{\pm}(x),
\end{equation}
where the signal and background yields are parameterised as
\begin{align}
  \nsigpm &= \frac{1}{2}\nsig(1 \pm \ARaw),\\
  \nbkgpm &= \frac{1}{2}\nbkg(1 \pm \ARawBg).
\end{align}
Here, \nsig\ and \nbkg\ are the signal and background yields for the
\emph{combined} \PLambdac\ and \APLambdac\ sample.
This parameterisation of the charge-specific yields permits $\ARaw$ to be
measured directly from the fit.

The fully selected data are used, with one simultaneous fit performed for each
year and magnet polarity sub-sample.
The loss function is defined as a \chisq, which is minimised with \migrad\ and
a covariance matrix is computed with
\hesse~\cite{James:1975dr,James:1994vla}.

\section{Blinding}
\label{chap:cpv:araw:blinding}

The blinding is done by fitting a variable as the \emph{transform} of \ARaw.
The transformation itself can be adding some random offset to the central
value, or changing its order of magnitude, but it is applied in such a way as
to leave the uncertainty on the parameter estimate unchanged.
Only the transformed value is reported, leaving the unblinded value hidden.

This analysis applies a linear blinding offset, transforming the unblinded
central value $x$ as
\begin{equation}
  x' = x + \beta\xi,
\end{equation}
where $\xi$ is a number sampled from a normal distribution with mean zero and
unit width, and $\beta$ is a known scaling parameter, which is chosen to be
$0.25$.
The number $\xi$ is chosen deterministically from a string in the fitting
program, such that the same input string always gives the same $\xi$, but in a
way that the smallest changes in the input change the output unpredictably.
In this way, the string is effectively the seed for a random number generator,
and $\xi$ is the first number returned by the generator.
Different strings are used for different modes, but are the same for the fits
to the different data sub-samples.
% The string used for the \pKK mode is \texttt{2ONNiFZLqk4yvv6ovBeg} and the
% string used for the \ppipi mode is \texttt{4VjezpHK0wzlZ6qbeolq}.

As the blinding offsets are different for each mode, the difference between
$\ARaw(\pKK)$ and $\ARaw(\ppipi)$, that is \dACP, is also blinded.
However, the difference between two blinded values of \dACP\ is equal to the
difference between the unblinded values.
This is useful for systematic studies, which will be described in
\cref{chap:cpv:syst}.
Similarly, as the blinding parameters are the same for all data sub-samples,
it is still meaningful to compare blinded values of \ARaw\ and \dACP\ between
them to check for consistency, as the differences are not blinded.
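The deterministic string-to-offset mapping can be illustrated with the following Python sketch; it is not the implementation used in the fitting program, and the seed string, hash choice and example values are purely illustrative.

\begin{verbatim}
import hashlib
import numpy as np

def blind(x, seed_string, beta=0.25):
    """Blind a central value x with a linear offset beta*xi, where xi is a
    standard-normal number derived deterministically from seed_string."""
    digest = hashlib.sha256(seed_string.encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "little"))
    xi = rng.standard_normal()   # first number returned by the generator
    return x + beta * xi

# the same string always gives the same offset, so repeated fits are
# consistent, while the uncertainty of the estimate is left untouched
print(blind(0.003, "example-mode-string"))
print(blind(0.003, "example-mode-string"))

# values blinded with the *same* string have an unblinded difference
print(blind(0.003, "example-mode-string") - blind(0.001, "example-mode-string"))
\end{verbatim}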
In addition to blinding the values of \ARaw\ and \dACP, the plots of the fits to the \APLambdac\ data are blind (as comparing \PLambdac\ and \APLambdac\ fits side-by-side would allow one to infer the size of the asymmetry), as are plots of the data and model asymmetry in bins of \PLambdac\ mass. The pull plots and \chisq\ values are not blinded in the fit presentations as they do not indicate the asymmetry value. \section{Fit results} \label{chap:cpv:araw:results} Fits to the \pKK\ and \ppipi\ 2012 magnet down data are shown in \cref{fig:cpv:araw:fits:pKK,fig:cpv:araw:fits:ppipi}. The corresponding correlation matrices for the fits are given in \cref{fig:cpv:araw:correlation}, and the parameter values are given in \cref{tab:cpv:araw:params:pKK,tab:cpv:araw:params:ppipi}. \begin{figure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{cpv/araw/LcTopKK_2012_MagDown_fit-Lcp.pdf} \caption{\PLambdac} \label{fig:cpv:araw:fits:pKK:Lcp} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{cpv/araw/LcTopKK_2012_MagDown_fit-Lcm.pdf} \caption{\APLambdac} \label{fig:cpv:araw:fits:pKK:Lcm} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{cpv/araw/LcTopKK_2012_MagDown_fit_pdf_araw.pdf} \caption{\ARaw} \label{fig:cpv:araw:fits:pKK:ARaw} \end{subfigure} \caption{% Results of the simultaneous fit to the \pKK\ 2012 magnet down dataset. The difference between the \PLambdac\ data and model (\subref*{fig:cpv:araw:fits:pKK:Lcp}) and the \APLambdac\ data and model (\subref*{fig:cpv:araw:fits:pKK:Lcm}) is shown in the bottom plot~(\subref*{fig:cpv:araw:fits:pKK:ARaw}). The full offline selection is applied. The solid blue line is the total fit to the data in black points, and the dotted red and dashed blue lines are the background and signal components, respectively. Below each fit is a pull plot, showing the difference between the total fit model and the data in each bin, normalised by the Poisson uncertainty on the number of entries in that bin. } \label{fig:cpv:araw:fits:pKK} \end{figure} \begin{figure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{cpv/araw/LcToppipi_2012_MagDown_fit-Lcp.pdf} \caption{\PLambdac} \label{fig:cpv:araw:fits:ppipi:Lcp} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{cpv/araw/LcToppipi_2012_MagDown_fit-Lcm.pdf} \caption{\APLambdac} \label{fig:cpv:araw:fits:ppipi:Lcm} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{cpv/araw/LcToppipi_2012_MagDown_fit_pdf_araw.pdf} \caption{\ARaw} \label{fig:cpv:araw:fits:ppipi:ARaw} \end{subfigure} \caption{% Results of the simultaneous fit to the \ppipi\ 2012 magnet down dataset. The difference between the \PLambdac\ data and model (\subref*{fig:cpv:araw:fits:ppipi:Lcp}) and the \APLambdac\ data and model (\subref*{fig:cpv:araw:fits:ppipi:Lcm}) is shown in the bottom plot~(\subref*{fig:cpv:araw:fits:ppipi:ARaw}). The full offline selection is applied. The solid blue line is the total fit to the data in black points, and the dotted red and dashed blue lines are the background and signal components, respectively. Below each fit is a pull plot, showing the difference between the total fit model and the data in each bin, normalised by the Poisson uncertainty on the number of entries in that bin. 
}
\label{fig:cpv:araw:fits:ppipi}
\end{figure}

\begin{figure}
  \begin{center}
  \begin{subfigure}[b]{0.6\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/LcTopKK_2012_MagDown_correlation_matrix.pdf}
    \caption{\pKK}
    \label{fig:cpv:araw:correlation:pKK}
  \end{subfigure}\\
  \begin{subfigure}[b]{0.6\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/LcToppipi_2012_MagDown_correlation_matrix.pdf}
    \caption{\ppipi}
    \label{fig:cpv:araw:correlation:ppipi}
  \end{subfigure}
  \end{center}
  \caption{%
    Correlation matrices for the fit parameters used in the simultaneous fit
    to the \PLambdac\ and \APLambdac\ mass spectra in the fully selected 2012
    magnet down dataset for \pKK~(\subref*{fig:cpv:araw:correlation:pKK}) and
    \ppipi~(\subref*{fig:cpv:araw:correlation:ppipi}).
    The corresponding fits are shown in
    \cref{fig:cpv:araw:fits:pKK,fig:cpv:araw:fits:ppipi}.
  }
  \label{fig:cpv:araw:correlation}
\end{figure}

\begin{table}
  \centering
  \caption{%
    Model parameters as determined in the simultaneous fit to the 2012 magnet
    down subset of the \pKK\ data.
  }
  \label{tab:cpv:araw:params:pKK}
  \input{tables/cpv/araw/LcTopKK_2012_MagDown_fit-parameters}
\end{table}

\begin{table}
  \centering
  \caption{%
    Model parameters as determined in the simultaneous fit to the 2012 magnet
    down subset of the \ppipi\ data.
  }
  \label{tab:cpv:araw:params:ppipi}
  \input{tables/cpv/araw/LcToppipi_2012_MagDown_fit-parameters}
\end{table}

\section{Validation}
\label{chap:cpv:araw:validation}

It has been shown in \cref{chap:cpv:prelim_fits:validation} that the
measurements of the signal and background yields in the fits to the
charge-combined data samples are not biased.
It remains to validate the simultaneous fit that has been described in this
\lcnamecref{chap:cpv:araw}.
This is done using real data: the fits are performed with the charge of the
\PLambdac\ candidate being randomly assigned.
The true values of \ARaw\ and \ARawBg\ should then be zero.
This checks for mistakes in the implementation of the fitter, and that the
uncertainty on those parameters is not under- or overestimated.

\subsection{Null test}
\label{chap:cpv:araw:validation:null}

Fits as described in this \lcnamecref{chap:cpv:araw} are performed on real
data where the charge of the \PLambdac\ candidate is randomly assigned, with
there being an equal probability of any one decay being assigned a positive or
a negative charge.
One thousand permutations of the charge assignments are tested for each mode
and data sub-sample; the fit is performed for each permutation and the value
of \ARaw\ is recorded.
As with the studies described in \cref{chap:cpv:prelim_fits:validation}, the
pull distributions of \ARaw\ should have a width of one, and the unblinded
value should have a mean of zero.
% As the true value of \ARaw\ is known when a set of random chosen signs is used,
% being zero, such a study could unblind the value of \ARaw, as an unbiased
% fitter should measure \ARaw\ to be the value of the blinding offset.
% To prevent this, the blinding string for each mode is changed, such that the
% blinding offset for this study is different to that used in the nominal fits.
The true charge of the \PLambdac\ candidates does not enter the fit, and so
the charge permutation experiments are performed without blinding.
The results are shown in
\cref{fig:cpv:araw:validation:null:pKK,fig:cpv:araw:validation:null:ppipi}
for the 2012 magnet down \pKK\ and \ppipi\ data.
The width of the pull distribution of \ARaw\ for both is consistent with
unity, and the means are consistent with zero, indicating that the fit is
unbiased.

\begin{figure}
  \begin{subfigure}[t]{0.32\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/validation/LcTopKK_2012_MagDown_araw.pdf}
    \caption{Values}
    \label{fig:cpv:araw:validation:null:pKK:values}
  \end{subfigure}
  \begin{subfigure}[t]{0.32\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/validation/LcTopKK_2012_MagDown_araw_err.pdf}
    \caption{Errors}
    \label{fig:cpv:araw:validation:null:pKK:errors}
  \end{subfigure}
  \begin{subfigure}[t]{0.32\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/validation/LcTopKK_2012_MagDown_araw_pull.pdf}
    \caption{Pulls}
    \label{fig:cpv:araw:validation:null:pKK:pulls}
  \end{subfigure}
  \caption{%
    Validation of the simultaneous fits to the \PLambdac\ and \APLambdac\
    mass spectra in the 2012 magnet down \pKK\ dataset, using randomly
    assigned \PLambdac\ charges.
    The plots show the distribution of the central values
    (\subref*{fig:cpv:araw:validation:null:pKK:values}), uncertainties
    (\subref*{fig:cpv:araw:validation:null:pKK:errors}), and pulls
    (\subref*{fig:cpv:araw:validation:null:pKK:pulls}) of the \ARaw\
    parameter, assuming that the true value is zero.
    The pull distribution is overlaid with a fit of a Gaussian distribution
    with mean $\mu$ and width $\sigma$.
  }
  \label{fig:cpv:araw:validation:null:pKK}
\end{figure}

\begin{figure}
  \begin{subfigure}[t]{0.32\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/validation/LcToppipi_2012_MagDown_araw.pdf}
    \caption{Values}
    \label{fig:cpv:araw:validation:null:ppipi:values}
  \end{subfigure}
  \begin{subfigure}[t]{0.32\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/validation/LcToppipi_2012_MagDown_araw_err.pdf}
    \caption{Errors}
    \label{fig:cpv:araw:validation:null:ppipi:errors}
  \end{subfigure}
  \begin{subfigure}[t]{0.32\textwidth}
    \includegraphics[width=\textwidth]{cpv/araw/validation/LcToppipi_2012_MagDown_araw_pull.pdf}
    \caption{Pulls}
    \label{fig:cpv:araw:validation:null:ppipi:pulls}
  \end{subfigure}
  \caption{%
    Validation of the simultaneous fits to the \PLambdac\ and \APLambdac\
    mass spectra in the 2012 magnet down \ppipi\ dataset, using randomly
    assigned \PLambdac\ charges.
    The plots show the distribution of the central values
    (\subref*{fig:cpv:araw:validation:null:ppipi:values}), uncertainties
    (\subref*{fig:cpv:araw:validation:null:ppipi:errors}), and pulls
    (\subref*{fig:cpv:araw:validation:null:ppipi:pulls}) of the \ARaw\
    parameter, assuming that the true value is zero.
    The pull distribution is overlaid with a fit of a Gaussian distribution
    with mean $\mu$ and width $\sigma$.
  }
  \label{fig:cpv:araw:validation:null:ppipi}
\end{figure}
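The logic of the charge-permutation study can be sketched in a few lines of Python; the toy below replaces the full simultaneous mass fit with a simple counting estimate of \ARaw, so it only illustrates why the pull distribution is expected to have zero mean and unit width, and the sample sizes are arbitrary.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n_candidates, n_permutations = 50_000, 1000

pulls = []
for _ in range(n_permutations):
    # assign each candidate a positive or negative charge with equal probability
    charge = rng.choice([-1, +1], size=n_candidates)
    n_plus = np.sum(charge == +1)
    n_minus = n_candidates - n_plus
    a_raw = (n_plus - n_minus) / n_candidates
    sigma = np.sqrt((1 - a_raw**2) / n_candidates)  # binomial uncertainty
    pulls.append(a_raw / sigma)                     # true value is zero

pulls = np.asarray(pulls)
print(pulls.mean(), pulls.std())  # expected to be close to 0 and 1
\end{verbatim}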
{ "alphanum_fraction": 0.7460517624, "avg_line_length": 45.2527173913, "ext": "tex", "hexsha": "06d39eab6714c0b215ac3fa6b85258a22890834a", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-01-06T23:42:27.000Z", "max_forks_repo_forks_event_min_datetime": "2019-05-13T07:54:57.000Z", "max_forks_repo_head_hexsha": "d727d04b7ee619ba0eb45c7faf1004eb418e046e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alexpearce/Thesis", "max_forks_repo_path": "chapters/cpv/measuring_araw.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d727d04b7ee619ba0eb45c7faf1004eb418e046e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alexpearce/Thesis", "max_issues_repo_path": "chapters/cpv/measuring_araw.tex", "max_line_length": 96, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d727d04b7ee619ba0eb45c7faf1004eb418e046e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alexpearce/Thesis", "max_stars_repo_path": "chapters/cpv/measuring_araw.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-18T00:58:34.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-18T00:58:34.000Z", "num_tokens": 5155, "size": 16653 }
\chapter{Design}
\label{Design}

In this chapter, we first introduce the requirements and assumptions we make for the design of WebCure. Having set them, we then describe the design of the system in detail.

\section{Requirements}
\label{4-Requirements}

We list the functional requirements of WebCure in \tableref*{table:req1} and the non-functional requirements in \tableref*{table:req2}.

\begin{table}[!htbp]
\centering
\caption{Functional requirements.}
\label{table:req1}
\begin{tabular}{|p{1cm}|p{14cm}|}
\hline
R1 & Retrieval, increment and decrement of the counter CRDT should be possible. \\ \hline
R2 & Retrieval, adding and removing elements from the set CRDT should be possible. \\ \hline
R3 & Retrieval, assigning and resetting the multi-value register CRDT should be possible. \\ \hline
R4 & Retrieval of elements of any supported CRDT should be possible according to the passed timestamp. \\ \hline
R5 & The client should cache only the latest data available at the server's side. \\ \hline
R6 & It should not be possible to create elements of different CRDTs with the same name (due to limitations of AntidoteDB). \\ \hline
R7 & When offline, it should be possible to make read/write operations on supported CRDTs. \\ \hline
R8 & The user should be able to remove from the client any stored data element. \\ \hline
R9 & Any operations performed offline, once the connection is restored, should be sent to the server immediately. \\ \hline
R10 & The execution model of offline operations at the client should be sequential (updates are ordered). \\ \hline
R11 & When the connection is re-established after having data changes in offline mode, the client storage should be updated appropriately (with a consideration of the client's offline changes and possible changes on the server). \\ \hline
\end{tabular}
\caption*{}
\end{table}

\begin{table}[!htbp]
\centering
\caption{Non-functional requirements.}
\label{table:req2}
\begin{tabular}{|p{1cm}|p{14cm}|}
\hline
NFR1 & The system should be available online and offline (except for the functionality with timestamp-related updates). \\ \hline
NFR2 & The system should be available with a poor network connection. \\ \hline
\end{tabular}
\caption*{}
\end{table}

First of all, looking at the first three functional requirements, \textit{R1--R3}, we see that the CRDTs selected for the implementation are the counter, the set and the multi-value register. The reason is that these data types cover most of the operations on CRDTs and, if our design works for them, it should work for the remaining types as well. Requirement \textit{R4} allows the user to get the data from the server at different timestamps and to compare it. Next, concerning requirement \textit{R5}: even though it is possible to request data at different timestamps, it is vital for the client to cache only the latest data available at the server. The reason is to make it possible to continue working on the most relevant data while offline. Requirement \textit{R6} is due to a limitation of AntidoteDB and is related to one of the assumptions we make in the following section. Requirements \textit{R7--R9} specify what kind of actions the user can perform on a client when offline, as well as how these actions are synchronised with the server. Finally, requirements \textit{R10} and \textit{R11} guarantee that the changes performed offline at the client, as well as changes at the server side, are synchronised in a way that satisfies the causal consistency model.
Next, there are the non-functional requirements \textit{NFR1} and \textit{NFR2}. Both of them support our claim that a client can work offline, as well as under uncertain network conditions.

\section{Assumptions}
\label{4-Assumptions}

In this section, we give the list of assumptions we make for WebCure.

\begin{enumerate}
\item {\textbf{Timestamps}. Firstly, the database storage used for the server's side should have the concept of timestamps (like AntidoteDB, described in \secref*{2-antidotedb}), in order for the protocol we are going to describe in \secref*{4-protocol} to work correctly.}
\item {\textbf{Cache is persistent}. For WebCure to work online and, especially, offline, we assume that the browser's cache is safe from automatic clearing. Conversely, if the cache could be cleared automatically depending on the browser's behaviour, it would be impossible to support the claim that the application can work offline. To guarantee this assumption, the Persistent Storage API described in \secref*{persistentstorage} can be used.}
\item{\textbf{Name duplicates}. We do not allow the creation of different CRDT elements with the same name in the system, due to limitations of AntidoteDB, as requirement \textit{R6} in \tableref*{table:req1} describes. As AntidoteDB is still under active development, the database currently crashes when there is an attempt to create elements of different CRDTs with the same name. Thus, this condition has to be fulfilled.}
\item{\textbf{Server's database is always on}. We assume that the server's database is not going to be reset and lose all its data. The client relies entirely on the server's storage for the synchronisation and only sends back operations performed offline on the client's side. Therefore, it would be impossible to restore the server's database from the client's storage, even if the latter was up-to-date before the server's data loss. With additional changes to the current protocol this might become possible, but that is not a topic we cover in this thesis. However, even in such a situation, the client would still be able to continue its offline work.}
\end{enumerate}

Having specified the requirements and assumptions, we can now design the protocol of the system.

\section{Protocol}
\label{4-protocol}

The fundamental part of WebCure is its protocol design. We are going to describe it in an event-based way, in the form of pseudo-code, in the following sections.
% \begin{itemize}
% \item {A client receives an update from the server}
% \item {A client sends an update to the server}
% \item {Two clients interact with a server}
%\end{itemize}

\subsection{Data transmission}

As we already know from \chapref*{Background}, since AntidoteDB uses CRDT data types, there are two options for updating the database: state-based and operation-based. This thesis considers only the operation-based approach, as it has the advantage over the state-based approach of transferring less data in most situations. Therefore, whenever a client needs to update the database, it will send a list of operations to the server. However, whenever it needs to read a value, it will receive the current state of the object from the database. For this thesis, we are going to use the counter, set and multi-value register data types, to which the reader was introduced in \secref*{2-crdts}.
\subsection{Description} \begin{figure}[!htb] \begin{center} \def\svgwidth{0.6\linewidth} \input{images/protocol/overview.pdf_tex} \caption {An overview of the communication protocol.} \label{fig:protocol1} \end{center} \end{figure} Firstly, as we would like to focus on the communication part between a server and a client, we will for now keep both of them as black boxes\footnote{In software engineering, a black box is a system, which can be viewed in terms of its inputs and outputs, without the understanding of its internal workings.\cite{49}}, as they are represented in \figref*{fig:protocol1}. Next, we will go through different stages of their communication and describe, how we handled these processes. \subsection*{Graphics notations} \begin{figure}[!htb] \centering \def\svgwidth{0.35\linewidth} \subfloat[]{{\input{images/notations/timeline.pdf_tex}}}% \qquad \def\svgwidth{0.35\linewidth} \subfloat[]{{\input{images/notations/transmission.pdf_tex}}}% \def\svgwidth{0.35\linewidth} \subfloat[]{{\input{images/notations/data.pdf_tex}}}% \qquad \def\svgwidth{0.35\linewidth} \subfloat[]{{\input{images/notations/operation.pdf_tex}}}% \qquad \caption{An overview of notations used in the following chapters for the protocol explanation.}% \label{fig:notations}% \end{figure} Consider the notations, which are going to be used for a further protocol description. In \figref*{fig:notations} \textbf{(a)}, we can see a notation for the timeline. Timelines will be used for the matter of showing the sequence of events happening. In \figref*{fig:notations} \textbf{(b)}, the arrow shows the transmission of data between a subsystem \textit{A} and subsystem \textit{B}, as well as its direction and a command. \figref*{fig:notations} \textbf{(c)} represents the state of the data on a system's side, while \figref*{fig:notations} \textbf{(d)} points to a timestamp, at which an operation that changes the system's storage was applied. Next, as we already mentioned, we will explain the protocol in an event-based way. \subsection*{A client receives an update from the server} \begin{figure}[!htb] \begin{center} \def\svgwidth{\linewidth} \input{images/architecture/read.pdf_tex} \caption {The communication between a client and a server for the read function.} \label{fig:design2} \end{center} \end{figure} We assume that a client initiates its work with empty storage. Then, a user might want to request the actual data from the server. In this case, as can be seen in \figref*{fig:design2}, a user has to pass to the server an \textit{id} of the data to \textit{read}. If the request is successful, the server is going to \textit{respond} with a \textit{value} for the requested \textit{id} and the timestamp of the last write -- \textit{t\textsubscript{0}}, so the client will store this information in its own storage. 
\begin{lstlisting}[caption={Pseudocode for requesting the data: client.}, label={lst:read1}]
// Read function that pulls database changes
// @param id: the id of the object, for which the update was requested;
function read(id) {

    GetHttpRequest( // send an http-request to get the data from the server by id
        id,
        function onSuccess(value, timestamp) { // get the value and timestamp from the server

            // create an object that maintains all necessary data
            var item = {
                id: id, // id of a data item
                operations: [], // operations performed offline
                sentOperations: [], // operations performed offline and already sent to the server
                state: value, // the state received from the server
                type: 'set' // the type of CRDT
            };

            storeInCache(item); // cache the item created earlier
            storeInCache(timestamp); // cache the timestamp
        },
        function onFail() {
            getFromCache(id); // get the object from cache by id
        }
    );
}
\end{lstlisting}

In \lstref*{lst:read1} we can see the pseudocode of the logic for the \textit{read} functionality. At \textit{line 5}, the asynchronous function \textit{GetHttpRequest} is called; it has two callbacks -- \textit{onSuccess}, in case the request succeeds, and \textit{onFail}, in case of a failure. In case of success, the value and the timestamp associated with the passed \textit{id} are fetched from the server. After that, at \textit{line 10} we create an element \textit{item}, which has the following properties: \textit{id} for the id of the data item, \textit{state} for the actual state of the data item on the server, \textit{type} for the type of CRDT, \textit{operations} for the operations performed at the client's side while offline, and \textit{sentOperations} for the operations performed at the client's side while offline that have already been sent to the server. Next, the function \textit{storeInCache} is called twice, at lines \textit{18 and 19}: firstly, to store the object \textit{item} in the cache for offline use; secondly, to store the \textit{timestamp} received from the server. A client will always receive either the data associated with the latest timestamp from the server or, if the client chooses to specify a timestamp, the data associated with that timestamp. In case of failure, however, the method \textit{getFromCache} is called with the argument \textit{id}, as can be seen at \textit{line 22}. If an object with this \textit{id} exists in the client's cache, it will be returned.

Now, imagine that after receiving an element \textit{id} from the server, the client wants to change it and send it back to the server.

\subsection*{A client sends an update to the server}

\begin{figure}[!htb]
\begin{center}
\def\svgwidth{\linewidth}
\input{images/architecture/write.pdf_tex}
\caption {The communication between a client and a server for the update function.}
\label{fig:design3}
\end{center}
\end{figure}

Looking at \figref*{fig:design3}, in the case of writing information to the server, a client has to send an \textit{id} together with an operation to perform. After that, the server applies the received operation on its side and, in case of success, the new state of the data receives a timestamp \textit{t\textsubscript{1}}, and an acknowledgement of the successful commit is sent back to the client. What happens in case of an unsuccessful acknowledgement will be explained below. Once the client is notified that the update was applied on the server successfully, the user can fetch the latest changes to the client's side.
Thus, when the read request for \textit{id} is sent again, the server will send back the new value -- \textit{value'} and a new timestamp -- \textit{t\textsubscript{1}}, for the element \textit{id}. \begin{lstlisting}[caption={Pseudocode for making a request to change the data: client.}, label={lst:update1}] // Update function that processes user-made update // @param id: an id for the object that should be updated; // @param op: operation performed on the object for the specified id function update(id, op) { PostHttpRequest( // send an http-request to update the data on the server { id, op }, function onSuccess(result) { // no actions performed }, function onFail() { var item = getFromCache(id); // get the object from cache by id item.operations.push(op); // store the operation on a client's side in order to try sending it again later storeInCache(item); // cache an updated item } ); } \end{lstlisting} However, as \lstref*{lst:update1} shows, the function \textit{update} needs to have an access to the parameters \textit{id} and \textit{op}. There is an attempt to send the operation \textit{op} for the element \textit{id} to the server, by using \textit{PostHttpRequest}. It is asynchronous and has two callbacks -- \textit{onSuccess} and \textit{onFail}, just as before it was explained for the \textit{read} function. If the request succeeds, then the client gets notified about it, and no further actions are taken. However, in the case of failure, as we can see at \textit{line 13}, firstly we are getting the object from the cache by \textit{id}. If it exists, then we add the operation \textit{op} to that object into its property \textit{operations}, and, afterwards, store an updated object in cache again using \textit{storeInCache} at \textit{line 15}. That makes the update available while the user is offline and gives an opportunity to send the operation again when the connection is back. Next, imagine that a client loses its network connection, so any updates made from that point onwards will be stored locally. \subsection*{Offline behaviour} \begin{figure}[!htb] \begin{center} \def\svgwidth{\linewidth} \input{images/architecture/offline.pdf_tex} \caption {The communication between a client and a server while offline with a transition to online.} \label{fig:design4} \end{center} \end{figure} Refer to \figref*{fig:design4}, where appropriate markings can clearly distinguish periods when the client was offline and online. The client has an element \textit{id} with a value \textit{value'} at timestamp \textit{t\textsubscript{1}} and then makes a local change applying some operation, which changes the previous value to a new one -- \textit{value''}. Pay attention that a new value does not receive a timestamp assigned to it, while locally: to support the causal consistency claims, the server should take responsibility for assigning timestamps. Then, after some time, the connection gets back, and the client sends an immediate update message to the server with \textit{id} of an element, the applied operation and a timestamp \textit{t\textsubscript{1}}. The server's side, as was already described above, applies that operation on a \textit{t\textsubscript{1}} to back up the causality claims, and returns an acknowledgement of success. Eventually, the client sends a \textit{read} message and gets back the \textit{value''} as well as the assigned to it timestamp \textit{t\textsubscript{2}}. 
\begin{lstlisting}[caption={Pseudocode for sending operations performed offline to the server: client.}, label={lst:offline}]
// Update function that processes user-made updates performed offline when the connection is restored
// @param offlineOperations: an array that contains all the operations performed on the client's side while offline
function synchronise(offlineOperations) {
    if (ONLINE) {
        offlineOperations.forEach(operation => {
            send(operation);
        });
    }
}
\end{lstlisting}

The logic described above can be seen in \lstref*{lst:offline}, which shows the function \textit{synchronise} that should be triggered as soon as the client has restored its network connection. There, we can see that every operation performed offline is sent, one at a time, to the server. To preserve causality, the array must be sorted in the order in which the operations were originally performed (an illustrative end-to-end sketch of this client-side queue is given at the end of this chapter). Now we move to the case where more than one client interacts with the server, in order to see how well the described protocol scales.

\subsection*{Two clients interact with a server}

\begin{figure}[!htb]
	\begin{center}
		\def\svgwidth{\linewidth}
		\input{images/architecture/twoClients.pdf_tex}
		\caption {The communication between two clients and a server.}
		\label{fig:design5}
	\end{center}
\end{figure}

We assume that, initially, as can be seen in \figref*{fig:design5}, the server stores an element \textit{(id, value)} at the timestamp \textit{t\textsubscript{0}}. When both clients request to read the data from the server, they get that data and store it locally. In the figure, \textit{Client 1} acts first and sends an update to the server, changing the value of the element \textit{id} to \textit{value'} at \textit{t\textsubscript{1}}. Observe that \textit{Client 1} does not request the latest data from the server and still only has its local changes. In parallel, \textit{Client 2} makes its change later, at time \textit{t\textsubscript{2}}, and the element \textit{id} is now set to \textit{value''}. Then both clients request the updated data from the server, and both receive the actual value of the element \textit{id} at the timestamp \textit{t\textsubscript{2}}, which is \textit{value''}. We would like to stress that all parties -- the server and both clients -- end up with the same data.

Now we would like to give a brief overview of the next two chapters: first, in \chapref*{Technologies} we introduce the technologies used to implement the described protocol, and then, in \chapref*{Implementation} we go through its development.
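As announced above, the listing below is a compact, purely illustrative sketch -- in Python, with hypothetical names -- of the client-side logic of this chapter: caching reads, queueing operations made while offline, and flushing them, in order, once the connection returns. It assumes some server object exposing \textit{read} and \textit{apply\_update} operations and is only meant to summarise the protocol; the real implementation is the subject of \chapref*{Implementation}.

\begin{lstlisting}[language=Python, caption={Illustrative sketch (not part of the implementation): client-side cache, offline queue and synchronisation.}]
class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}     # id -> item, mirroring the pseudocode above
        self.online = True

    def read(self, id):
        if self.online:
            value, timestamp = self.server.read(id)
            self.cache[id] = {"id": id, "state": value, "timestamp": timestamp,
                              "operations": [], "sentOperations": []}
        return self.cache[id]

    def update(self, id, op):
        # assumes the item was read (and therefore cached) beforehand
        if self.online:
            self.server.apply_update(id, op)
        else:
            self.cache[id]["operations"].append(op)  # queue for later replay

    def synchronise(self):
        # called when the connection is restored; operations are sent one at a
        # time, in the order they were performed, to preserve causality
        if not self.online:
            return
        for id, item in self.cache.items():
            while item["operations"]:
                op = item["operations"].pop(0)
                self.server.apply_update(id, op)
                item["sentOperations"].append(op)
\end{lstlisting}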
{ "alphanum_fraction": 0.759040404, "avg_line_length": 80.8163265306, "ext": "tex", "hexsha": "319336cfbdc166ccc3b6799c305398530debce61", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b5a4db911649488be6c670ab5357648b6b03433a", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "red17electro/WebCache", "max_forks_repo_path": "Arbeit/chapters/4-Design.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "b5a4db911649488be6c670ab5357648b6b03433a", "max_issues_repo_issues_event_max_datetime": "2019-01-16T11:10:10.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-16T11:09:33.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "red17electro/WebCache", "max_issues_repo_path": "Arbeit/chapters/4-Design.tex", "max_line_length": 1296, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b5a4db911649488be6c670ab5357648b6b03433a", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "red17electro/WebCache", "max_stars_repo_path": "Arbeit/chapters/4-Design.tex", "max_stars_repo_stars_event_max_datetime": "2019-01-16T11:04:50.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-16T11:04:50.000Z", "num_tokens": 4834, "size": 19800 }
\documentclass{beamer} \usetheme{CambridgeUS} \title[Process]{The Koopman operator} \subtitle{Nonlinear MPC + Kalman Filter} \institute[Polimi]{Politecnico di Milano} \author{Sergio Vanegas} \date{\today} \usepackage{listings} \usepackage{caption} \usepackage{subcaption} \usepackage{dirtytalk} \usepackage{graphicx} \usepackage{siunitx} \begin{document} \begin{frame}[plain,noframenumbering] \maketitle \end{frame} \begin{frame}{Table of Contents} \tableofcontents \end{frame} \section{Introduction} \begin{frame}{Introduction} In this presentation we present the final version of the nonlinear MPC for the Van Der Pol oscillator that will be used as reference from now on, and then proceed to evaluate different methodologies to replicate its behaviour using the Koopman operator. We start by defining the nonlinear MPC, simulation parameters and data format for replicability's sake. Then, we introduce the memory-reliant Koopman controller, basing the amount of stored samples on the Control Horizon parameter of the reference MPC. In this section we also compare its performance to the (polished) memoryless implementation discussed last meeting. After that, we discuss the Kalman-informed framework from which we will repeat the Koopman identification process, including the memoryless vs memory-reliant comparison. Finally, we draw some conclusions based on the overall results and determine the best implementation for each scenario. \end{frame} \section{Reference controller} \begin{frame}{Nonlinear MPC - Parameters} \begin{itemize} \item Sampling time: $0.01$ seconds \item Total simulation time: $2000$ seconds \item Number of I/O: 2/2 (identity relationship) \item Internal model sub-step: 10 steps $\rightarrow 0.001$ seconds \item Prediction horizon: 15 samples $\rightarrow 0.15$ seconds \item Control horizon: 3 samples $\rightarrow 0.03$ seconds \item Output variables' weights: $\left[1.0 \quad 0.0\right]$ \item Second state boundaries: $\left[-1.0 \quad 1.0\right]$ \end{itemize} Once again, we use the Van der Pol simulations as reference. We remove the integrator block reset signal since we are now sure the controller is stable (first-state control + second-state boundaries). \end{frame} \begin{frame}{Simulink model - Reference} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Simulink_VDP.png} \end{figure} \end{frame} \section{Koopman controller} \begin{frame}{Nonlinear Koopman controller - Parameters} Since we are now only predicting the immediately successive state instead of the whole observable collection, we have a lot more freedom regarding the number of observables and the dataset size used during the identification algorithm. \begin{itemize} \item Number of observables: 25 \item Input type: hybrid (reference, noisy measured states and their difference) \item Regularization coefficient: $0.001$ \item Longest delay: $z^{-2}$ \end{itemize} Given the non-linearity of the new implementation, the matrix multiplications and ZOH's had to be implemented manually, fitting the observable function inside the delayed feedback loop. The $L^2$ error was calculated w.r.t. the noiseless signals. 
\end{frame} \begin{frame}{Simulink model - Benchmark} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Simulink_Koopman.png} \end{figure} \end{frame} \begin{frame}{Nonlinear Koopman controller - Numerical results} We numerically compare the performance of the controllers by observing their $L^2$ error throughout a $20$ second\footnote{time-frame selected based on the duration of the recorded trajectories} over three different references. \begin{itemize} \item Impulse error (single sample, second-state reference equal to 0) \begin{itemize} \item MPC $\rightarrow \left(0.114,0.353\right)$ \item Memoryless Koopman $\rightarrow \left(0.105,0.082\right)$ \item Memory-reliant Koopman $\rightarrow \left(0.106,0.087\right)$ \end{itemize} \item Step error ($@t=0.0$, second-state reference equal to 0) \begin{itemize} \item MPC $\rightarrow \left(0.450,0.705\right)$ \item Memoryless Koopman $\rightarrow \left(0.559,0.623\right)$ \item Memory-reliant Koopman $\rightarrow \left(0.556,0.639\right)$ \end{itemize} \item Trajectory error (non-convergent reference) \begin{itemize} \item MPC $\rightarrow \left(0.352,0.649\right)$ \item Memoryless Koopman $\rightarrow \left(0.486,0.477\right)$ \item Memory-reliant Koopman $\rightarrow \left(0.490,0.511\right)$ \end{itemize} \end{itemize} \end{frame} \begin{frame}{Nonlinear Koopman controller - Impulse response} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Undelayed_Koopman_Pulse.png} \caption{Memoryless controller} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Delayed_Koopman_Pulse.png} \caption{Memory-reliant controller} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Nonlinear Koopman controller - Step response} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Undelayed_Koopman_Step.png} \caption{Memoryless controller} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Delayed_Koopman_Step.png} \caption{Memory-reliant controller} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Nonlinear Koopman controller - Trajectory tracking} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Undelayed_Koopman_Ref.png} \caption{Memoryless controller} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Delayed_Koopman_Ref.png} \caption{Memory-reliant controller} \end{subfigure} \end{figure} \end{frame} \section{Kalman-informed Koopman controller} \begin{frame}{Kalman-informed MPC - Considerations} \begin{itemize} \item Same MPC controller parameters \item Kalman filter fed-back with the noisy measured first state (measure function represented by matrix $\left[1 \quad 0\right]$) \item Jacobian generated by interpreting the forward-Euler scheme as a single sub-step operation \item 0 variance inside the state function; all noise is considered to come from the measurement function \item We keep using the Van der Pol clean simulations as reference. \end{itemize} \end{frame} \begin{frame}{Simulink model - Reference} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Simulink_Kalman.png} \end{figure} \end{frame} \begin{frame}{Kalman-Koopman controller - Parameters} A similar pipeline was intended for the Kalman-informed Koopman controller. 
Nevertheless, a memory-reliant version could not be evaluated, since no combination of parameters stabilized the controller (memory size for both input and output, regularization coefficient, etc). Therefore, different observable collections were compared in its stead. \begin{itemize} \item Number of observables: 10/20/50 \item Input type: hybrid (reference, noisy first measured state and its difference) \item Regularization coefficient: 0.001 \end{itemize} \end{frame} \begin{frame}{Simulink model - Benchmark} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Simulink_KK.png} \end{figure} \end{frame} \begin{frame}[allowframebreaks]{Kalman-Koopman controller - Numerical results} We repeat the numerical benchmark used for the Nonlinear Koopman controller but, as mentioned before, we compare different library sizes instead of memoryless vs memory-reliant. \begin{itemize} \item Impulse error (single sample, second-state reference equal to 0) \begin{itemize} \item Kalman-MPC $\rightarrow \left(0.114,0.269\right)$ \item 10 observables $\rightarrow \left(2.759,1.944\right)$ \item 20 observables $\rightarrow \left(2.763,1.950\right)$ \item 50 observables $\rightarrow \left(2.780,1.972\right)$ \end{itemize} \item Step error ($@t=0.0$, second-state reference equal to 0) \begin{itemize} \item Kalman-MPC $\rightarrow \left(0.496,0.694\right)$ \item 10 observables $\rightarrow \left(1.833,0.401\right)$ \item 20 observables $\rightarrow \left(1.882,0.401\right)$ \item 50 observables $\rightarrow \left(1.873,0.402\right)$ \end{itemize} \item Trajectory error (non-convergent reference) \begin{itemize} \item Kalman-MPC $\rightarrow \left(0.333,0.534\right)$ \item 10 observables $\rightarrow \left(2.895,2.215\right)$ \item 20 observables $\rightarrow \left(2.808,2.178\right)$ \item 50 observables $\rightarrow \left(2.766,2.156\right)$ \end{itemize} \end{itemize} \end{frame} \begin{frame}{Kalman-Koopman controller - Impulse response} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_10_Pulse.png} \caption{10 functions} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_20_Pulse.png} \caption{20 functions} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_50_Pulse.png} \caption{50 functions} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Kalman-Koopman controller - Step response} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_10_Step.png} \caption{10 functions} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_20_Step.png} \caption{20 functions} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_50_Step.png} \caption{50 functions} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Kalman-Koopman controller - Trajectory tracking} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_10_Ref.png} \caption{10 functions} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_20_Ref.png} \caption{20 functions} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{KK_50_Ref.png} \caption{50 functions} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Kalman-Koopman controller - 
Split operator} An additional setup consisting on 2 different Koopman operators (one for the Kalman filter and one for the MPC) was tested; compared to the combined controller, the results were noticeably better (although the impulse response diverged in the same way). Both a memoryless and a memory-reliant version, paired with the memoryless nonlinear Koopman controller, were put to the test. \begin{itemize} \item Number of observables: $50$ \item Input type: direct (noisy measured first state + koopman control signal) \item Regularization coefficient: $0.001$ \end{itemize} \end{frame} \begin{frame}{Simulink model - Split benchmark} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Simulink_Split.png} \end{figure} \end{frame} \begin{frame}{Split K-K controller - Numerical results} \begin{itemize} \item Impulse error (single sample, second-state reference equal to 0) \begin{itemize} \item MPC $\rightarrow \left(0.114,0.269\right)$ \item Memoryless Koopman $\rightarrow \left(2.674,1.800\right)$ \item Memory-reliant Koopman $\rightarrow \left(2.628,1.992\right)$ \end{itemize} \item Step error ($@t=0.0$, second-state reference equal to 0) \begin{itemize} \item MPC $\rightarrow \left(0.496,0.694\right)$ \item Memoryless Koopman $\rightarrow \left(0.583,0.652\right)$ \item Memory-reliant Koopman $\rightarrow \left(0.555,0.666\right)$ \end{itemize} \item Trajectory error (non-convergent reference) \begin{itemize} \item MPC $\rightarrow \left(0.333,0.534\right)$ \item Memoryless Koopman $\rightarrow \left(1.400,1.249\right)$ \item Memory-reliant Koopman $\rightarrow \left(2.165,1.713\right)$ \end{itemize} \end{itemize} \end{frame} \begin{frame}{Split K-K controller - Impulse response} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Undelayed_Split_Pulse.png} \caption{Memoryless controller} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Delayed_Split_Pulse.png} \caption{Memory-reliant controller} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Split K-K controller - Step response} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Undelayed_Split_Step.png} \caption{Memoryless controller} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Delayed_Split_Step.png} \caption{Memory-reliant controller} \end{subfigure} \end{figure} \end{frame} \begin{frame}{Split K-K controller - Trajectory tracking} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Undelayed_Split_Ref.png} \caption{Memoryless controller} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Delayed_Split_Ref.png} \caption{Memory-reliant controller} \end{subfigure} \end{figure} \end{frame} \section{Conclusions} \begin{frame}[allowframebreaks]{Conclusions} \begin{itemize} \item It comes as no surprise that the more measured data there is available, the better. Even by tricking the controllers with delayed data, no tangible improvement was achieved. As it stands now, the most accurate results are consistently produced by a memoryless implementation, even in a full-state observation scenario. 
\item Even though the Koopman framework has proven to be a powerful tool for approximating nonlinear dynamical systems, approximating the concatenation of several processes (here, the Kalman filter plus the controller) with a single operator and expecting a clean observation proved to be quite inaccurate. Splitting the nonlinear systems, however, proved to be a better solution (though still not as good as the full-state-observation case), which can be interpreted as the Koopman operator benefiting from the additional data coming from the Kalman-filtered states.
\item Regardless of the above, using the Koopman operator to approximate the MPC controller with a full-state observation (or, alternatively, with a Kalman filter as a full-state provider) not only yielded satisfactory results but also beat the nonlinear MPC in both stability and steady-state error.
\end{itemize}
\end{frame}

\end{document}
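As a companion to the identification settings listed in these slides (number of observables, regularisation coefficient), the snippet below is a minimal, hedged Python sketch of a ridge-regularised least-squares (EDMD-style) fit of a Koopman operator from observable snapshots. The function and variable names are illustrative assumptions; the actual pipeline behind the results above may differ.

\begin{lstlisting}[language=Python, caption={Illustrative sketch: ridge-regularised EDMD fit of a Koopman operator.}]
import numpy as np

def fit_koopman(psi_x, psi_y, reg=1e-3):
    """psi_x, psi_y: (n_observables, n_samples) observables at steps k and k+1."""
    # Solve K = argmin ||psi_y - K psi_x||_F^2 + reg * ||K||_F^2
    g = psi_x @ psi_x.T + reg * np.eye(psi_x.shape[0])
    a = psi_y @ psi_x.T
    return np.linalg.solve(g.T, a.T).T   # K such that K @ g = a

def predict(K, psi0, n_steps):
    # roll the identified linear dynamics forward in observable space
    traj = [psi0]
    for _ in range(n_steps):
        traj.append(K @ traj[-1])
    return np.stack(traj)
\end{lstlisting}

Given a library of observables evaluated along recorded trajectories, \texttt{fit\_koopman} returns the matrix that best maps each snapshot to its successor, and \texttt{predict} rolls that linear map forward.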
{ "alphanum_fraction": 0.6651938259, "avg_line_length": 42.0148514851, "ext": "tex", "hexsha": "c51ba3115e7c8299d069d5c3eb5afdf42df89b9d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-11-21T14:26:02.000Z", "max_forks_repo_forks_event_min_datetime": "2021-11-21T14:26:02.000Z", "max_forks_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "sergiovaneg/LaTex_Documents", "max_forks_repo_path": "Thesis_Progress_4/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "sergiovaneg/LaTex_Documents", "max_issues_repo_path": "Thesis_Progress_4/main.tex", "max_line_length": 467, "max_stars_count": null, "max_stars_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "sergiovaneg/LaTex_Documents", "max_stars_repo_path": "Thesis_Progress_4/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4419, "size": 16974 }
\documentclass{beamer} \usepackage[utf8]{inputenc} \usepackage{default} \input{preamble-devcon.tex} \title{\textsc{swap, swear and swindle}: \\incentive system for swarm and beyond} \author{Viktor Trón and Aron Fischer} \AtBeginSection[] { \begin{frame}<beamer> \frametitle{Outline} \tableofcontents[currentsection,sectionstyle=show/shaded,subsectionstyle=show/show/shaded,subsubsectionstyle=show/show/show/hide] \end{frame} } \begin{document} \begin{frame} \titlepage \end{frame} \blankslide{\includegraphics[width=0.8\textwidth]{ecosystem0.jpg}} \begin{frame}{Outline} \tableofcontents[subsectionstyle=shaded/shaded,subsubsectionstyle=show/hide/hide] \end{frame} \begin{section}{content delivery} \subsection{data retrieval} \blockslide{\textbf{data out}}{How to retrieve data stored in the swarm.} \begin{frame}{data retrieval} \begin{columns}[T] \begin{column}{0.4\textwidth} \small \begin{itemize} \item<1-> node id, chunk id, function as addresses in the same keyspace \item<4-> dapp retrieves \texttt{awesome-swarm-slides.pdf} \item<5-> get its address \textbf{H} \item<6-> content with address \textbf{H} stored with the node whose own address is \emph{closest} to \textbf{H} \item<7-> swarm's \textbf{retrieval process} is responsible for deliviering \end{itemize} \end{column} \begin{column}{0.6\textwidth} \begin{tikzpicture} \node[scale=0.7]{ \begin{tikzpicture} \node[visible on=<2->] at (2,2) {\textbf{the swarm network:}}; \node[node,visible on=<2->] at (0,0) {}; \node[node,visible on=<2->] at (1,-0.9) {}; \node[node,visible on=<2->] at (0.5,-3) {}; \node[node,visible on=<2->] at (0,-6.3) {}; \node[node,visible on=<2->] at (5.8,-1.2) {}; \node[node,visible on=<2->] at (4,-5.9) {}; \node[node,visible on=<2->] at (4.1,-3) {}; \node[node,visible on=<2->] at (2.1,-2) {}; \node[node,visible on=<2>] at (4,0) (younode) {swarm-node}; \node[peer,visible on=<3->] at (4,0) (you) {You}; \node[visible on=<5->] (chunk) at (1,-5) {$\bullet$}; \node[visible on=<5->] (chunklabel) at (2,-4.5) {H} edge[point,->,visible on =<5->] (chunk); \node[visible on=<2-6>, node] at (-0.5,-4.2) (close) {}; \node[visible on=<7->, peer] at (-0.5,-4.2) (closestnode) {Closest Node}; \node[visible on=<7->] at (-0.5, -5.5) (hlabel) {Look for ``H'' here} edge[visible on=<7->,point,->] (closestnode); \end{tikzpicture} }; \end{tikzpicture} \end{column} \end{columns} \end{frame} \begin{frame}{swarm retrieval process} \begin{tikzpicture} \node[peer,visible on=<1->] at (12,0) (retriever){retriever}; \node[visible on=<13>,scale=2] at (12,-1.5) {\Smiley}; \node[visible on=<2->] (chunk) at (4,-5) {$\bullet$}; \node[visible on=<2->] (chunklabel) at (3,-4) {data address} edge[point,->,visible on =<2->] (chunk); \node[peer,visible on=<4-12>, dimmed on=<13->] at (8.5,-0.5) (connectedpeer){peer} (retriever.-150) edge[point, ->,visible on=<5>,dimmed on=<6-12>, bend left=15] node[below=2pt,visible on=<5>, dimmed on=<6-12>] {request} (connectedpeer.-10); \node[node,visible on=<6-11>, dimmed on=<12->] at (6,-2) (firstnode){some node} (connectedpeer.-70) edge[point,->,visible on=<6>, dimmed on=<7-11>,bend left=30] node[right of=2pt,visible on=<6>, dimmed on=<7-11>] {request} (firstnode.east); \node[node,visible on=<7-10>, dimmed on=<11->] at (7.5,-4.5) (secondnode){other node} (firstnode.-70) edge[point,->,dashed,visible on=<7>, dimmed on=<8-10>,bend left=10] node[right of=2pt,visible on=<7>, dimmed on=<8-10>] {requests...} (secondnode.120); \node[peer,visible on=<3-9>, dimmed on=<10->] at (4,-6) (closestnode){closest node} (secondnode.-110) 
edge[point,->,visible on=<8>, dimmed on=<9>,out=-110,in=0] node[right of=2pt,visible on=<8>, dimmed on=<9>] {request} (closestnode.east) (closestnode) edge[thick,point,->,visible on=<9>] node[above=2pt,visible on=<9>]{deliver} (secondnode) (secondnode.north west) edge[thick, point,->,visible on=<10>, dashed] node[left of=2pt,visible on=<10>]{deliveries} (firstnode.-120) (firstnode.55) edge[thick,point,->,visible on=<11>] node[left of=2pt,visible on=<11>]{deliver} (connectedpeer.-150) (connectedpeer) edge[thick,point,->,visible on=<12>] node[above=2pt,visible on=<12>]{deliver} (retriever) ; \node[chunk, scale=0.6,visible on=<3->, below=3pt of closestnode.-50] {}; \node[chunk, scale=0.6,visible on=<9->, below=3pt of secondnode.-50] {}; \node[chunk, scale=0.6,visible on=<10->, below=3pt of firstnode.-50] {}; \node[chunk, scale=0.6,visible on=<11->, below=3pt of connectedpeer.-50] {}; \node[chunk, scale=0.6,visible on=<12->, below=3pt of retriever.-50] {}; \end{tikzpicture} \end{frame} \subsection[swap]{paying for data} \plainblockslide{}{\textsc{swap}: \textbf{sw}arm \textbf{a}ccounting \textbf{p}rotocol} \begin{frame}{\textsc{swap}: swarm accounting protocol} \begin{columns}[T] \begin{column}{0.5\textwidth} \uncover<2->{\begin{block}{per-peer bandwidth accounting} keeps track of all data retrieved both directions \end{block} } \uncover<9->{ \begin{block}{settlement} service for service or tally too imbalanced $\rightarrow$ a \emph{payment} is initiated \end{block} } \end{column} \begin{column}{0.5\textwidth} \begin{center} \begin{tikzpicture} \node[peer,visible on=<3->] at (-2,0) (node1) {me}; \node[peer,visible on=<4->] at (2,0) (node2) {peer} (node1.50) edge[point,->,dashed,visible on=<5->,bend right=30,out=50,in=130] node[below=10pt,visible on=<7->,scale=0.8] (sup){data delivered} (node2.130) (node2.-130) edge[point,->,dashed,visible on=<6->,bend left=30,in=130,out=50] node[above=10pt,visible on=<9->,scale=0.8]{data received} (node1.-50) ; \node[visible on=<8->,below= 2mm of sup]{\Large{-}}; \end{tikzpicture} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{chequebook vs payment channel} \begin{itemize} \item \emph{not feasible} to pay for every chunk of data delivered with a transation \item<2-> even batch payments would constitute unacceptable blockchain bloat (and transaction cost). \end{itemize} \uncover<3->{instead of processing every payment on-chain, SWAP employs a \emph{chequebook} smart contract:} \begin{itemize} \item<4-> cheques are passed between connected swarm nodes (peers) off-chain. \item<4-> peers can cash in (process on-chain) the received cheques at any time. \item<4-> issued cheques are \emph{cumulative}, i.e., \textbf{only the last cheque needs to be cashed for settlement}. 
\end{itemize} \uncover<5>{\textsc{swap} will soon also be usable via \emph{payment channels} (see Raiden).} \end{frame} \begin{frame}{chequebook vs payment channel} \begin{overlayarea}{\textwidth}{5.2cm} \setbeamercovered{transparent}% Dim out "inactive" elements \begin{columns}[t] \column{0.5\textwidth} \begin{block}{chequebook} \textbf{pro:} \begin{itemize} \item<1>{offchain payments} \item<2>{low barrier to entry (pay anyone)} \end{itemize} \textbf{con:} \begin{itemize} \item<3>{cheques can bounce (payment not guaranteed)} \end{itemize} \end{block} \column{0.5\textwidth} \begin{block}{channel} \textbf{pro:} \begin{itemize} \item<1>{offchain payments} \item<3>{secure - payments guaranteed} \end{itemize} \textbf{con:} \begin{itemize} \item<2>{high barrier to entry (must first join channel network)} \end{itemize} \end{block} \end{columns} \end{overlayarea} \end{frame} \begin{frame} \begin{block}{SWARM + SWAP demonstrates} \begin{itemize} \item programmable incentives \item drive towards low latency retrieval \item auto-scaling delivery network \end{itemize} \end{block} \end{frame} \begin{frame}{swarm CDN is auto-scaling} \begin{tikzpicture} \node[peer,visible on=<1->] at (12,0) (retriever){}; \node[visible on=<1->] (chunk) at (4,-5) {$\bullet$}; \node[visible on=<1->] (chunklabel) at (3,-4) {data address} edge[point,->,visible on =<1->] (chunk); \node[node,visible on=<2->,] at (8.5,-0.5) (connectedpeer){} (retriever.-150) edge[point, ->,visible on=<4-8>, bend left=15] node[below=2pt,visible on=<4>] {request} (connectedpeer.-10); \node[node,visible on=<2->] at (6,-2) (firstnode){} (connectedpeer.-70) edge[point,->,visible on=<4-7>,bend left=30] node[right of=2pt,visible on=<4>] {request} (firstnode.east); \node[node,visible on=<2->] at (7.5,-4.5) (secondnode){} (firstnode.-70) edge[point,->,dashed,visible on=<4-6>,bend left=10] node[right of=2pt,visible on=<4>] {requests...} (secondnode.120); \node[peer,visible on=<2->] at (4,-6) (closestnode){} (secondnode.-110) edge[point,->,visible on=<4-5>,out=-110,in=0] node[right of=2pt,visible on=<4>] {request} (closestnode.east) (closestnode) edge[thick,point,->,visible on=<5>] (secondnode) (secondnode.north west) edge[thick, point,->,visible on=<6>, dashed] (firstnode.-120) (firstnode.55) edge[thick,point,->,visible on=<7>] (connectedpeer.-150) (connectedpeer) edge[thick,point,->,visible on=<8>] (retriever) ; \node[chunk, scale=0.6,visible on=<3->, below=3pt of closestnode.-50] {}; \node[chunk, scale=0.6,visible on=<5->, below=3pt of secondnode.-50] {}; \node[chunk, scale=0.6,visible on=<6-9>, below=3pt of firstnode.-50] {}; \node[chunk, scale=0.6,visible on=<7-9>, below=3pt of connectedpeer.-50] {}; \node[chunk, scale=0.6,visible on=<8->, below=3pt of retriever.-50] {}; \node[peer, visible on=<11->] at (12,-6) (newguy){}; \node[node, visible on=<11->] at (11,-4) (newnode){} (newguy) edge[point,->,visible on=<12-15>,dashed,bend left=20] (newnode) (newnode) edge[point,->,visible on=<13-14>,bend left=20] (secondnode) ; \node[chunk, scale=0.6,visible on=<14->, below=3pt of newnode.-50] {} (secondnode) edge[thick,point,->,visible on=<14>] (newnode); \node[chunk, scale=0.6,visible on=<15->, below=3pt of newguy.-50] {} (newnode) edge[thick,point,->,visible on=<15>] (newguy); \node[visible on=<16>,scale=4] at (3,-1) {\Smiley}; \end{tikzpicture} \end{frame} \end{section} \begin{section}{content storage} \subsection[pay-as-you-store]{deferred payments and proof-of-custody} \begin{frame}{} SWAP allows for speedy retrieval of \emph{popular content}, but there 
is \textbf{no guarantee that less popular content will remain available}. Whatever is not accessed for a long time is likely to be deleted.\\[5mm]
The first step: change the swarm's incentives by \textbf{paying nodes to store your content}.
\end{frame}

\begin{frame}{payment for proof-of-custody}
The basic idea:
\begin{enumerate}
\item commit in advance to paying for data to be available in the swarm.
\item over time, challenge the swarm to provide proof that the data is still available: request \emph{proof-of-custody}.
\item every valid proof-of-custody releases the next payment installment to the storing nodes.
\end{enumerate}
\begin{block}{Remember:}
The \textbf{proof-of-custody} here is a small message -- a single hash -- which cryptographically proves that the issuer has access to the data.
\end{block}
\end{frame}

\begin{frame}{proof of custody + payment channel}
These deferred payments constitute a \textbf{conditional escrow}: payment is made up-front, payment is held (escrow) and is only released when a valid proof-of-custody is received (condition).\\[5mm]
This procedure can be handled off-chain and can be directly \textbf{integrated into the payment channels.} All you need is a payment-channel \emph{judge contract} that can understand swarm storage receipts.
\end{frame}

\subsection[insurance]{storage insurance and negative incentives}

\begin{frame}{If data goes missing...}
If data goes missing, nodes will lose potential revenue for no longer being able to generate proofs-of-custody, but there are \emph{no further consequences} (yet). \\[5mm]
Therefore, to complete the storage incentive scheme, we introduce an \emph{insurance system} that can \textbf{punish offending nodes for not keeping their storage promises}.
\end{frame}

\plainblockslide{}{\textsc{swear}: \textbf{sw}arm \textbf{e}nforcement of \textbf{a}rchiving \textbf{r}ules}

\begin{frame}{\textsc{swear} to store}
SWEAR is a smart contract that allows nodes to register as long-term storage nodes by posting a \textbf{security deposit}.\\[5mm]
Registered nodes can sell promissory notes guaranteeing long-term data availability -- essentially insurance against deletion. \\[5mm]
Implementation: swarm syncing process with added receipts.
\end{frame}

\begin{frame}{the syncing process}
\begin{columns}[T]
\begin{column}{0.4\textwidth}
\only<1-10>{
\begin{block}{syncing}
\begin{itemize}
\item<3-> chunks to be stored at the nodes whose address is closest to the chunk ID
\item<5-> relaying: syncing
\item<7-> data is passed on from node to node
% \item<10->[] %dummy. I needed this to get line spacing to work in the point above.
\end{itemize}
\end{block}
}
\uncover<11>{\frametitle{insured upload to swarm}}
\only<11->{
\begin{block}{insured storage}
syncing via registered nodes with each swap receipted.
\end{block} \begin{block}{insured storage:} \begin{itemize} \item owner passes data to a registered peer and receives an insurance receipt \item relaying: syncing \item all receipts are accounted and paid for \end{itemize} \end{block} } \end{column} \begin{column}{0.6\textwidth} \begin{tikzpicture} \node[scale=0.7,visible on=<1-10>] at (0,0) { \begin{tikzpicture} \node[invisible] at (0,-9) (dummy){}; \node[invisible] at (5,-5) (dummy2){}; \node[peer,visible on=<2->] at (-2,0) (owner){owner}; \node[visible on=<3->] (chunk) at (4,-5) {$\bullet$}; \node[visible on=<3->] (chunklabel) at (5,-4) {chunk address} edge[point,->,visible on =<3->] (chunk); \node[peer,visible on=<5->] at (0.5,-0.5) (connectedpeer){peer} (owner.east) edge[point, ->,visible on=<6->, bend left=15] node[above=2pt,visible on=<6->] {sync} (connectedpeer.150); \node[node,visible on=<7->] at (1,-2) (firstnode){some node} (connectedpeer.-70) edge[point,->,visible on=<7->,bend left=10] node[right of=2pt,visible on=<7->] {sync} (firstnode.north); \node[node,visible on=<8->] at (0.5,-4.5) (secondnode){other node} (firstnode.-70) edge[point,->,dashed,visible on=<8->,bend left=10] node[right of=2pt,visible on=<8->] {syncing...} (secondnode.70); \node[peer,visible on=<4->] at (4,-6) (closestnode){closest node} (secondnode.-110) edge[point,->,visible on=<9->,out=-110,in=180] node[above=2pt,visible on=<9->] {sync} (closestnode.west); \end{tikzpicture} }; \node[scale=0.7,visible on=<12->] at (0,0) { \begin{tikzpicture} \node[invisible] at (0,-9) (dummy){}; \node[invisible] at (5,-5) (dummy2){}; \node[peer,visible on=<11->] at (-2,0) (owner){owner}; \node[visible on=<12->] (chunk) at (4,-5) {$\bullet$}; \node[visible on=<12->] (chunklabel) at (5,-4) {chunk address} edge[point,->,visible on =<11->] (chunk); \node[peer,visible on=<12->] at (0.5,-0.5) (connectedpeer){peer} (owner.east) edge[point, ->,visible on=<12->, bend left=15] node[above=2pt,visible on=<12->] {store} (connectedpeer.150) (connectedpeer.-170) edge[point,->,visible on=<12->,thick,bend right=15] (owner.-20); \node[node,visible on=<12->] at (1,-2) (firstnode){some node} (connectedpeer.-70) edge[point,->,visible on=<12->,bend left=10] node[right of=2pt,visible on=<12->] {store} (firstnode.north) (firstnode.140) edge[point,->,visible on=<12->,thick,bend right=10] node[left of=2pt,visible on=<12->] {receipts} (connectedpeer.-140); \node[node,visible on=<12->] at (0.5,-4.5) (secondnode){other node} (firstnode.-70) edge[point,->,dashed,visible on=<12->,bend left=10] node[right of=2pt,visible on=<12->] {store...} (secondnode.70) (secondnode.110) edge[point,->,visible on=<12->,dashed,thick,bend right=10] node[left of=2pt,visible on=<12->] {receipts} (firstnode.-110); \node[peer,visible on=<12->] at (4,-6) (closestnode){closest node} (secondnode.-110) edge[point,->,visible on=<12->,out=-110,in=180] node[above=2pt,visible on=<12->] {store} (closestnode.west) (closestnode.-160) edge[point,->,visible on=<12->,thick,out=180,in=-110] node[left of=2pt,visible on=<12->] {receipt} (secondnode.-130); \end{tikzpicture} }; \end{tikzpicture} \end{column} \end{columns} \end{frame} \plainblockslide{}{\textsc{swindle}: \textbf{s}torage \textbf{w}ith \textbf{in}surance \textbf{d}eposit, \textbf{l}itigation and \textbf{e}scrow } \blockslide[\textsc{swindle}]{TL;DR}{if insured data is lost, the storers lose their deposit} \begin{frame}{litigation upon data loss} \begin{block}{if insured data is not found} %If insured data is lost, anyone holding a valid receipt can launch the litigation 
procedure.\\ %A node so challenged may defend itself by presenting litigation by challenge \end{block} \begin{block}{defense by providing} \begin{itemize} \item proof-of-custody of the data (eventually the data itself) \item a storage receipt for the data, shifting the blame and implicating another node as the culprit. \end{itemize} \end{block} \begin{block}{upload and disappear} \begin{itemize} \item swear to sync and receipting $\rightarrow$ immediate settlement with the peer at upload \item finger-pointing along chain of receipts $\rightarrow$ correct accountability of storer thereafter \end{itemize} \end{block} %Only at the time of litigation is the chain from owner to storer explicitly determined. %This is an important feature -- litigation may take time but initial storage can be fast allowing you to `upload and disappear'. \end{frame} \begin{frame} \begin{center} \textsc{ swap $\bullet$ swear $\bullet$ swindle } \end{center} \end{frame} \begin{frame}{ethersphere orange paper series} \begin{block}{} Viktor Trón, Aron Fischer, Dániel Nagy A and Zsolt Felföldi, Nick Johnson: swap, swear and swindle: incentive system for swarm. May 2016 \end{block} \begin{block}{} Viktor Trón, Aron Fischer, Nick Johnson: smash-proof: auditable storage for swarm secured by masked audit secret hash. May 2016 \end{block} \end{frame} \end{section} \begin{frame}{swarm: status and usage} \begin{block}{what is the development status of swarm?} \begin{enumerate} \item golang implementation: proof-of-concept iteration 2 release 4, code has been merged to go-ethereum develop branch \item Microsoft Azure hosting a testnet of 100+ nodes over 3 regions \item expanding team, come join or contribute \end{enumerate} \end{block} \begin{block}{how can swarm be used?} \begin{itemize} \item \texttt{bzzd} - swarm daemon, communicates with ethereum via IPC, so any ethereum client works \item APIs: JSON RPC (via websockets, http, or ipc), http proxy, cli, fuse driver (planned) \item API bindings: web3.js and CLI \end{itemize} \end{block} \end{frame} \begin{frame}[plain]{join us} \begin{block}<1->{contact and contribute} \begin{description} \item[swarm channel:] \texttt{gitter.im/ethereum/swarm} \item[swarm info page \& orange papers:] \texttt{swarm-gateways.net} \item[swarm gateway:] \texttt{swarm-gateways.net} \texttt{web3.download} \end{description} \end{block} \begin{block}{} \begin{itemize} \footnotesize \item Daniel Nagy A., Nick Johnson, Viktor Trón, Zsolt Felföldi (core team) \item Aron Fischer \& Ethersphere orange lounge group \item Ram Devish, Bas van Kervel, Alex van der Sande (Mist integration) \item Felix Lange (integration, devp2p) \item Alex Beregszaszi (git, mango) \item Igor Shadurin (file manager dapp) \item Nick Johnson, Alex van der Sande (Ethereum Name Service) \item Gavin Wood, Vitalik Buterin, Jeffrey Wilcke (visionaries) \end{itemize} \end{block} \end{frame} \end{document}
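To illustrate the per-peer accounting idea behind SWAP summarised in these slides, here is a minimal, hedged Python sketch of a bandwidth tally that nets service against service and only triggers a settlement (for example, sending a cumulative cheque) once the imbalance passes a threshold. Names and threshold values are illustrative and do not correspond to the actual go-ethereum implementation.

\begin{lstlisting}[language=Python, caption={Illustrative sketch: SWAP-style per-peer bandwidth accounting.}]
class PeerAccount:
    """Tracks chunks delivered to and received from a single peer."""

    def __init__(self, payment_threshold=1000):
        self.balance = 0  # positive: the peer owes us service; negative: we owe the peer
        self.payment_threshold = payment_threshold

    def delivered_to_peer(self, n_chunks=1):
        self.balance += n_chunks
        return self._maybe_settle()

    def received_from_peer(self, n_chunks=1):
        self.balance -= n_chunks
        return self._maybe_settle()

    def _maybe_settle(self):
        # service is normally paid for with service; only a tally that is too
        # imbalanced triggers an actual payment (cheque or channel transfer)
        if self.balance <= -self.payment_threshold:
            amount = -self.balance
            self.balance = 0
            return ("issue_cheque", amount)
        return None
\end{lstlisting}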
{ "alphanum_fraction": 0.6849246483, "avg_line_length": 38.9337231969, "ext": "tex", "hexsha": "2a71c8b0233a0b29c4d92e735c26b8b96dbef3bd", "lang": "TeX", "max_forks_count": 18, "max_forks_repo_forks_event_max_datetime": "2020-06-30T18:51:51.000Z", "max_forks_repo_forks_event_min_datetime": "2016-11-15T06:55:57.000Z", "max_forks_repo_head_hexsha": "94b44093d92ca2d906cfae090c40fe9cb99dc3db", "max_forks_repo_licenses": [ "CC-BY-2.0" ], "max_forks_repo_name": "vaporsphere-staging/swarm-docs", "max_forks_repo_path": "slides/devcon2-sw3/devcon2.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "94b44093d92ca2d906cfae090c40fe9cb99dc3db", "max_issues_repo_issues_event_max_datetime": "2021-03-18T17:00:53.000Z", "max_issues_repo_issues_event_min_datetime": "2018-08-09T14:26:48.000Z", "max_issues_repo_licenses": [ "CC-BY-2.0" ], "max_issues_repo_name": "vaporsphere-staging/swarm-docs", "max_issues_repo_path": "slides/devcon2-sw3/devcon2.tex", "max_line_length": 216, "max_stars_count": 14, "max_stars_repo_head_hexsha": "94b44093d92ca2d906cfae090c40fe9cb99dc3db", "max_stars_repo_licenses": [ "CC-BY-2.0" ], "max_stars_repo_name": "vaporsphere-staging/swarm-docs", "max_stars_repo_path": "slides/devcon2-sw3/devcon2.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-27T09:08:13.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-12T14:59:18.000Z", "num_tokens": 6562, "size": 19973 }
% mainfile: ../../../../master.tex
\subsection{Correcting an Error in the Indices}
% The part of the label after the colon must match the file name. Otherwise,
% conditional compilation based on task labels does NOT work.
\label{task:20150121_jkn0}
\tags{bugFix}
\authors{jkn}
\files{preamble.tex, master.tex, diary.macro.tex}
\persons{Christer Peter Volk}

Christer pointed out that the hyperref links to the indices no longer worked and that the indices were not at the same levels as the parts in the table of contents. It turned out that the cause of this was that the {\tt splitidx} package apparently no longer works with hyperref in recent versions of TeX Live. Therefore, I have decided to use the {\tt imakeidx} package instead, as it seems to work well. The newest version (1.0.4) contains this update to how the indices are generated.
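For reference, a minimal preamble sketch of the replacement; the index names, titles and options shown here are only an assumption, and the macros actually used in {\tt preamble.tex} and {\tt diary.macro.tex} may differ:

\begin{verbatim}
% preamble.tex (sketch): imakeidx instead of splitidx, loaded before hyperref
\usepackage{imakeidx}
\makeindex[name=tags,    title={Tag Index},    columns=2, intoc]
\makeindex[name=persons, title={Person Index}, columns=2, intoc]
\usepackage{hyperref}

% in an entry:   \index[tags]{bugFix}  \index[persons]{Christer Peter Volk}
% in master.tex: \printindex[tags]     \printindex[persons]
\end{verbatim}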
{ "alphanum_fraction": 0.7758215962, "avg_line_length": 77.4545454545, "ext": "tex", "hexsha": "975812d2168fdd9a172dd61a275ec32f02d4b409", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-03-22T11:33:57.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-18T10:42:59.000Z", "max_forks_repo_head_hexsha": "460869ca28c5515b28e7a1c3a44c61e375fd96c0", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "jkjaer/latexResearchDiary", "max_forks_repo_path": "entries/2015/01/21/20150121_jkn0.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "460869ca28c5515b28e7a1c3a44c61e375fd96c0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "jkjaer/latexResearchDiary", "max_issues_repo_path": "entries/2015/01/21/20150121_jkn0.tex", "max_line_length": 494, "max_stars_count": 6, "max_stars_repo_head_hexsha": "460869ca28c5515b28e7a1c3a44c61e375fd96c0", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "jkjaer/latexResearchDiary", "max_stars_repo_path": "entries/2015/01/21/20150121_jkn0.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T12:17:34.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-07T07:52:06.000Z", "num_tokens": 204, "size": 852 }
\section{\module{telnetlib} --- Telnet client} \declaremodule{standard}{telnetlib} \modulesynopsis{Telnet client class.} \sectionauthor{Skip Montanaro}{[email protected]} \index{protocol!Telnet} The \module{telnetlib} module provides a \class{Telnet} class that implements the Telnet protocol. See \rfc{854} for details about the protocol. In addition, it provides symbolic constants for the protocol characters (see below), and for the telnet options. The symbolic names of the telnet options follow the definitions in \code{arpa/telnet.h}, with the leading \code{TELOPT_} removed. For symbolic names of options which are traditionally not included in \code{arpa/telnet.h}, see the module source itself. The symbolic constants for the telnet commands are: IAC, DONT, DO, WONT, WILL, SE (Subnegotiation End), NOP (No Operation), DM (Data Mark), BRK (Break), IP (Interrupt process), AO (Abort output), AYT (Are You There), EC (Erase Character), EL (Erase Line), GA (Go Ahead), SB (Subnegotiation Begin). \begin{classdesc}{Telnet}{\optional{host\optional{, port}}} \class{Telnet} represents a connection to a Telnet server. The instance is initially not connected by default; the \method{open()} method must be used to establish a connection. Alternatively, the host name and optional port number can be passed to the constructor, to, in which case the connection to the server will be established before the constructor returns. Do not reopen an already connected instance. This class has many \method{read_*()} methods. Note that some of them raise \exception{EOFError} when the end of the connection is read, because they can return an empty string for other reasons. See the individual descriptions below. \end{classdesc} \begin{seealso} \seerfc{854}{Telnet Protocol Specification}{ Definition of the Telnet protocol.} \end{seealso} \subsection{Telnet Objects \label{telnet-objects}} \class{Telnet} instances have the following methods: \begin{methoddesc}{read_until}{expected\optional{, timeout}} Read until a given string, \var{expected}, is encountered or until \var{timeout} seconds have passed. When no match is found, return whatever is available instead, possibly the empty string. Raise \exception{EOFError} if the connection is closed and no cooked data is available. \end{methoddesc} \begin{methoddesc}{read_all}{} Read all data until \EOF; block until connection closed. \end{methoddesc} \begin{methoddesc}{read_some}{} Read at least one byte of cooked data unless \EOF{} is hit. Return \code{''} if \EOF{} is hit. Block if no data is immediately available. \end{methoddesc} \begin{methoddesc}{read_very_eager}{} Read everything that can be without blocking in I/O (eager). Raise \exception{EOFError} if connection closed and no cooked data available. Return \code{''} if no cooked data available otherwise. Do not block unless in the midst of an IAC sequence. \end{methoddesc} \begin{methoddesc}{read_eager}{} Read readily available data. Raise \exception{EOFError} if connection closed and no cooked data available. Return \code{''} if no cooked data available otherwise. Do not block unless in the midst of an IAC sequence. \end{methoddesc} \begin{methoddesc}{read_lazy}{} Process and return data already in the queues (lazy). Raise \exception{EOFError} if connection closed and no data available. Return \code{''} if no cooked data available otherwise. Do not block unless in the midst of an IAC sequence. \end{methoddesc} \begin{methoddesc}{read_very_lazy}{} Return any data available in the cooked queue (very lazy). 
Raise \exception{EOFError} if connection closed and no data available. Return \code{''} if no cooked data available otherwise. This method never blocks. \end{methoddesc} \begin{methoddesc}{read_sb_data}{} Return the data collected between a SB/SE pair (suboption begin/end). The callback should access these data when it was invoked with a \code{SE} command. This method never blocks. \versionadded{2.3} \end{methoddesc} \begin{methoddesc}{open}{host\optional{, port}} Connect to a host. The optional second argument is the port number, which defaults to the standard Telnet port (23). Do not try to reopen an already connected instance. \end{methoddesc} \begin{methoddesc}{msg}{msg\optional{, *args}} Print a debug message when the debug level is \code{>} 0. If extra arguments are present, they are substituted in the message using the standard string formatting operator. \end{methoddesc} \begin{methoddesc}{set_debuglevel}{debuglevel} Set the debug level. The higher the value of \var{debuglevel}, the more debug output you get (on \code{sys.stdout}). \end{methoddesc} \begin{methoddesc}{close}{} Close the connection. \end{methoddesc} \begin{methoddesc}{get_socket}{} Return the socket object used internally. \end{methoddesc} \begin{methoddesc}{fileno}{} Return the file descriptor of the socket object used internally. \end{methoddesc} \begin{methoddesc}{write}{buffer} Write a string to the socket, doubling any IAC characters. This can block if the connection is blocked. May raise \exception{socket.error} if the connection is closed. \end{methoddesc} \begin{methoddesc}{interact}{} Interaction function, emulates a very dumb Telnet client. \end{methoddesc} \begin{methoddesc}{mt_interact}{} Multithreaded version of \method{interact()}. \end{methoddesc} \begin{methoddesc}{expect}{list\optional{, timeout}} Read until one from a list of a regular expressions matches. The first argument is a list of regular expressions, either compiled (\class{re.RegexObject} instances) or uncompiled (strings). The optional second argument is a timeout, in seconds; the default is to block indefinitely. Return a tuple of three items: the index in the list of the first regular expression that matches; the match object returned; and the text read up till and including the match. If end of file is found and no text was read, raise \exception{EOFError}. Otherwise, when nothing matches, return \code{(-1, None, \var{text})} where \var{text} is the text received so far (may be the empty string if a timeout happened). If a regular expression ends with a greedy match (such as \regexp{.*}) or if more than one expression can match the same input, the results are indeterministic, and may depend on the I/O timing. \end{methoddesc} \begin{methoddesc}{set_option_negotiation_callback}{callback} Each time a telnet option is read on the input flow, this \var{callback} (if set) is called with the following parameters : callback(telnet socket, command (DO/DONT/WILL/WONT), option). No other action is done afterwards by telnetlib. \end{methoddesc} \subsection{Telnet Example \label{telnet-example}} \sectionauthor{Peter Funk}{[email protected]} A simple example illustrating typical use: \begin{verbatim} import getpass import sys import telnetlib HOST = "localhost" user = raw_input("Enter your remote account: ") password = getpass.getpass() tn = telnetlib.Telnet(HOST) tn.read_until("login: ") tn.write(user + "\n") if password: tn.read_until("Password: ") tn.write(password + "\n") tn.write("ls\n") tn.write("exit\n") print tn.read_all() \end{verbatim}
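A second illustrative example, using \method{expect()} with a timeout; the host and the prompt strings are of course only assumptions about the remote side:

\begin{verbatim}
import telnetlib

HOST = "localhost"

tn = telnetlib.Telnet(HOST)
tn.set_debuglevel(1)

# Wait for one of two possible prompts, giving up after 10 seconds.
index, match, text = tn.expect(["login: ", "Username: "], 10)
if index == -1:
    # Timeout: nothing matched; text holds whatever was received so far.
    print "no login prompt seen, got %r" % text
else:
    print "matched prompt %r" % match.group(0)
tn.close()
\end{verbatim}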
{ "alphanum_fraction": 0.7670565302, "avg_line_length": 33.25, "ext": "tex", "hexsha": "c7a4226dccdc018a1c7b4b9272d285a1878c1cb5", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-07-18T21:33:17.000Z", "max_forks_repo_forks_event_min_datetime": "2017-01-30T21:52:13.000Z", "max_forks_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "jasonadu/Python-2.5", "max_forks_repo_path": "Doc/lib/libtelnetlib.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "jasonadu/Python-2.5", "max_issues_repo_path": "Doc/lib/libtelnetlib.tex", "max_line_length": 72, "max_stars_count": 1, "max_stars_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "jasonadu/Python-2.5", "max_stars_repo_path": "Doc/lib/libtelnetlib.tex", "max_stars_repo_stars_event_max_datetime": "2018-08-21T09:19:46.000Z", "max_stars_repo_stars_event_min_datetime": "2018-08-21T09:19:46.000Z", "num_tokens": 1814, "size": 7182 }
\documentclass[11pt,fleqn]{article} % \usepackage{cs70,latexsym,epsf} \usepackage{latexsym,epsf,fleqn} \usepackage{amsmath,amsthm,amsfonts,amssymb} \usepackage{mathtools} \usepackage{array} \usepackage{booktabs} \usepackage{geometry} \geometry{ a4paper, total={170mm,257mm}, left=20mm, top=20mm, } \newcommand\Set[2]{\{\,#1\mid#2\,\}} \newcommand\underoverset[3]{\underset{#1}{\overset{#2}{#3}}} \newcommand{\mbf}[1]{\mbox{{\bfseries #1}}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \begin{document} \section*{CS 70 homework 1 solutions} Your full name: Joey Yandle \newline Your login name: dragon \newline Homework 1 \newline Your section number: 0 \newline Your list of partners: Galina Vinnik \newline \begin{enumerate} \item For each of the following, define proposition symbols for each simple proposition in the argument (for example, $P$ = ``I will ace this homework''). Then write out the logical form of the argument. If the argument form corresponds to a known inference rule, say which it is. If not, show that the proof is correct using truth tables. \begin{enumerate} \item I will ace this homework and I will have fun doing it. Therefore, I will ace this homework. \begin{align} P &= \text{``I will ace this homework''}\\ Q &= \text{``I will have fun doing it''}\\ P &\land Q \implies P \end{align} $P \land Q$ implies both $P$ and $Q$, and $P \implies P$. \begin{align} \therefore P \land Q \implies P\qed \end{align} \item It is hotter than 100 degrees today or the pollution is dangerous. It is less than 100 degrees today. Therefore, the pollution is dangerous. \begin{align} P &= \text{``It is hotter than 100 degrees today''}\\ Q &= \text{``The pollution is dangerous''}\\ \neg P &= \text{``It is less than 100 degrees today''}\\ Q &= \text{``The pollution is dangerous''}\\ (P &\lor Q) \land \neg P \implies Q \end{align} We know that not only is $(P \lor Q)$ true, but also $P$ is not true, so $Q$ must be true. \begin{align} \therefore (P \lor Q) \land \neg P \implies Q \qed \end{align} (This is ignoring the possibility of the temperature being exactly 100 degrees, in which case it is possible for the pollution to not be dangerous) \newpage \item Tina will join a startup next year. Therefore, Tina will join a startup next year or she will be unemployed. \begin{align} P &= \text{``Tina will join a startup next year''}\\ Q &= \text{``Tina will be unemployed''}\\ P &\implies (P \lor Q) \end{align} $P \implies P$, and $(P \lor Q)$ is true if $P$ or $Q$ is true. \begin{align} \therefore P \implies (P \lor Q) \qed \end{align} \item If I work all night on this homework, I will answer all the exercises. If I answer all the exercises, I will understand the material. Therefore, if I work all night on this homework, I will understand the material. \begin{align} P &= \text{``I work all night on this homework''}\\ Q &= \text{``I will answer all the exercises''}\\ R &= \text{``I will understand the material''}\\ (P &\implies Q) \land (Q \implies R) \iff P \implies R \end{align} $P \implies Q$, and $Q \implies R$. So if $P$ is true, $Q$ is true, and if $Q$ is true then $R$ is true. \begin{align} \therefore P \implies R\qed \end{align} \end{enumerate} \item Recall that $\N=\{0,1,\ldots\}$ denotes the set of natural numbers, and $\Z=\{\ldots,-1,0,1,\ldots\}$ denotes the set of integers. 
\begin{enumerate}
\item Define $P(n)$ by
\[
P(n) = \forall m \in \N , \; m<n \implies \neg (\exists k \in \N , \; n=mk \; \wedge \; k<n)
\]
Concisely, for which numbers $n\in\N$ is $P(n)$ true?

$\lnot P(n)$ holds exactly when $n$ can be written as $n = mk$ with $m, k \in \N$ both strictly smaller than $n$, i.e., exactly when $n$ is composite. Hence $P(n)$ is true precisely for the non-composite natural numbers: $n = 0$, $n = 1$ and the prime numbers.
\begin{align}
\therefore P(n)\;\text{is true}\;\forall n \in \{0, 1\} \cup \mathbb{P}\qed
\end{align}

\item Rewrite the following in a way that removes all negations (``$\neg$, $\ne$'') but remains equivalent.
\[
\forall i . \; \neg \forall j . \; \neg \exists k . \; (\neg \exists \ell . \; f(i,j) \ne g(k,\ell)).
\]
Pushing each negation inwards ($\neg\forall \rightarrow \exists\neg$, $\neg\exists \rightarrow \forall\neg$, and $\neg(f \ne g) \rightarrow f = g$) gives
\[
\forall i . \; \exists j . \; \exists k . \; \forall \ell . \; f(i,j) = g(k,\ell).
\]

\item Prove or disprove: $\forall m \in \Z . \; \exists n \in \Z . \; m \ge n$.

Let $n = m$. By reflexivity, $m \ge m$.
\begin{align}
\therefore \exists n \ni m \ge n
\end{align}

\item Prove or disprove: $\exists m \in \Z . \; \forall n \in \Z . \; m \ge n$.

Assume such an $m$ exists. Let $n = m+1$. This implies
\begin{align}
m &\ge m+1\\
0 &\ge 1
\end{align}
This is false, so by contradiction the proposition fails.
\end{enumerate}

\newpage

\item Alice and Bob are playing a game of chess, with Alice to move first. If $x_1,\dots,x_n$ represents a sequence of possible moves (i.e., first Alice will make move $x_1$, then Bob will make move $x_2$, and so on), we let $W(x_1,\dots,x_n)$ denote the proposition that, after this sequence of moves is completed, Bob is checkmated.
\begin{enumerate}
\item State using quantifier notation the proposition that Alice can force a checkmate on her second move, no matter how Bob plays.

Alice must commit to $x_1$ before seeing Bob's reply, while $x_3$ may depend on $x_2$, so the quantifiers must appear in move order:
\begin{align}
\exists x_1\;\forall x_2\;\exists x_3 \; W(x_1, x_2, x_3)
\end{align}

\item Alice has many possibilities to choose from on her first move, and wants to find one that lets her force a checkmate on her second move. State using quantifier notation the proposition that $x_1$ is \emph{not} such a move.

This is the negation of $\forall x_2\,\exists x_3\; W(x_1, x_2, x_3)$:
\begin{align}
\exists x_2\;\forall x_3\; \lnot W(x_1, x_2, x_3)
\end{align}
\end{enumerate}

\item Joan is either a knight or a knave. Knights always tell the truth, and only the truth; knaves always tell falsehoods, and only falsehoods. Someone asks Joan, ``Are you a knight?'' She replies, ``If I am a knight then I'll eat my hat.''
\begin{enumerate}
\item Must Joan eat her hat?
\begin{align}
P &= \text{``Joan is a knight''}\\
Q &= \text{``Joan will eat her hat''}\\
P &\implies Q
\end{align}
If Joan is a knight, then she speaks the truth, and so by her own statement she will eat her hat. If she is not a knight, then she speaks falsely, so we must negate her statement:
\begin{align}
\lnot (P &\implies Q) \iff (P \land \lnot Q)
\end{align}
So if Joan is a knave, her statement must be false, but a false $P \implies Q$ requires $P$ to be true, i.e., she must be a knight. So Joan cannot be a knave, and thus must eat her hat.

\item Let's set this up as a problem in propositional logic. Introduce the following propositions:
\begin{eqnarray}
P &=& \text{``Joan is a knight''}\\
Q &=& \text{``Joan will eat her hat''}.
\end{eqnarray}
Translate what we're given into propositional logic, i.e., re-write the premises in terms of these propositions. If Joan is a knight her statement is true, and if she is a knave her statement is false:
\begin{align}
P &\implies (P \implies Q)\\
\lnot P &\implies \lnot (P \implies Q)
\end{align}

\item Using proof by enumeration, prove that your answer from part (1) follows from the premises you wrote in part (2). (No inference rules allowed.)
The premises from part (2) are $P \implies (P \implies Q)$ and $\lnot P \implies (P \land \lnot Q)$. Enumerate all four assignments:

\begin{tabular}{cccc}
\toprule
$P$ & $Q$ & $P \implies Q$ & $P \land \lnot Q$ \\
\midrule
T & T & T & F \\
T & F & F & T \\
F & T & T & F \\
F & F & T & F \\
\bottomrule
\end{tabular}

The second row violates the first premise ($P$ holds but $P \implies Q$ fails), and the third and fourth rows violate the second premise ($\lnot P$ holds but $P \land \lnot Q$ fails). The only row consistent with both premises is $P = \mathrm{T}$, $Q = \mathrm{T}$, so $Q$ must be true: Joan will eat her hat.

\end{enumerate}
\newpage

\item For each claim below, prove or disprove the claim.

\begin{enumerate}

\item Every positive integer can be expressed as the sum of two perfect squares. (A perfect square is the square of an integer. 0 may be used in the sum.)
\begin{align}
a \in \Z,\; a > 0 &\implies \exists j,k \in \Z \ni a = j^2 + k^2\\
(a = j^2 + k^2) &\implies |j|,|k| \le \sqrt{a}
\end{align}
This cannot be true for all positive integers, since it is not true for $3$; the only non-negative integers $\le \sqrt{3}$ are $0$ and $1$, and the maximum sum of their squares is $2$.
\begin{align}
\therefore j^2 + k^2 \le 2 < 3\;\;\forall\, |j|,|k| \le \sqrt{3}
\end{align}
So $3$ is a counterexample and the claim is false.

\item For all rational numbers $a$ and $b$, $a^b$ is also rational.
\begin{align}
b \in \Q &\implies \exists j,k \in \Z,\; k \ne 0 \ni b = \frac{j}{k}\\
a = 2,\; b = \tfrac{1}{2} &\implies a^b = 2^{\frac{1}{2}} = \sqrt{2} \notin \Q\\
\therefore \exists a, b &\in \Q \ni a^b \notin \Q
\end{align}
More generally, for any prime $a$ and any integer $|k| > 1$, $a^{\frac{1}{k}}$ is irrational, so the claim is false.

\end{enumerate}

\end{enumerate}

\end{document}
{ "alphanum_fraction": 0.6661170651, "avg_line_length": 32.1628787879, "ext": "tex", "hexsha": "982ea4607eb5ad0bf7d63771de511b340db12c42", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "503dd8df6543a448e8df04d48de8c9d401e09534", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "xoloki/math", "max_forks_repo_path": "cs70/homework.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "503dd8df6543a448e8df04d48de8c9d401e09534", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "xoloki/math", "max_issues_repo_path": "cs70/homework.tex", "max_line_length": 170, "max_stars_count": null, "max_stars_repo_head_hexsha": "503dd8df6543a448e8df04d48de8c9d401e09534", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "xoloki/math", "max_stars_repo_path": "cs70/homework.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2861, "size": 8491 }
\documentclass[10pt]{beamer} \usetheme{Boadilla} % My favorite! \setbeamercovered{invisible} % To remove the navigation symbols from % the bottom of slides% \setbeamertemplate{navigation symbols}{} \setbeamertemplate{itemize items}[default] \setbeamertemplate{enumerate items}[default] \xdefinecolor{lavendar}{rgb}{0.2, 0.2, 0.72} % \usepackage{graphicx,epsfig} %\usepackage{bm} % For typesetting bold math (not \mathbold) %\logo{\includegraphics[height=0.6cm]{yourlogo.eps}} % \newcommand{\be}{\begin{equation*}} \newcommand{\ee}{\end{equation*}} \newcommand{\ba}{\begin{eqnarray}} \newcommand{\ea}{\end{eqnarray}} \newcommand{\vso}{\vskip15pt} \newcommand{\vst}{\vskip30pt} \newcommand{\nsub}{n_\mathrm{sub}} \def\smallfrac#1#2{\hbox{${{#1}\over {#2}}$}} \title[]{Impact of LHC data on (NN)PDFs} \author{Nathan Hartland} \institute { University of Edinburgh\\ %\includegraphics[height=2cm]{edinburghcrest.pdf} \medskip } % \today will show current date. % Alternatively, you can specify a date. % \date{\today} \begin{document} \renewcommand{\inserttotalframenumber}{14} \begin{frame} \begin{centering} \vskip20pt \center{\huge\color{lavendar} \textbf{Impact of LHC data on (NN)PDFs}} \vskip20pt Nathan Hartland\\ \small{The University of Edinburgh}\\ \vso \includegraphics[height=2cm]{edinburghcrest.pdf} \vskip10pt {\bf The NNPDF Collaboration:}\\ R.~D.~Ball, V.~Bertone, F.~Cerutti, C.~Deans, L.~Del~Debbio, \\S~Forte, A~Guffanti, N.H, J.I.~Latorre, J.~Rojo and M.~Ubiali. \vskip20pt XLVInd Rencontres de Moriond\\ La Thuile, Aosta valley, Italy\\ March 10th -March 17th, 2012 \end{centering} \end{frame} \section{Parton distribution fitting} \begin{frame} \frametitle{Parton distributions for the LHC} \be \sigma_X= \sum_{a,b} \int_0^1 dx_1dx_2 f_a(x_1,Q^2)f_b(x_2,Q^2)\sigma_{q_aq_b \to X} \left( x_1,x_2,Q^2 \right) \ee \begin{itemize} \item<1-> Need to have a reliable determination of PDFs for LHC physics. \item<1-> An accurate estimation of PDF uncertainties is crucial. \end{itemize} \begin{columns} \begin{column}{0.45\textwidth} \includegraphics[width=1.0\textwidth]{w+w-lhc7nlo68err.eps} \end{column} \begin{column}{0.55\textwidth} \includegraphics[width=1.0\textwidth]{ratiogglumi1_68cl.eps}\\ \quad { \centering \quad\quad \small \color{blue} G. Watt [hep-ph/1106.5788]}\\ \end{column} \end{columns} \end{frame} \begin{frame} \frametitle{NNPDF approach to parton fitting} \begin{itemize} \item<1->Use of Neural Networks as unbiased and extremely flexible interpolators. \begin{itemize} \item<1->Each PDF has 37 free parameters to vary in the fit. \item<1->Total of 259 free parameters minimises parametrisation bias. \end{itemize} \item<1->Monte Carlo approach to uncertainty estimation. \begin{itemize} \item<1->Perform an independent NN fit upon an ensemble of artificial data sets.\\ \item<1->Ensemble of PDF replicas faithfully represent the uncertainty in the original experimental data without the need for a tolerance criterion. \end{itemize} \end{itemize} \begin{columns} \begin{column}{0.25\textwidth} \be \langle\mathcal{O}\rangle=\smallfrac{1}{N}\,\sum_{k=1}^{N}\mathcal{O}[f_k]\, .\ee \end{column} \begin{column}{0.4\textwidth} \be {\mathrm{Var}}[\mathcal{O}]=\smallfrac{1}{N}\,\sum_{k=1}^{N}(\mathcal{O}[f_k] - \langle\mathcal{O}\rangle )^2 .\ee \end{column} \end{columns} \begin{figure}[b!] 
\begin{center} \includegraphics[width=0.45\textwidth]{glureplicas25.eps} \includegraphics[width=0.45\textwidth]{glureplicas100.eps} \end{center} \vskip-0.5cm \label{fig:pdf-jets} \end{figure} \end{frame} \begin{frame} \frametitle{ NNPDF collider only fits } \underline{Target}: An NNPDF Fit based only upon collider data \begin{itemize} \item<1-> Free of contamination from higher twists. \item<1-> No nuclear corrections required. \end{itemize} \begin{figure}[b!] \begin{center} \includegraphics[width=0.50\textwidth]{xT3_Q_2_log-nnpdf21nnlo-collider.eps} \includegraphics[width=0.50\textwidth]{xSinglet_Q_2_log-nnpdf21nnlo-collider.eps} \end{center} \vskip-0.5cm \label{fig:pdf-jets} \end{figure} \begin{itemize} \item<1->HERA + Tevatron data provide insufficient constraints in an NNPDF fit \end{itemize} \begin{itemize} \item<1-> LHC data will be crucial to properly constrain future collider only NNPDFs \end{itemize} \end{frame} \begin{frame} \frametitle{Including new experimental data} How can we add new LHC data to an existing parton set? \begin{itemize} \item<1-> Full Refit\\ \end{itemize} \underline{Tools}: APPLgrid/FastNLO projects $\to$ MC Weights on an interpolation grid \be W = \sum_p \sum_{l=0}^{\nsub} \sum_{i_{y_1}} \sum_{i_{y_2}} \sum_{i_\tau} W_{i_{y_1},i_{y_2},i_\tau}^{(p)(l)} \, \left( \frac{\alpha_s\left({Q^2}^{(i_\tau)}\right)}{2\pi}\right)^{p} F^{(l)}\left(x_1^{(i_{y_1})}, x_2^{(i_{y_1})}, {Q^2}^{(i_\tau)}\right) \ee Fast ... but can we get faster? $\to$ combine weight tables with FastKernel evolution: \be E^\tau_{\alpha\beta j k} = \int_{x_\alpha}^1 \frac{dy}{y}\Gamma_{ij}\left( \frac{x_\beta}{y},Q_0^2,Q_\tau^2 \right) \mathcal{I}^{(\beta)}(y). \ee \be f_i(x_{\alpha},Q^2_\tau) = \sum_j^{N_{\mathrm{pdf}}}R_{ij}N_j(x_{\alpha},Q_\tau^2) = \sum_{\beta}^{N_x} \sum_{j,k}^{N_{\mathrm{pdf}}} R_{ij}E^\tau_{\alpha\beta jk}N^0_k(x_\beta).\ee \begin{columns} \begin{column}{0.6\textwidth} Combined Weight-Evolution tables \begin{itemize} \item<1-> More of the calculation is precomputed\\ \item<1-> Smaller flavour basis at initial scale\\ \end{itemize} \end{column} \begin{column}{0.4\textwidth} % \begin{block}{} \be W= \sum_{\alpha,\beta}^{N_x}\sum_{i,j}^{N_{\mathrm{pdf}}} \sigma_{\alpha\beta i j}N_i^0(x_\alpha)N_j^0(x_\beta)\ee %\end{block} \end{column} \end{columns} \end{frame} \begin{frame} \frametitle{Including new experimental data} How can we add new LHC data to an existing parton set? \begin{itemize} \item<1-> Full Refit $\to$ Work in progress! \item<1-> Reweight existing Monte Carlo parton set. {\small \color{blue} Giele, Keller [hep-ph/9803393] }\\ \end{itemize} If the new data is statistically independent of the data in the prior set: \be \mathcal{P}_{\rm new}(f) = \mathcal{N}_{\chi}\mathcal{P}(\chi^2|f)\;\mathcal{P}_{\rm old}(f), \ee \be \langle\mathcal{O}\rangle_{\mathrm {new}}=\int \mathcal{O}[f] \, \mathcal{P}_{ \mathrm {new}}(f)\,Df=\smallfrac{1}{N}\,\sum_{k=1}^{N}w_k\mathcal{O}[f_k].\, \ee Weights determined by statistical inference \be w_k =\mathcal{N}_\chi\mathcal{P}(\chi^2|f_k) = \frac{(\chi^{2}_k)^{(n-1)/2} e^{-\frac{1}{2}\chi^{2}_k}} {\smallfrac{1}{N}\sum_{k=1}^{N}(\chi^{2}_k)^{(n-1)/2} e^{-\frac{1}{2}\chi^{2}_k}}\, .\ee Number of effective replicas reduced after reweighting: \be N_{\textrm{ eff}} \equiv \exp \left(\frac{1}{N_{\mathrm{rep}}}\sum_{k=1}^{N_{\mathrm{rep}}}w_k\ln(N_{\mathrm{rep}}/w_k)\right)\ee \center{ \small R.~D.~Ball {\it et al.} Nucl.\ Phys.\ B {\bf 849} 112 {\color{blue} [arXiv:1012.0836]}. 
} \end{frame} \begin{frame} \frametitle{Application: NNPDF2.2 Parton Set .} New data added by reweighting NNPDF2.1 Fit: $W$ leptonic charge asymmetry. \begin{centering} \center{\small R.~D.~Ball {\it et al}, Nucl.\ Phys.\ B {\bf 855} 608 {\color{blue} [arXiv:1108.1758] }.}\\ \end{centering}\vskip5pt Defined in terms of $W^{\pm}\to l^\pm\nu_l $ differential cross-sections $d\sigma_{l^\pm}/d\eta_l$ \be A^l_W=\frac{d\sigma_{l^{+}}/d\eta_{l}-d\sigma_{l^{-}}/d\eta_{l}} {d\sigma_{l^{+}}/d\eta_{l}+d\sigma_{l^{-}}/d\eta_{l}}, \ee \begin{itemize} \item<1-> ATLAS $\mu$ charge asymmetry. \hspace*{\fill { \color{blue} [arXiv:1103.2929]}} \item<1-> CMS $e+\mu$ charge asymmetry. \hspace*{\fill { \color{blue} [arXiv:1103.3470] }} \item<1-> D0 $e+\mu$ charge asymmetry. \hspace*{\fill { \color{blue} [arXiv:0709.4254]}} \end{itemize} \begin{figure}[h!] \centering \epsfig{width=0.44\textwidth,figure=xu-nnpdf22.eps} \epsfig{width=0.44\textwidth,figure=xd-nnpdf22.eps} \end{figure} \end{frame} \begin{frame} \frametitle{Updated LHC Data} \begin{itemize} \item<1-> LHC data in NNPDF2.2 now superseded \begin{itemize} \item<1-> Full covariance matrix available for the ATLAS W and Z rapidity distributions. \item<1-> Higher integrated luminosity 234 pb$^{-1}$ data for CMS muon asymmetry. \end{itemize} \end{itemize} \begin{itemize} \item<1-> Additional LHC Data \begin{itemize} \item<1-> $36$ pb$^{-1}$ Inclusive jet measurements (Full covariance matrix for ATLAS). \item<1-> $36$ pb$^{-1}$ LHCb Z rapidity distribution, W lepton asymmetry. \item<1-> {\color{blue}$840$pb$^{-1}$ CMS W electron asymmetry with full covariance matrix.} \item<1-> {\color{blue}$4.67$fb$^{-1}$ CMS Inclusive jet measurement.} \end{itemize} \end{itemize} \vso \begin{centering} \textbf { \footnotesize $\chi^2$ to electroweak vector boson production data}\\ \end{centering} \begin{table}[h] \scriptsize \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Dataset , $\chi^2$ & NNPDF2.1 & MSTW08 & ABKM09 & JR09 & HERAPDF1.5 \\ \hline \hline ATLAS W/Z Rapidity & 2.7 & 3.6 & 3.6 & 5.0 & 2.0\\ \hline CMS $\mu$ asym + Z Rap & 2.0 & 3.0 & 2.8 & 3.6& 2.8\\ \hline LHCb W asym + Z Rap & 0.8 & 0.7 & 1.2 & 0.4& 0.6\\ \hline \end{tabular} \end{center} \end{table} \begin{centering} \textbf { \footnotesize $\chi^2$ to inclusive jet data}\\ \end{centering} \begin{table}[h] \scriptsize \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Dataset, $\chi^2$ & NNPDF2.1 & MSTW08 & ABKM09 & JR09 & HERAPDF1.5 \\ \hline \hline ATLAS Incl. Jets $R=0.4$ & 0.93 & 1.18 & 1.41 & 1.63 & 1.21 \\ ATLAS Incl. 
Jets $R=0.6$ & 1.38 & 1.31 & 1.46 & 1.88 & 1.43 \\ \hline \end{tabular} \end{center} \end{table} \end{frame} \begin{frame} \frametitle{Impact of LHC EW vector boson data - ATLAS} \textbf{\footnotesize Preliminary reweighting results} \begin{table}[h] \scriptsize \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & $\chi^2$ & $\chi^2_{\rm rw} $ & $N_{\rm eff}$ \\ \hline \hline ATLAS & 2.7 & 1.2 & 16 \\ \hline ATLAS $W^+$ 36 pb$^{-1}$ & 5.7 & 1.5 & 17 \\ ATLAS $W^-$ 36 pb$^{-1}$ & 2.5 & 1.0 & 205 \\ ATLAS $Z$ 36 pb$^{-1}$ & 1.8 & 1.1 & 581 \\ \hline \end{tabular} \end{center} \end{table} \begin{center} \includegraphics[width=0.52\textwidth]{dat-th-atlaswp} \includegraphics[width=0.52\textwidth]{dat-th-atlasz}\\ \end{center} \end{frame} \begin{frame} \frametitle{Impact of LHC EW vector boson data - CMS} \textbf{\footnotesize Preliminary reweighting results} \begin{table}[h] \scriptsize \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & $\chi^2$ & $\chi^2_{\rm rw} $ & $N_{\rm eff}$ \\ \hline \hline CMS & 2.0 & 1.2 & 56 \\ \hline CMS Z rapidity 36 pb$^{-1}$ & 1.9 & 1.4 & 223 \\ CMS muon asymmetry 234 pb$^{-1}$ & 2.0 & 0.4 & 200 \\ \hline \end{tabular} \end{center} \end{table} \begin{center} \includegraphics[width=0.52\textwidth]{dat-th-cmswlasy} \includegraphics[width=0.52\textwidth]{dat-th-cmsz}\\ \end{center} \end{frame} \begin{frame} \frametitle{Impact of LHC EW vector boson data - LHCb} \textbf{\footnotesize Preliminary reweighting results} \begin{table}[h] \scriptsize \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & $\chi^2$ & $\chi^2_{\rm rw} $ & $N_{\rm eff}$ \\ \hline \hline LHCb & 0.8 & 0.8 & 972 \\ \hline LHCb Z rapidity 36 pb$^{-1}$ & 1.1 & 1.0 & 962 \\ LHCb $W$ lepton asymmetry 36 pb$^{-1}$ & 0.8 & 0.5 & 961 \\ \hline \end{tabular} \end{center} \end{table} \begin{center} \includegraphics[width=0.52\textwidth]{dat-th-lhcbw} \includegraphics[width=0.52\textwidth]{dat-th-lhcbz}\\ \end{center} \end{frame} \begin{frame} \frametitle{Impact of LHC EW vector boson data} \begin{centering} \small Ratio of $d$, $\bar{u}$ PDFs reweighted with ATLAS data to NNPDF2.1\\ \end{centering} \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{xd_Q2_10000_log-atlas.eps} \includegraphics[width=0.40\textwidth]{xubar_Q2_10000_log-atlas.eps}\\ \end{center} \end{figure} \begin{centering} \small Ratio of $d$, $\bar{u}$ PDFs reweighted with CMS data to NNPDF2.1\\ \end{centering} \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{xd_Q2_10000_log-cms.eps} \includegraphics[width=0.40\textwidth]{xubar_Q2_10000_log-cms.eps} \end{center} \end{figure} \end{frame} \begin{frame} \frametitle{Impact of ATLAS inclusive jet data} \textbf{\footnotesize Preliminary reweighting results} \begin{table}[h] \small \begin{center} \begin{tabular}{|c|c|c|c|} \hline Dataset & $\chi^2$ & $\chi^2_{\rm rw} $ & $N_{\rm eff}$ \\ \hline NNPDF2.1 NNLO + ATLAS Incl. Jets $R=0.4$ & 0.93 & 0.91 & 904 \\ NNPDF2.1 NNLO + ATLAS Incl. Jets $R=0.6$ & 1.42 & 1.24 & 610 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[h!] \centering \epsfig{width=0.8\textwidth,figure=xg_Q2_10000_lin-atlasR.eps} \end{figure} \end{frame} \begin{frame} \frametitle{Summary} \begin{itemize} \item<1-> NNPDF Parton Sets \begin{itemize}\footnotesize \item<1-> Neural Network parametrisation of PDFs. 
\\ { \color{blue} Redundant parametrisation for an unbiased fit.} \item<1-> Monte Carlo uncertainty determination.\\ { \color{blue}Faithful representation of the experimental uncertainties.} \end{itemize} \item<1-> Bayesian Reweighting \begin{itemize}\footnotesize \item<1-> Powerful technique for including new data into existing parton fits. \\ { \color{blue} Fast assessment of data impact.} \end{itemize} \item<1-> Impact of LHC data \begin{itemize}\footnotesize \item<1-> ATLAS $W/Z$ measurements:\\ { \color{blue} Substantial constraints, particularly from $W^{+}$ data.} \item<1-> CMS $W/Z$ measurements:\\ { \color{blue} Less constraining $\to$ full covariance matrix is unavailable.}\\ \item<1-> LHCb $W/Z$ measurements:\\ { \color{blue} Data does not yet provide significant constraint upon PDFs.}\\ \item<1-> ATLAS inclusive jet measurements:\\ { \color{blue} Moderate constraint upon gluon PDF.} \end{itemize} \end{itemize} \begin{centering} \textbf{ LHC data already providing significant constraints on parton distributions.}\\ \end{centering} \end{frame} \begin{frame} \begin{center} BACKUPS \end{center} \end{frame} \begin{frame} \frametitle{ATLAS Determination of $R_s$} \begin{centering} \small Ratio of strange to non-strange PDFs from a HERA + ATLAS W/Z production fit.\\ \end{centering} \begin{columns} \begin{column}{0.5\textwidth} \epsfig{width=1\textwidth,figure=ATLAS-rs.pdf} \end{column} \begin{column}{0.5\textwidth} \epsfig{width=1\textwidth,figure=NNPDF-rs.eps} \begin{itemize} \item<1-> No discrepancy observed with Rs for NNPDF collider +W/Z only fit \end{itemize} \end{column} \end{columns} \end{frame} \begin{frame} \frametitle{NNPDF2.1NLO/NNLO reweighted with ATLAS jets} \begin{figure}[h] \begin{center} \includegraphics[width=0.49\textwidth]{xg_Q2_10000_lin-atlasnloR.eps} \includegraphics[width=0.49\textwidth]{xg_Q2_10000_log-atlasnloR.eps}\\ \includegraphics[width=0.49\textwidth]{xg_Q2_10000_lin-atlasR.eps} \includegraphics[width=0.49\textwidth]{xg_Q2_10000_log-atlasR.eps} \label{pdfcomp-jets} \end{center} \end{figure} \end{frame} % % % %\begin{frame} %\small %\frametitle{Special features of NNPDF approach} %\textbf{Minimisation by genetic algorithms}\\ %\underline{Problem}: Very large parameter space, $\chi2$ highly nonlocal. \begin{itemize} %\item<1-> Minimisation is challenging. %\end{itemize}\underline{Solution}: Genetic Algorithms (GA) %\begin{itemize} %\item<1-> Generate mutations of fit parameters. %\item<1-> Select those mutations that minimise figure of merit. %\end{itemize} %\vskip10pt %\textbf{ Dynamical fit stopping by cross-validation}\\ %\underline{Problem}: extremely flexible parameterisations are prone to \emph{overfitting}. %\\ %\begin{itemize} %\item<1->Fit has so many parameters, the minimum $\chi^2$ corresponds to a fit not only to %the data, but also statistical noise. %\end{itemize} %\underline{Solution}: dynamical stopping by \emph{Cross Validation}. %\begin{itemize} %\item<1-> Split the dataset into a training set and a validation set. %\item<1-> Use the training set for minimisation, monitor the $\chi^2$ to the validation set. %\item<1-> Stop the fit when the $\chi^2$ to the validation set starts to increase while %the $\chi^2$ to training set is still decreasing. % %\end{itemize} % % % %\end{frame} % %\begin{frame} %\frametitle{Cross Validation} % \begin{figure}[b!] 
% \begin{center} % \includegraphics[width=0.9\textwidth]{chi2ite-1004-NMC-pd.eps} % \end{center} %\end{figure} %\end{frame} % %\begin{frame} %\frametitle{Unweighting procedure} %\begin{columns} % \begin{column}{0.5\textwidth} %\begin{figure} % \epsfig{width=0.7\textwidth,figure=unwplot-1.eps,angle=-90}\\ %\end{figure} % \end{column} % \begin{column}{0.5\textwidth} %\begin{figure} % \epsfig{width=0.7\textwidth,figure=unwplot-2.eps,angle=-90} %\end{figure} % \end{column} %\end{columns} %\end{frame} % % End of slides \end{document}
{ "alphanum_fraction": 0.6830017978, "avg_line_length": 30.1451048951, "ext": "tex", "hexsha": "38cf6e6616332f83df01655c9de0fcd9451ddc79", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8943b92407851aa0786dde9e061f33f403809a45", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "nhartland/talks", "max_forks_repo_path": "2012/Moriond2012/Moriond.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8943b92407851aa0786dde9e061f33f403809a45", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "nhartland/talks", "max_issues_repo_path": "2012/Moriond2012/Moriond.tex", "max_line_length": 190, "max_stars_count": null, "max_stars_repo_head_hexsha": "8943b92407851aa0786dde9e061f33f403809a45", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "nhartland/talks", "max_stars_repo_path": "2012/Moriond2012/Moriond.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6512, "size": 17243 }
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{mathtools} \usepackage{tcolorbox} \usepackage{float} \usepackage{amsfonts} \usepackage{svg} \date{} \usepackage{qcircuit} \title{\textbf{Quantum Computing: An Applied Approach}\\\vspace*{1cm} Chapter 8 Problems: Building a Quantum Computer } \begin{document} \maketitle \section{} The circuit model can be defined as the union of three distinct components: (1) a set of $n$ initialized input qubits, (2) a set of qubit lines consisting of one- or multi-qubit unitary gate operations, and (3) a set of up to $n$ observables to measure that project the qubit states onto the subspace spanned by $n$ classical bits. The query model, on the other hand, describes some function that maps $n$ input qubits onto $n$ output qubits via the action of some oracle. In this sense, it consists of an input state $\{|x_i\rangle\}_{i=1}^n$ and a corresponding collection of classical bits $\{c_i\}_{i=1}^n$. \section{} This proof appears relatively elementary. The query complexity involves up to a single oracle call on each of the $n$ input qubits. The circuit, or gate, complexity involves an unbounded number of operations on each of $n$ qubit lines. The query model complexity then represents a lower bound on the circuit complexity, as the circuit complexity involves at least $O(1)$ unitary operations on each qubit. \section{} There is a single single-qubit gate per line in the quantum Fourier transform (QFT), so there are $n$ single-qubit gates. The second part of the question clearly means to ask how many double-qubit gates are needed; there are $i-1$ double-qubit gates on the $i^\text{th}$ line yielding $\sum_{i=1}^{n-1}i=\boxed{\frac{n^2-n}{2}}$ gates overall. \section{} $n=1$ qubits: \newline \Qcircuit @C=1em @R=.7em { & \gate{H} & \qw } \newline\newline $n=2$ qubits: \newline \Qcircuit @C=1em @R=.7em { & \gate{H} & \gate{R_{\pi/2}} & \qw & \qw \\ & \qw & \ctrl{-1} & \gate{H} & \qw } \newline\newline $n=3$ qubits: \newline \Qcircuit @C=1em @R=.7em { & \gate{H} & \gate{R_{\pi/2}} & \gate{R_{\pi/4}} & \qw & \qw & \qw & \qw \\ & \qw & \ctrl{-1} & \qw & \gate{H} & \gate{R_{\pi/2}} & \qw & \qw \\ & \qw & \qw & \ctrl{-2} & \qw & \ctrl{-1} & \gate{H} & \qw } \section{} The intuition behind this proof is that, when initialized in the zero-state $|0\rangle^{\otimes n}$, all of the controlled-rotation gates reduce to the identity. Consequently, the QFT reduces to a set of one Hadamard operation per line yielding: $$ QFT_n|0\rangle^{\otimes n}=H^{\otimes n}|0\rangle ^{\otimes n} $$ as described. \section{} \subsection{} The approach of period finding proposed in this paper by Ekera and Hastad addresses the modular exponentation step and offers several different proposals for reducing the complexity of this step. Overall, especially for low-bit ($n$) inputs, their approach greatly reduces the number of Toffoli gates required and in doing so, lowers the overall number of qubits required to solve problems such as 2048-bit RSA encryption. \subsection{} In terms of the complexity of the modular exponentiation step, Shor's Algorithm involves a Toffoli gate count of $20n_en^2$, where $n_e$ is the number of modular multiplication operations to perform and $n$ is the number of input bits in the integer to be factored. This step dominates the complexity of Shor's Algorithm as a whole. \subsection{} The square-and-multiply approach begins with an initial register qubit $x$ in the state $|1\rangle$. 
Afterwards, a controlled modular multiplication is run for each exponent qubit $e_j$, with $j$ iterating from 0 to $n_e-1$. After this process, including multiple iterations of the modular multiplication subprocess, the initial qubit $x$ stores the exponentiated value $x=g^e$.

\subsection{}
Windowed arithmetic scales down the number of qubits needed for operations in intermediate steps, by replacing subprocesses such as modular multiplication with lookup functions that operate in clusters. In the example shown, a window size of 4 is used to look up classically known values for the expression $g^{e[4n:4(n+1)]2^{4n}}$, reducing computation time by a factor of 4. In this paper, this technique is used in both the modular addition and modular multiplication subroutines, modifying the Toffoli gate count from $4n^2n_e$ to $\frac{2n_en}{c_\text{mul}c_\text{exp}}(2n+2^{c_\text{mul}+c_\text{exp}})$. A tradeoff takes place, as the scalar decrease in the gate counts with the window sizes $c$ is balanced by the number of Toffoli gates involved in the lookup operation (factor of $2^{c_\text{mul}+c_\text{exp}}$).

\subsection{}
These implications are explored in Table 1. Although the complexity of the approach studied in this paper is asymptotically greater than the others, for small values of $n$ it reaches a practical baseline for gate complexity in solving 2048-bit RSA decryption around two orders of magnitude lower than the next best. Specifically, the minimum number of abstract qubits necessary to factor a 2048-bit key has been reduced from 340 in Fowler et al. (2012) to 2.7.

\section{}
Under the approximate QFT, there are at most $m$ two-qubit gates per line, and there are $n$ lines in total. There is also one single-qubit gate per line, yielding a gate complexity of $O(nm+n)=O(nm)$.

In determining the total number of gates exactly, first notice that the number of single-qubit gates remains the same at $n$. The number of double-qubit gates continues to increase by 1 per line beginning at 0, but stops at $m$, since rotations $\theta_{jk}$ with $j-k>m$ are dropped. There are then $\frac{m(m-1)}{2}$ double-qubit gates in the first $m$ lines, and $(n-m)m$ in the remaining $n-m$. This gives
$$
\frac{m^2-m}{2}+\frac{2m(n-m)}{2}=\frac{m(2n-m-1)}{2}
$$
double-qubit gates; including the $n$ single-qubit gates, the total is
$$
\boxed{n + \frac{m(2n-m-1)}{2}}
$$
gates. Since $n>m$ is assumed, this expression has complexity $O(mn)$ as described. It disagrees with the expression listed in the problem set assignment, but matches my own calculations for various values of $m$ and $n$. For example, setting $m=2$ and $n=5$ for the AQFT yields a quantum circuit with 5 single-qubit gates and $0+1+2+2+2=7$ two-qubit gates. Assigning these values into the expression above, this yields $5+\frac{2(2\cdot 5-2-1)}{2}=5+7=12$ gates, as expected.

\end{document}
{ "alphanum_fraction": 0.7415147706, "avg_line_length": 58.9259259259, "ext": "tex", "hexsha": "210111bd136795310aee56668cd54f4d0685638f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c56ce4d1cfc9fcf0fc926e330bb28186cebdb799", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Alekxos/qc_applied_approach", "max_forks_repo_path": "chapter_8/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c56ce4d1cfc9fcf0fc926e330bb28186cebdb799", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Alekxos/qc_applied_approach", "max_issues_repo_path": "chapter_8/main.tex", "max_line_length": 446, "max_stars_count": 4, "max_stars_repo_head_hexsha": "c56ce4d1cfc9fcf0fc926e330bb28186cebdb799", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Alekxos/qc_applied_approach", "max_stars_repo_path": "chapter_8/main.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-14T18:28:57.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-20T18:48:59.000Z", "num_tokens": 1779, "size": 6364 }
\documentclass{article} \usepackage[margin=1in]{geometry} \usepackage{natbib} \usepackage{amsmath} \usepackage{graphicx} \usepackage{color} \usepackage[procnames]{listings} \usepackage{textcomp} \usepackage{setspace} \usepackage{palatino} \usepackage{enumitem} \usepackage{array} \setlist{nolistsep} \usepackage{floatrow} \usepackage{sidecap} \sidecaptionvpos{figure}{c} \usepackage{wrapfig} \usepackage{float} \usepackage{framed} \usepackage{caption} \definecolor{gray}{gray}{0.2} \definecolor{green}{rgb}{0,0.5,0} \definecolor{lightgreen}{rgb}{0,0.7,0} \definecolor{purple}{RGB}{160,90,180} \definecolor{darkred}{rgb}{0.6,0,0} \definecolor{orange}{rgb}{1,0.3,0} \definecolor{comments}{RGB}{94,148,148} %\definecolor{comments}{RGB}{102, 153, 153} \definecolor{medblue}{RGB}{65,105,225} \definecolor{lightgray}{rgb}{0.98, 0.98, 0.98} \definecolor{medgray}{rgb}{0.925, 0.925, 0.925} \definecolor{pink}{rgb}{0.8, 0, 0.8} \definecolor{orangey}{RGB}{232, 150, 25} \setlength\parindent{0pt} \usepackage[T1]{fontenc} % quotes \newcommand{\code}[1]{\texttt{\small{#1}}} %\newcommand{\code}[1]{\textbf{\texttt{\small{#1}}}} \usepackage{hyperref} \hypersetup{ colorlinks=true, %set true if you want colored links linktoc=all, %set to all if you want both sections and subsections linked linkcolor=blue, %choose some color if you want links to stand out urlcolor=blue } \lstset{ language=python, rulecolor=\color{black}, frame=leftline, backgroundcolor=\color{medgray}, numbers=left, numberstyle=\scriptsize, basicstyle=\ttfamily\small\setstretch{1.2}, stringstyle=\color{red}, showstringspaces=false, alsoletter={1234567890}, otherkeywords={\ , \}, \{}, breaklines=true, keywordstyle=\ttfamily, emph={access,and,as,break,class,continue,def,del,elif,else,% except,exec,finally,for,from,global,if,import,in,is,% lambda,not,or,pass,print,raise,return,try,while,assert}, emphstyle=\color{purple}\bfseries, emph={[2]self}, emphstyle=[2]\color{gray}, emph={[4]ArithmeticError,AssertionError,AttributeError,BaseException,% DeprecationWarning,EOFError,Ellipsis,EnvironmentError,Exception,% False,FloatingPointError,FutureWarning,GeneratorExit,IOError,% ImportError,ImportWarning,IndentationError,IndexError,KeyError,% KeyboardInterrupt,LookupError,MemoryError,NameError,None,% NotImplemented,NotImplementedError,OSError,OverflowError,% PendingDeprecationWarning,ReferenceError,RuntimeError,RuntimeWarning,% StandardError,StopIteration,SyntaxError,SyntaxWarning,SystemError,% SystemExit,TabError,True,TypeError,UnboundLocalError,UnicodeDecodeError,% UnicodeEncodeError,UnicodeError,UnicodeTranslateError,UnicodeWarning,% UserWarning,ValueError,Warning,ZeroDivisionError,abs,all,any,apply,% basestring,bool,buffer,callable,chr,classmethod,cmp,coerce,compile,% complex,copyright,credits,delattr,dict,dir,divmod,enumerate,eval,% execfile,exit,filter,float,frozenset,getattr,globals,hasattr,% hash,help,hex,id,input,int,intern,isinstance,issubclass,iter,len,% license,list,locals,long,map,max,min,object,oct,open,ord,pow,property,% quit,range,raw_input,reduce,reload,repr,reversed,round,set,setattr,% slice,sorted,staticmethod,str,sum,super,tuple,unichr,unicode,% vars,xrange,zip}, emphstyle=[4]\color{purple}\bfseries, emph={[5]assign_name,num_classes,codon_model,read_tree,print_tree,% branch_het,site_het,compute_frequencies,Site,Evolver,Genetics,% MatrixBuilder,aminoAcid_Matrix,nucleotide_Matrix,mechCodon_Matrix,% mutSel_Matrix,ECM_Matrix,EvoModels,Model,Tree,Partition,% StateFrequencies,EqualFrequencies,RandomFrequencies,CustomFrequencies,% 
ReadFrequencies,EmpiricalModelFrequencies},
emphstyle=[5]\color{medblue}\bfseries,
upquote=true,
morecomment=[s][\color{comments}]{"""}{"""},
morecomment=[s][\color{comments}]{'''}{'''},
commentstyle=\color{comments}\slshape,
literate={>>>}{\textbf{\textcolor{comments}{>{>}>}}}3,
procnamekeys={def,class},
procnamestyle=\color{green}\textbf,
tabsize=4,
escapeinside={(*@}{@*)}
}

% Python for external files
\newcommand\pythonexternal[2][]{{ \lstinputlisting[#1]{#2}}}

% Python for inline
\newcommand\pythoninline[1]{{\lstinline!#1!}}

\begin{document}

\begin{titlepage}
\begin{center}
\textsc{\Huge Pyvolve User Manual}\\[1cm]
{\huge Stephanie J. Spielman}\\[0.5cm]
{\large Email: [email protected]}
\vspace*{1.5cm}
\includegraphics[width=1.5in]{pyvolve_logo_manual.png}
\end{center}
\end{titlepage}

\newpage
\tableofcontents
\newpage

\section{Introduction}

Pyvolve is an open-source Python module for simulating genetic data along a phylogeny using Markov models of sequence evolution, according to standard methods \cite{Yang2006}. Pyvolve is freely available under a FreeBSD license and is hosted on github: \href{http://sjspielman.org/pyvolve/}{http://sjspielman.org/pyvolve/}. Pyvolve has several dependencies, including \href{http://biopython.org/wiki/Download}{BioPython}, \href{http://www.scipy.org/install.html}{NumPy}, and \href{http://www.scipy.org/install.html}{SciPy}. These modules must be properly installed and in your Python path. Please file any and all bug reports on the github repository \href{https://github.com/sjspielman/pyvolve/issues}{Issues} section.

Pyvolve is written such that it can be seamlessly integrated into your Python pipelines without having to interface with external software platforms. However, please note that for extremely large ($>$10,000 taxa) and/or extremely heterogeneous simulations (e.g. where each site evolves according to a unique evolutionary model), Pyvolve may be quite slow and thus may take several minutes or more to run. Faster sequence simulators you may find useful are detailed in ref.\ \citep{Arenas2012}, which gives an overview of various sequence simulation software packages (as of 2012).

\vspace*{1cm}

Pyvolve supports a variety of evolutionary models, including the following:
\begin{itemize}
\item Nucleotide Models
\begin{itemize}
\item Generalized time-reversible model \cite{GTR} and all nested variants
\end{itemize}
\item Amino-acid exchangeability models
\begin{itemize}
\item JTT \cite{JTT}, WAG \cite{WAG}, LG \cite{LG}, AB \cite{ABmodel}, mtMam \cite{YangNielsenHasagawa1998}, mtREV24 \cite{mtrev24}, and DAYHOFF \cite{dayhoff}
\end{itemize}
\item Codon models
\begin{itemize}
\item Mechanistic ($dN/dS$) models (MG-style \cite{MG94} and GY-style \cite{GY94})
\item Empirical codon model \cite{ECM}
\end{itemize}
\item Mutation-selection models
\begin{itemize}
\item Halpern-Bruno model \cite{HB98}, implemented for codons and nucleotides
\end{itemize}
\end{itemize}

\renewcommand{\arraystretch}{1.75} %table row spacing, defined here to avoid affecting title
\setlength{\parskip}{12pt}

Both site- and branch- (temporal) heterogeneity are supported. A detailed and highly-recommended overview of Markov process evolutionary models, for DNA, amino acids, and codons, is available in the book \emph{Computational Molecular Evolution}, by Ziheng Yang \citep{Yang2006}. Although Pyvolve does not simulate insertions and deletions (indels), Pyvolve does include several novel options not available (to my knowledge) in other sequence simulation software packages.
These options, detailed in Section~\ref{sec:special}, include custom rate-matrix specification, novel matrix-scaling approaches, and branch length perturbations. \section{Installation} Pyvolve may be downloaded and installed using \code{pip} or \code{easy\_install}. Source code is available from \href{https://github.com/sjspielman/pyvolve/releases}{https://github.com/sjspielman/pyvolve/releases}. \section{Citation} If you use Pyvolve, or code derived from Pyvolve, please cite us: Spielman, SJ and Wilke, CO. 2015. \href{http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0139047}{Pyvolve: A flexible Python module for simulating sequences along phylogenies}. \textbf{PLOS ONE}. 10(9): e0139047. \section{Basic Usage} Similar to other simulation platforms, Pyvolve evolves sequences in groups of \textbf{partitions} (see, for instance, the Indelible simulation platform \citep{Fletcher2009}). Each partition has an associated size and model (or set of models, if branch heterogeneity is desired). Note that all partitions will evolve according to the same phylogeny\footnote{If you wish to have different partitions evolve according to distinct phylogenies, I recommend performing several simulations and then merging the resulting alignments in the post-processing stage.}. The general framework for a simple simulation is given below. In order to simulate sequences, you must define the phylogeny along which sequences evolve as well as any evolutionary model(s) you'd like to use, and assign model(s) to partition(s). Each evolutionary model has associated parameters which you can customize, as detailed in Section~\ref{sec:evomodels}. \begin{lstlisting} ######### General pyvolve framework ######### ############################################# # Import the Pyvolve module import pyvolve # Read in phylogeny along which Pyvolve should simulate my_tree = pyvolve.read_tree(file = "file_with_tree_for_simulating.tre") # Define evolutionary model(s) with the Model class my_model = pyvolve.Model(<model_type>, <custom_model_parameters>) # Define partition(s) with the Partition class my_partition = pyvolve.Partition(models = my_model, size = 100) # Evolve partitions with the callable Evolver class my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition) my_evolver() # evolve sequences \end{lstlisting} Each of these steps is explained below, in detail with several examples. For additional information, consult the API documentation, at \href{http://sjspielman.org/pyvolve}{http://sjspielman.org/pyvolve}. Further, all functions and classes in Pyvolve have highly descriptive docstrings, which can be accessed with Python's \code{help()} function. \section{Defining phylogenies}\label{sec:phylogeny} Phylogenies must be specified as newick strings (see \href{https://en.wikipedia.org/wiki/Newick_format}{this wikipedia page} for details) \emph{with branch lengths}. 
Pyvolve reads phylogenies using the function \code{read\_tree}, either from a provided file name or directly from a string: \begin{lstlisting} # Read phylogeny from file with the keyword argument "file" phylogeny = pyvolve.read_tree(file = "/path/to/tree/file.tre") # Read phylogeny from string with the keyword argument "tree" phylogeny = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762):0.921):0.207);") \end{lstlisting} To implement branch (temporal) heterogeneity, in which different branches on the phylogeny evolve according to different models, you will need to specify \emph{model flags} at particular nodes in the newick tree, as detailed in Section~\ref{sec:branchhet}. Further, to assess that a phylogeny has been parsed properly (or to determine the automatically-assigned names of internal nodes), use the \code{print\_tree} function: \begin{lstlisting} # Read phylogeny from string phylogeny = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762):0.921):0.207);") # Print the parsed phylogeny pyvolve.print_tree(phylogeny) ## Output from the above statement: node_name branch_length model_flag ''' >>> t4 0.785 None >>> internalNode3 0.207 None >>> t3 0.38 None >>> internalNode2 0.921 None >>> t2 0.806 None >>> internalNode1 0.762 None >>> t5 0.612 None >>> t1 0.66 None ''' \end{lstlisting} In the above output, tabs represent nested hierarchies in the phylogeny. Each line shows the node name (either a tip name, "root", or an internal node), the branch length leading to that node, and the model flag associated with that node. This final value will be \code{None} if model flags are not provided in the phylogeny. Again, note that model flags are only required in cases of branch heterogeneity (see Section~\ref{sec:branchhet}). It is also possible to provide a phylogeny with named internal nodes. Any internal nodes without provided names will be automatically assigned. \begin{lstlisting} # Read phylogeny with some internal node names (myname1, myname2) phylogeny = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660)myname1:0.762)myname2:0.921):0.207);") # Print the parsed phylogeny pyvolve.print_tree(phylogeny) ## Output from the above statement: node_name branch_length model_flag ''' >>> t4 0.785 None >>> internalNode1 0.207 None >>> t3 0.38 None >>> myname2 0.921 None >>> t2 0.806 None >>> myname1 0.762 None >>> t5 0.612 None >>> t1 0.66 None ''' \end{lstlisting} You can also rescale, as desired, branch lengths on your input phylogeny using the keyword argument \code{scale\_tree} in the \code{read\_tree} function. This argument takes a numeric value and multiplies \textbf{all} branch lengths in your tree by this scalar: \begin{lstlisting}[label={lst:scaletree}] t = pyvolve.read_tree(file = "name_of_file.tre", scale_tree = 2.5) # Multiply all branch lengths by 2.5 \end{lstlisting} This argument is useful for changing the overall tree length, hence increasing or decreasing the expected number of substitutions over the course of simulation. \section{Defining Evolutionary Models}\label{sec:evomodels} The evolutionary models built into Pyvolve are outlined in Table 1 of this manual. Pyvolve uses \code{Model} objects to store evolutionary models: \begin{lstlisting} # Basic framework for defining a Model object (second argument optional) my_model = Model(<model_type>, <custom_model_parameters_dictionary>) \end{lstlisting} A single argument, \code{$<$model\_type$>$}, is required when defining a \code{Model} object. 
Available model types are shown in Table 1. Each model type has various associated parameters, which can be customized via the second optional argument to \code{Model}, written above as \code{$<$custom\_model\_parameters\_dictionary$>$}. This argument should be a dictionary of parameters to customize, and each modeling framework has particular keys which can be included in this dictionary. Available model types and associated customizable parameters are shown in Table 1 and detailed in the subsections below. Note that there are additional optional keyword arguments which may be passed to \code{Model}, including arguments pertaining to site-rate heterogeneity (see Section~\ref{sec:sitehet}). \begin{table}[H] \centering { \scalebox{0.8}{ \begin{tabular}{>{\centering\arraybackslash}p{1.5in} >{\centering\arraybackslash}p{2in} >{\raggedright}p{4in}} \hline \multicolumn{1}{c}{\textbf{Modeling framework}} & \multicolumn{1}{c}{\textbf{Pyvolve model type(s)}} & \multicolumn{1}{c}{\textbf{Optional parameters} (\code{"key"})} \tabularnewline \hline Nucleotide models & \code{"nucleotide"} & \vspace{-\topsep} \begin{itemize}[leftmargin=*] \item Equilibrium frequencies (\code{"state\_freqs"}) \item Mutation rates (\code{"mu"} or \code{"kappa"}) \end{itemize} \tabularnewline Empirical amino-acid models & \code{"JTT"}, \code{"WAG"}, \code{"LG"}, \code{"AB"}, \code{"DAYHOFF"}, \code{"MTMAM"}, or \code{"MTREV24"} & \vspace{-\topsep} \begin{itemize}[leftmargin=*] \item Equilibrium frequencies (\code{"state\_freqs"}) \end{itemize} \tabularnewline Mechanistic ($dN/dS$) codon models & \code{"GY"}, \code{"MG"}, or \code{"codon"} & \vspace{-\topsep} \begin{itemize}[leftmargin=*] \item Equilibrium frequencies (\code{"state\_freqs"}) \item Mutation rates (\code{"mu"} or \code{"kappa"}) \item $dN/dS$ (\code{"alpha"}, \code{"beta"}, and/or \code{"omega"}) \end{itemize} \tabularnewline Mutation-selection models & \code{"MutSel"} & \vspace{-\topsep} \begin{itemize}[leftmargin=*] \item Equilibrium frequencies (\code{"state\_freqs"}) OR fitness values (\code{"fitness"}) \item Mutation rates (\code{"mu"} or \code{"kappa"}) \end{itemize} \tabularnewline Empirical codon model (ECM) & \code{"ECMrest"}, \code{"ECMunrest"}, or \code{"ECM"} & \vspace{-\topsep} \begin{itemize}[leftmargin=*] \item Equilibrium frequencies (\code{"state\_freqs"}) \item Transition-tranversion bias(es) (\code{"k\_ti"} and/or \code{"k\_tv"}) \item $dN/dS^\dagger$ (\code{"alpha"}, \code{"beta"}, and/or \code{"omega"}) \end{itemize} \tabularnewline \hline \end{tabular}}} \begin{flushleft} \footnotesize{\textbf{Table 1.} Accepted model types in Pyvolve with associated customizable parameters. Names given in the column "Pyvolve model type(s)" should be specified as the first argument to \code{\footnotesize{Model}} as strings (case-insensitive). Customizable parameters indicated in the column "Optional parameters" should be specified as keys in the custom model-parameters dictionary, the second argument using when defining a \code{\footnotesize{Model}} object. \newline $^\dagger$Note that the interpretation of this $dN/dS$ value is different from the usual interpretation.} \end{flushleft} \end{table} Subsections below explain each modeling framework in detail, with examples of parameter customizations. 
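Before turning to the individual frameworks, it may help to see where a customized \code{Model} fits into a complete simulation. The following sketch is purely illustrative: it reuses the example tree string from Section~\ref{sec:phylogeny}, and the model type, \code{"kappa"} value, and partition size are arbitrary choices.

\begin{lstlisting}
import pyvolve

# Example tree string from Section 5 (any newick tree with branch lengths works)
my_tree = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762):0.921):0.207);")

# A nucleotide model with a transition-to-transversion bias (arbitrary value)
my_model = pyvolve.Model("nucleotide", {"kappa": 2.75})

# One partition of 100 sites evolving under this model
my_partition = pyvolve.Partition(models = my_model, size = 100)

# Evolve sequences along the tree
my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition)
my_evolver()
\end{lstlisting}

The subsections below only change how the \code{Model} object in this skeleton is defined.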
\subsection{Nucleotide Models}\label{sec:nucleotide_basic} Nucleotide rate matrix elements, for the substitution from nucleotide $i$ to $j$, are generally given by \begin{equation} q_{ij} = \mu_{ij} \pi_j \end{equation} where $\mu_{ij}$ describes the rate of change from nucleotide $i$ to $j$ (i.e.\ mutation rate), and $\pi_j$ represents the equilibrium frequency of the target nucleotide $j$. Note that mutation rates are symmetric, e.g.\ $\mu_{ij} = \mu_{ji}$. By default, nucleotide models have equal equilibrium frequencies and equal mutation rates. A basic model can be constructed with, \begin{lstlisting} # Simple nucleotide model nuc_model = pyvolve.Model("nucleotide") \end{lstlisting} To customize a nucleotide model, provide a custom-parameters dictionary with optional keys \code{"state\_freqs"} for custom equilibrium frequencies and \code{"mu"} for custom mutation rates (see Section~\ref{sec:freqs} for details on frequency customization and Section~\ref{sec:mu} for details on mutation rate customization). \begin{lstlisting} # Define mutation rates in a dictionary with keys giving the nucleotide pair # Below, the rate from A to C is 0.5, and similarly C to A is 0.5 custom_mu = {"AC":0.5, "AG":0.25, "AT":1.23, "CG":0.55, "CT":1.22, "GT":0.47} # Define custom frequencies, in order A C G T. This can be a list or numpy array. freqs = [0.1, 0.45, 0.3, 0.15] # Construct nucleotide model with custom mutation rates and frequencies. nuc_model = pyvolve.Model( "nucleotide", {"mu":custom_mu, "state_freqs":freqs} ) \end{lstlisting} As nucleotide model mutation rates are symmetric, if you provide a rate for $A \rightarrow T$ (key \code{"AT"}), it will automatically be applied as the rate for $T \rightarrow A$. Any unspecified mutation rate pairs will have a value of 1. As an alternate to \code{"mu"}, you can provide the key \code{"kappa"}, which corresponds to the transition:transversion ratio (e.g.\ for an HKY85 model \citep{HKY85}), in the custom-parameters dictionary. When kappa is specified, tranversion mutation rates are set to 1, and transition mutation rates are set to the provided \code{"kappa"} value. \begin{lstlisting} # Construct nucleotide model with transition-to-transversion bias, and default frequencies nuc_model = pyvolve.Model( "nucleotide", {"kappa":2.75, "state_freqs":freqs} ) \end{lstlisting} \subsection{Amino-acid models}\label{sec:amino_basic} Amino-acid exchangeability matrix elements, for the substitution from amino acid $i$ to $j$, are generally given by \begin{equation} q_{ij} = r_{ij} \pi_j \end{equation} where $r_{ij}$ is a symmetric matrix that describes the probability of changing from amino acid $i$ to $j$, and $\pi_j$ is the equilibrium frequency of the target amino acid $j$. The $r_{ij}$ matrix corresponds to an empirically determined model, such as WAG \citep{WAG} or LG \citep{LG}. By default, Pyvolve assigns the \emph{default model} equilibrium frequencies for empirical models. These frequencies correspond to those published with each respective model's original paper. A basic amino-acid model can be constructed with, \begin{lstlisting} # Simple amino-acid model aa_model = pyvolve.Model("WAG") # Here, WAG can be one of JTT, WAG, LG, DAYHOFF, MTMAM, MTREV24 (case-insensitive) \end{lstlisting} To customize an amino-acid model, provide a custom-parameters dictionary with the key \code{"state\_freqs"} for custom equilibrium frequencies (see Section~\ref{sec:freqs} for details on frequency customization). 
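For example, here is a minimal sketch of an amino-acid model with user-supplied frequencies; the 20 values below are arbitrary illustration numbers chosen only so that they sum to 1, and LG is an arbitrary choice among the supported models.

\begin{lstlisting}
# Custom amino-acid equilibrium frequencies, ordered A, C, D, E, ... Y
# (arbitrary example values; they must sum to 1)
aa_freqs = [0.08, 0.02, 0.05, 0.06, 0.04, 0.07, 0.02, 0.06, 0.06, 0.09,
            0.02, 0.04, 0.05, 0.04, 0.05, 0.07, 0.05, 0.07, 0.01, 0.05]

# Combine the LG exchangeabilities with the custom frequencies
aa_model = pyvolve.Model("LG", {"state_freqs": aa_freqs})
\end{lstlisting}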
Note that amino-acid frequencies must be in the order A, C, D, E, ... Y.

\subsection{Mechanistic ($dN/dS$) codon models}\label{sec:mechcodon_basic}

GY-style \citep{GY94} matrix elements, for the substitution from codon $i$ to $j$, are generally given by
\begin{equation}\label{eq:GY94}
q_{ij} = \left\{ \begin{array}{rl}
\mu_{o_it_j} \pi_j \alpha & \text{synonymous change} \\
\mu_{o_it_j} \pi_j \beta & \text{nonsynonymous change} \\
0 & \text{multiple nucleotide changes} \\
\end{array} \right.,
\end{equation}
where $\mu_{o_it_j}$ is the mutation rate (e.g.\ for a change AAA to AAC, the corresponding mutation rate would be A $\rightarrow$ C), $\pi_j$ is the frequency of the target \emph{codon} $j$, $\alpha$ is the rate of synonymous change ($dS$), and $\beta$ is the rate of nonsynonymous change ($dN$).

MG-style \citep{MG94} matrix elements, for the substitution from codon $i$ to $j$, are generally given by
\begin{equation}\label{eq:MGstyle}
q_{ij} = \left\{ \begin{array}{rl}
\mu_{o_it_j}\pi_{t_j} \alpha &\text{synonymous change} \\
\mu_{o_it_j}\pi_{t_j} \beta &\text{nonsynonymous change} \\
0 &\text{multiple nucleotide changes}
\end{array} \right. ,
\end{equation}
where $\mu_{o_it_j}$ is the mutation rate, $\pi_{t_j}$ is the frequency of the target \emph{nucleotide} $t_j$ (e.g.\ for a change AAA to AAC, the target nucleotide would be C), $\alpha$ is the rate of synonymous change ($dS$), and $\beta$ is the rate of nonsynonymous change ($dN$). Both GY-style and MG-style codon models use symmetric mutation rates.

Codon models \emph{require} that you provide a $dN/dS$ rate ratio as a parameter in the custom-parameters dictionary. There are several ways to specify this value:
\begin{itemize}
\item Specify a single parameter, \code{"omega"}. This option sets the synonymous rate to 1.
\item Specify a single parameter, \code{"beta"}. This option sets the synonymous rate to 1.
\item Specify two parameters, \code{"alpha"} and \code{"beta"}. This option sets the synonymous rate to $\alpha$ and the nonsynonymous rate to $\beta$.
\end{itemize}

By default, mechanistic codon models have equal mutation rates and equal equilibrium frequencies. Basic mechanistic codon models can be constructed with,
\begin{lstlisting}
# Simple GY-style model (specify as GY)
gy_model = pyvolve.Model("GY", {"omega": 0.5})

# Simple MG-style model (specify as MG)
mg_model = pyvolve.Model("MG", {"alpha": 1.04, "beta": 0.67})

# Specifying "codon" results in a *GY-style* model
codon_model = pyvolve.Model("codon", {"beta": 1.25})
\end{lstlisting}

To customize a mechanistic codon model, provide a custom-parameters dictionary with optional keys \code{"state\_freqs"} for custom equilibrium frequencies and \code{"mu"} for custom mutation rates (see Section~\ref{sec:freqs} for details on frequency customization and Section~\ref{sec:mu} for details on mutation rate customization). Note that codon frequencies must be ordered alphabetically (AAA, AAC, AAG, ..., TTG, TTT) \emph{without} stop codons.
\begin{lstlisting} # Define mutation rates in a dictionary with keys giving the nucleotide pair # Below, the rate from A to C is 0.5, and similarly C to A is 0.5 custom_mu = {"AC":0.5, "AG":0.25, "AT":1.23, "CG":0.55, "CT":1.22, "GT":0.47} # Construct codon model with custom mutation rates codon_model = pyvolve.Model( "codon", {"mu":custom_mu, "omega":0.55} ) \end{lstlisting} Mechanistic codon model mutation rates are symmetric; if you provide a rate for $A \rightarrow T$ (key \code{"AT"}), it will automatically be applied as the rate for $T \rightarrow A$. Any unspecified mutation rate pairs will have a value of 1. As an alternate to \code{"mu"}, you can provide the key \code{"kappa"}, which corresponds to the transition:transversion ratio (e.g.\ for an HKY85 model \citep{HKY85}), in the custom-parameters dictionary. When kappa is specified, tranversion mutation rates are set to 1, and transition mutation rates are set to the provided \code{"kappa"} value. \begin{lstlisting} # Construct codon model with transition-to-transversion bias, and default frequencies codon_model = pyvolve.Model( "codon", {"kappa":2.75, "alpha":0.89, "beta":0.95} ) \end{lstlisting} Importantly, by default, mechanistic codon models are scaled so that the mean substitution rate per unit time is 1. Here, "mean substitution rate" includes \emph{both} synonymous and nonsynonymous substitutions in its calculations. An alternative scaling scheme (note, which the author \textbf{strongly prefers}) would be to scale the matrix such that the mean \emph{neutral} substitution rate per unit time is 1. To specify this approach, include the argument \code{neutral\_scaling = True} when defining a Model: \begin{lstlisting} # Construct codon model with neutral scaling codon_model = pyvolve.Model( "codon", {"omega":0.5}, neutral_scaling = True ) \end{lstlisting} \subsection{Mutation-selection models}\label{sec:mutsel_basic} Mutation-selection (MutSel) model \citep{HB98} matrix elements, for the substitution from codon (or nucleotide) $i$ to $j$, are generally given by \begin{equation} q_{ij} = \left\{ \begin{array}{rl} \mu_{ij} \frac{S_{ij}}{1-e^{-S_{ij}}} &\text{single nucleotide change} \\\\ 0 &\text{multiple nucleotide changes} \\ \end{array} \right., \end{equation} where $\mu_{ij}$ is the mutation rate, and $S_{ij}$ is the scaled selection coefficient. The scaled selection coefficient indicates the fitness difference between the target and source state, e.g. $fitness_j - fitness_i$. MutSel mutation rates are \emph{not} constrained to be symmetric (e.g. $\mu_{ij}$ can be different from $\mu_{ji}$). MutSel models are implemented both for codons and nucleotides, and they may be specified \emph{either} with equilibrium frequencies or with fitness values. Note that equilibrium frequencies must sum to 1, but fitness values are not constrained in any way. (The relationship between equilibrium frequencies and fitness values for MutSel models is detailed in refs.\ \citep{HB98,SpielmanWilke2015}). Pyvolve automatically determines whether you are evolving nucleotides or codons based on the provided vector of equilibrium frequencies or fitness values; a length of 4 indicates nucleotides, and a length of 61 indicates codons. Note that, if you are constructing a codon MutSel model based on \emph{fitness} values, you can alternatively specify a vector of 20 fitness values, indicating amino-acid fitnesses (in the order A, C, D, E, ... Y). 
These fitness values will be directly assigned to codons, such that all synonymous codons will have the same fitness.

Basic nucleotide MutSel models can be constructed with,
\begin{lstlisting}
# Simple nucleotide MutSel model constructed from frequencies, with default (equal) mutation rates
nuc_freqs = [0.1, 0.4, 0.3, 0.2]
mutsel_nuc_model_freqs = pyvolve.Model("MutSel", {"state_freqs": nuc_freqs})

# Simple nucleotide MutSel model constructed from fitness values, with default (equal) mutation rates
nuc_fitness = [1.5, 0.88, -4.2, 1.3]
mutsel_nuc_model_fits = pyvolve.Model("MutSel", {"fitness": nuc_fitness})
\end{lstlisting}

Basic codon MutSel models can be constructed with,
\begin{lstlisting}
import numpy as np # imported for convenient example frequency/fitness generation

# Simple codon MutSel model constructed from frequencies, with default (equal) mutation rates
codon_freqs = np.repeat(1./61, 61) # constructs a vector of equal frequencies, as an example
mutsel_codon_model_freqs = pyvolve.Model("MutSel", {"state_freqs": codon_freqs})

# Simple codon MutSel model constructed from codon fitness values, with default (equal) mutation rates
codon_fitness = np.random.normal(size = 61) # constructs a vector of normally distributed codon fitness values, as an example
mutsel_codon_model_fits = pyvolve.Model("MutSel", {"fitness": codon_fitness})

# Simple codon MutSel model constructed from *amino-acid* fitness values, with default (equal) mutation rates
aa_fitness = np.random.normal(size = 20) # constructs a vector of normally distributed amino-acid fitness values, as an example
mutsel_codon_model_fits2 = pyvolve.Model("MutSel", {"fitness": aa_fitness})
\end{lstlisting}

Mutation rates can be customized with either the \code{"mu"} or the \code{"kappa"} key in the custom-parameters dictionary. Note that mutation rates in MutSel models do not need to be symmetric. However, if you provide a rate for $A \rightarrow C$ (key \code{"AC"}) and no rate for $C \rightarrow A$ (key \code{"CA"}), then Pyvolve will assume symmetry and assign $C \rightarrow A$ the same rate as $A \rightarrow C$. If \emph{neither} pair is provided (e.g. both "AC" and "CA" are not defined), then both will be given a rate of 1.

\subsection{Empirical codon model}\label{sec:ecm}

Matrix elements of the empirical codon model (ECM) \citep{ECM} are given by,
\begin{equation}\label{eq:ecmrest}
q_{ij} = \left\{ \begin{array}{rl}
s_{ij} \pi_j \kappa(i,j) \alpha &\text{synonymous change} \\
s_{ij} \pi_j \kappa(i,j) \beta &\text{nonsynonymous change} \\
\end{array} \right.,
\end{equation}
where $s_{ij}$ is the symmetric, empirical matrix indicating the probability of changing from codon $i$ to $j$, $\pi_j$ is the equilibrium frequency of the target codon $j$, $\kappa(i,j)$ is a mutational parameter indicating transition and/or transversion bias, and $\alpha$ and $\beta$ represent $dS$ and $dN$, respectively. Importantly, because this model is empirically-derived, the parameters $\kappa(i,j)$, $\alpha$, and $\beta$ as used in ECM each represent the transition-transversion bias, synonymous rate, and nonsynonymous rate, respectively, \emph{relative} to the average level present in the PANDIT database \citep{PANDIT2006}, from which this model was constructed \footnote{Personally, I would not recommend using any of these parameters when simulating (although they have been fully implemented in Pyvolve), as their interpretation is neither straightforward nor particularly biological.}.
The parameter $\kappa(i,j)$ is described in depth in ref.\ \citep{ECM}, specifically in the second half of the Results section \emph{Application of the ECM}. Importantly, there are two versions of this model: \textbf{restricted} and \textbf{unrestricted}. The restricted model restricts instantaneous change to single-nucleotide only, whereas the unrestricted model also allows for double- and triple-nucleotide changes. Pyvolve refers to these models, respectively, as ECMrest and ECMunrest. By default, Pyvolve assumes that $\kappa(i,j)$, $\alpha$, and $\beta$ all equal 1, and Pyvolve uses the \emph{default empirical model} equilibrium frequencies. These frequencies correspond to those published in the original paper publishing ECM. Basic ECM can be constructed by specifying either \code{"ECMrest"} or \code{"ECMunrest"} (case-insensitive) when defining a \code{Model} object, \begin{lstlisting} # Simple restricted ECM ecm_model = pyvolve.Model("ECMrest") # Simple unrestricted ECM ecm_model = pyvolve.Model("ECMunrest") # Specifying "ECM" results in a *restricted ECM* model ecm_model = pyvolve.Model("ECM") \end{lstlisting} As with mechanistic codon models, the $dS$ and $dN$ parameters can be specified with custom model parameter dictionary keys $\alpha$, $\beta$, and/or $\omega$ (but again, these parameters do not correspond to $dN/dS$ in the traditional sense!): \begin{lstlisting} # Restricted ECM with dN/dS parameter of 0.75 ecm_model = pyvolve.Model("ECMrest", {"omega":0.75}) \end{lstlisting} The $\kappa(i,j)$ parameter is specified using the keys \code{"k\_ti"} for transition bias and \code{"k\_tv"}, for transversion bias. Specifically, \code{"k\_ti"} corresponds to \emph{ts}, and \code{"k\_tv"} corresponds to \emph{tv} in equations 9-11 in ref.\ \citep{ECM}. Thus, each of these parameters can be specified as either 0, 1, 2, or 3 (the Pyvolve default is 1). Finally, equilibrium frequencies can be customized with the \code{"state\_freqs"} key in the custom model parameters dictionary (see Section~\ref{sec:freqs} for details on frequency customization). \subsection{Specifying custom rate matrices}\label{sec:custom} Rather than using a built-in modeling framework, you can specify a custom rate matrix. This rate matrix must be square and all rows in this matrix must sum to 0. Pyvolve will perform limited sanity checks on your matrix to ensure that these conditions are met, but beyond this, Pyvolve takes your matrix at face-value. In particular, Pyvolve will not scale the matrix in any manner. Importantly, if you have \emph{separate} state frequencies which have not been incorporated into your rate matrix already, the supplied matrix must be symmetric. If you do not supply state frequencies explicitly, Pyvolve will automatically determine them directly from your provided matrix. When providing a custom matrix, you also have the option to provide a custom \emph{code}, or custom states which are evolved. In this way, you can evolve characters of any kind according to any specified transition matrix. If you do not provide a custom code, Pyvolve checks to make sure that your matrix has dimensions of either $4\times4$, $20\times20$, or $61\times61$ (for nucleotide, amino-acid, or codon evolution, respectively). Otherwise, Pyvolve will check that your provided code and matrix are compatible (in terms of dimensions). Providing a custom code is, therefore, an attractive option for specifying arbitrary models of character evolution. 
To specify a custom rate matrix, provide the argument \code{"custom"} as the first argument when defining a \code{Model} object, and provide your matrix in the custom-parameters dictionary using the key \code{"matrix"}. Any custom matrix specified should be either a 2D numpy array or a Python list of lists. Below is an example of specifying a custom nucleotide rate matrix:
\begin{lstlisting}
import numpy as np # import to construct matrix

# Define a 4x4 custom rate matrix (each row sums to 0)
custom_matrix = np.array( [[-1.0, 0.33, 0.33, 0.34],
                           [0.25, -1.0, 0.25, 0.50],
                           [0.10, 0.80, -1.0, 0.10],
                           [0.34, 0.33, 0.33, -1.0]] )

# Construct a model using the custom rate-matrix
custom_model = pyvolve.Model("custom", {"matrix":custom_matrix})
\end{lstlisting}
Pyvolve automatically assumes that any $4\times4$ matrix indicates nucleotide evolution. As stated above, Pyvolve will extract equilibrium frequencies from this matrix and check that they are acceptable. This frequency vector will be automatically saved to a file called "custom\_matrix\_frequencies.txt", and these values will be used to generate the root sequence during simulation.

To provide a custom code, include the additional key \code{"code"} in your dictionary. Note that this key would be ignored for any built-in model.
\begin{lstlisting}
import numpy as np # import to construct matrix

# Define a 3x3 custom rate matrix (again, each row must sum to 0)
custom_matrix = np.array( [[-0.50, 0.30, 0.20],
                           [0.25, -0.50, 0.25],
                           [0.40, 0.10, -0.50]] )
custom_code = ["0", "1", "2"]

# Construct a model using the custom rate-matrix and the custom code
custom_model = pyvolve.Model("custom", {"matrix":custom_matrix, "code":custom_code})
\end{lstlisting}
The resulting data simulated using the above model will contain characters 0, 1, and 2. Although the above example shows a $3\times3$ matrix, it is certainly possible to specify custom matrices and codes for the "standard" dimensions of 4, 20, and 61.

\subsection{Specifying equilibrium frequencies}\label{sec:freqs}

Equilibrium frequencies can be specified for a given \code{Model} object with the key \code{"state\_freqs"} in the custom-parameters dictionary. This key's associated value should be a list (or numpy array) of frequencies, summing to 1. The values in this list should be ordered alphabetically. For nucleotides, the list should be ordered ACGT. For amino-acids, the list should be ordered alphabetically with regard to the single-letter amino-acid abbreviations: ACDEFGHIKLMNPQRSTVWY. Finally, for codons, the list should be ordered AAA, AAC, AAG, AAT, ACA, ... TTT, \emph{excluding} stop codons. By default, Pyvolve assumes equal equilibrium frequencies (e.g.\ $0.25$ for nucleotides, $0.05$ for amino-acids, $1/61$ for codons). These conditions are not, however, very realistic, so I strongly recommend that you specify custom equilibrium frequencies for your simulations.
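For instance, a minimal sketch of supplying nucleotide frequencies directly through the \code{"state\_freqs"} key (the frequency values below are arbitrary and purely illustrative):
\begin{lstlisting}
# Hypothetical AT-rich nucleotide frequencies, ordered ACGT and summing to 1
my_nuc_freqs = [0.35, 0.15, 0.15, 0.35]

# Pass them to a model through the custom-parameters dictionary
nuc_model = pyvolve.Model("nucleotide", {"state_freqs": my_nuc_freqs})
\end{lstlisting}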
Pyvolve provides a convenient class, called \code{StateFrequencies}, to help you with this step, with several child classes: \begin{itemize} \item \code{\textbf{EqualFrequencies}} (default) \begin{itemize} \item Computes equal frequencies \end{itemize} \item \code{\textbf{RandomFrequencies}} \begin{itemize} \item Computes (semi-)random frequencies \end{itemize} \item \code{\textbf{CustomFrequencies}} \begin{itemize} \item Computes frequencies from a user-provided dictionary of frequencies \end{itemize} \item \code{\textbf{ReadFrequencies}} \begin{itemize} \item Computes frequencies from a sequence or alignment file \end{itemize} \item \code{\textbf{EmpiricalModelFrequencies}}\footnote{Note that this is not actually a child class of \code{StateFrequencies}, but its behavior is virtually identical.} \begin{itemize} \item Sets frequencies to default values for a given \emph{empirical} model \end{itemize} \end{itemize} All of these classes should be used with the following setup (the below code uses EqualFrequencies as a representative example): \begin{lstlisting} # Define frequency object f = pyvolve.EqualFrequencies("nucleotide") # or "amino_acid" or "codon", depending on your simulation frequencies = f.compute_frequencies() # returns a vector of equilibrium frequencies \end{lstlisting} The constructed vector of frequencies (named "frequencies" in the example above) can then be provided to the custom model parameters dictionary with the key \code{"state\_freqs"}. In addition, to conveniently save this vector of frequencies to a file, use the argument \code{savefile = <name\_of\_file>} when calling \code{.construct\_frequencies()}: \begin{lstlisting} # Define frequency object f = pyvolve.EqualFrequencies("nucleotide") frequencies = f.compute_frequencies(savefile = "my_frequency_file.txt") # returns a vector of equilibrium frequencies and saves them to file \end{lstlisting} \subsubsection{EqualFrequencies class} Pyvolve uses this class to construct the default equilibrium frequencies. Usage should be relatively straight-forward, according to the example above. \subsubsection{RandomFrequencies class} This class is used to compute "semi-random" equilibrium frequencies. The resulting frequency distributions are not entirely random, but rather are virtually flat distributions with some amount of noise. \subsubsection{CustomFrequencies class} With this class, you can provide a dictionary of frequencies, using the argument \code{freq\_dict}, from which a vector of frequencies is constructed. The keys for this dictionary are the nucleotides, amino-acids (single letter abbreviations!), or codons, and the values should be the frequencies. Any states not included in this dictionary will be assigned a 0 frequency, so be sure the values in this dictionary sum to 1. In the example below, \code{CustomFrequencies} is used to create a vector of amino-acid frequencies in which aspartate and glutamate each have a frequency of 0.25, and tryptophan has a frequency of 0.5. All other amino acids will have a frequency of 0. \begin{lstlisting} # Define CustomFrequencies object f = pyvolve.CustomFrequencies("amino_acid", freq_dict = {"D":0.25, "E":0.25, "W":0.5}) frequencies = f.compute_frequencies() \end{lstlisting} \subsubsection{ReadFrequencies class} The \code{ReadFrequencies} class can be used to compute equilibrium frequencies from a file of sequences and/or multiple sequence alignment. 
Frequencies can be computed either using all data in the file, or, if the file contains an alignment, using specified alignment column(s). Note that Pyvolve will ignore all ambiguous characters present in this sequence file. When specifying a file, use the argument \code{file}, and to specify the file format (e.g. "fasta" or "phylip"), use the argument \code{format}. Pyvolve uses BioPython to read the sequence file, so consult the BioPython AlignIO module documentation (or this nice \href{http://biopython.org/wiki/AlignIO}{wiki}) for available formats. Pyvolve assumes a default file format of FASTA, so the \code{format} argument is not needed when the file is FASTA.
\begin{lstlisting}
# Build frequencies using *all* data in the provided file
f = pyvolve.ReadFrequencies("amino_acid", file = "a_file_of_sequences.fasta")
frequencies = f.compute_frequencies()
\end{lstlisting}
To read frequencies from a specific column in a multiple sequence alignment, use the argument \code{columns}, which should be a list (\emph{indexed from 1}) of integers giving the column(s) which should be considered in frequency calculations.
\begin{lstlisting}
# Build frequencies using alignment columns 1 through 5 (inclusive)
f = pyvolve.ReadFrequencies("amino_acid", file = "alignment_file.fasta", columns = range(1,6))
frequencies = f.compute_frequencies()

# Build frequencies using only phylip-formatted alignment column 15
f = pyvolve.ReadFrequencies("amino_acid", file = "alignment_file.phy", format = "phylip", columns = [15])
frequencies = f.compute_frequencies()
\end{lstlisting}

\subsubsection{EmpiricalModelFrequencies class}
The \code{EmpiricalModelFrequencies} class will return the default vector of equilibrium frequencies for a given empirical model [amino-acid models and the codon model ECM, restricted and unrestricted versions (see ref.\ \citep{ECM} for details)]. These default frequencies correspond to the frequencies originally published with each respective empirical model. Provide \code{EmpiricalModelFrequencies} with the name of the desired empirical model to obtain these frequencies:
\begin{lstlisting}
# Obtain frequencies for the WAG model
f = pyvolve.EmpiricalModelFrequencies("WAG")
frequencies = f.compute_frequencies()

# For the ECM models, use the argument "ECMrest" for restricted, and "ECMunrest" for unrestricted
f = pyvolve.EmpiricalModelFrequencies("ECMrest") # restricted ECM frequencies
frequencies = f.compute_frequencies()
\end{lstlisting}
Note that Pyvolve uses these empirical frequencies as the default frequencies, if none are provided, for each respective empirical model!

\subsubsection{Restricting frequencies to certain states}
When using the classes \code{EqualFrequencies} and \code{RandomFrequencies}, it is possible to specify that only certain states be considered during calculations using the \code{restrict} argument, when defining the object. This argument takes a list of states (nucleotides, amino-acids, or codons) which should have non-zero frequencies. All states not included in this list will have a frequency of zero. Thus, by specifying this argument, frequencies will be distributed \emph{only} among the indicated states. The following example will return a vector of amino-acid frequencies evenly divided among the five specified amino-acids; therefore, each amino acid in the \code{restrict} list will have a frequency of 0.2.
\begin{lstlisting} # Compute equal frequencies among 5 specified amino acids f = pyvolve.EqualFrequencies("amino_acid", restrict = ["A", "G", "V", "E", "F"]) frequencies = f.compute_frequencies() \end{lstlisting} Note that specifying this argument will have no effect on the \code{CustomFrequencies}, \code{ReadFrequencies}, or \code{EmpiricalModelFrequencies} classes. \subsubsection{Converting frequencies between alphabets} When defining a StateFrequencies object, you always have to indicate the alphabet (nucleotide, amino acid, or codon) in which frequency calculations should be performed. However, it is possible to have the \code{.construct\_frequencies()} method return frequencies in a different alphabet, using the argument \code{type}. This argument takes a string specifying the desired type of frequencies returned (either "nucleotide", "amino\_acid", or "codon"). This functionality is probably most useful when used with the ReadFrequencies class; for example, you might want to obtain amino-acid frequencies from multiple sequence alignment of codons: \begin{lstlisting} # Define frequency object f = pyvolve.ReadFrequencies("codon", file = "my_codon_alignment.fasta") frequencies = f.compute_frequencies(type = "amino_acid") \end{lstlisting} As another example, you might want to obtain amino-acid frequencies which correspond to equal codon frequencies of $1/61$ each: \begin{lstlisting} f = pyvolve.EqualFrequencies("codon") frequencies = f.compute_frequencies(type = "amino_acid") # returns a vector of amino-acid frequencies that correspond to equal codon frequencies \end{lstlisting} Alternatively, you can also go the other way (amino acids to codons): \begin{lstlisting} f = pyvolve.EqualFrequencies("amino_acid") frequencies = f.compute_frequencies(type = "codon") \end{lstlisting} When converting amino acid to codon frequencies, Pyvolve assumes that there is \emph{no codon bias} and assigns each synonymous codon the same frequency. \subsection{Specifying mutation rates}\label{sec:mu} Nucleotide, mechanistic codon ($dN/dS$), and mutation-selection (MutSel) models all use nucleotide mutation rates as parameters. By default, mutation rates are equal for all nucleotide changes (e.g.\ the Jukes Cantor model \citep{JC69}). These default settings can be customized, in the custom model parameters dictionary, in one of two ways: \begin{enumerate} \item Using the key \code{"mu"} to define custom rates for any/all nucleotide changes \item Using the key \code{"kappa"} to specify a transition-to-transversion bias ratio (e.g.\ the HKY85 mutation model. \citep{HKY85}) \end{enumerate} The value associated with the \code{"mu"} key should itself be a dictionary of mutation rates, with keys "AC", "AG", "AT", etc, such that, for example, the key "AC" represents the mutation rate from A to C. Importantly, nucleotide and codon models use symmetric mutation rates; therefore, if a rate for "AC" is defined, the same value will automatically be applied to the change C to A. Thus, there are a total of 6 nucleotide mutation rates you can provide for a custom nucleotide and/or mechanistic codon model. Note that any rates not specified will be set to 1. Alternatively, MutSel models do not constrain mutation rates to be symmetric, and thus, for instance, the "AC" rate may be different from the "CA" rate. Thus, there are a total of 12 nucleotide mutation rates you can provide for a custom MutSel model. Again, if a rate for "AC" but not "CA" is defined, then the "AC" rate will be automatically applied to "CA". 
Any unspecified nucleotide rate pairs will be set to 1.
\begin{lstlisting}
# Example using customized mutation rates to construct a nucleotide model
custom_mutation_rates = {"AC":1.5, "AG":0.5, "AT":1.75, "CG":0.6, "CT":1.25, "GT":1.88}
my_model = pyvolve.Model("nucleotide", {"mu": custom_mutation_rates})
\end{lstlisting}
If, instead, the key \code{"kappa"} is specified, then the mutation rate for all transitions (e.g.\ purine to purine or pyrimidine to pyrimidine) will be set to the specified value, and the mutation rate for all transversions (e.g.\ purine to pyrimidine or vice versa) will be set to 1. This scheme corresponds to the HKY85 \citep{HKY85} mutation model.
\begin{lstlisting}
# Example using customized kappa to construct a nucleotide model
my_model = pyvolve.Model("nucleotide", {"kappa": 3.5})
\end{lstlisting}

\section{Mutation rates vs. branch lengths}
In the context of Markov models implemented in Pyvolve, mutation rates \textbf{do not} have the same interpretation as they would in a population genetics framework. Here, mutation rates indicate the \textbf{relative probabilities} of mutating between different nucleotides. Mutation rates do not correspond to the underlying rate of change across the tree -- this quantity is represented instead by \textbf{branch lengths}. The main purpose of different mutation rates is to set up biases in mutation, for example if you want A$\rightarrow$T to occur at a 5-times higher rate than C$\rightarrow$T. To increase/decrease the rate of overall change, you'll want to change branch lengths. Note that branch lengths can be changed across the whole tree using the \code{scale\_tree} keyword argument when calling \code{pyvolve.read\_tree()} (see Section~\ref{lst:scaletree}). Importantly, here is what your branch lengths represent for each modeling framework:
\begin{itemize}
\item{Nucleotide and amino-acid models}: Mean number of substitutions per unit time
\item{Mechanistic codon models ($dN/dS$)}: By default, these branch lengths represent the mean number of substitutions per unit time, agnostic in terms of nonsynonymous vs.\ synonymous substitutions. To instead force branch lengths to represent the mean number of \emph{neutral} substitutions per unit time, supply the argument \code{neutral\_scaling = True} when defining a \code{Model} instance.
\item{Mutation--selection models}: Mean number of \emph{neutral} substitutions per unit time.
\end{itemize}

\section{Defining Partitions}\label{sec:partitions}
Partitions are defined using the \code{Partition()} class, with two required keyword arguments: \code{models}, the evolutionary model(s) associated with this partition, and \code{size}, the number of positions (sites) to evolve within this partition.
\begin{lstlisting}
# Define a default nucleotide model
my_model = pyvolve.Model("nucleotide")

# Define a Partition object which evolves 100 positions according to my_model
my_partition = pyvolve.Partition(models = my_model, size = 100)
\end{lstlisting}
In cases of branch homogeneity (all branches evolve according to the same model), each partition is associated with a single model, as shown above. When branch heterogeneity is desired, a list of the models used should be provided to the \code{models} argument (as detailed, with examples, in Section~\ref{sec:branchhet}).

\subsection{Specifying ancestral sequences}
For each partition, you can assign an ancestral sequence which will be automatically used at the root of the phylogeny for the given partition.
This can be accomplished using the keyword argument \code{root\_sequence}: \begin{lstlisting} # Define a default nucleotide model my_model = pyvolve.Model("nucleotide") # Define a Partition object with a specified ancestral sequence my_partition = pyvolve.Partition(models = my_model, root_sequence = "GATAGAAC") \end{lstlisting} When providing an ancestral sequence, it is not required to also specify a size for the partition. Pyvolve will automatically determine this information from the provided root sequence. Note that, when ancestral sequences are specified, site heterogeneity is not allowed (even if it was provided to the model used in this partition). Multiple partitions must be specified, each with different rates, to specify root sequences which will experience different rates across sites. \section{Evolving sequences}\label{sec:evolver} The callable class \code{Evolver} is Pyvolve's engine for all sequence simulation. Defining an \code{Evolver} object requires two keyword arguments: \code{partitions}, either the name of a single partition or a list of partitions to evolve, and \code{tree}, the phylogeny along which sequences are simulated. Examples below show how to define an \code{Evolver} object and then evolve sequences. The code below assumes that the variables \code{my\_partition} and \code{my\_tree} were previously defined using \code{Partition} and \code{read\_tree}, respectively. \begin{lstlisting} # Define an Evolver instance to evolve a single partition my_evolver = pyvolve.Evolver(partitions = my_partition, tree = my_tree) my_evolver() # evolve sequences # Define an Evolver instance to evolve several partitions my_multpart_evolver = pyvolve.Evolver(partitions = [partition1, partition2, partition3], tree = my_tree) my_multpart_evolver() # evolve sequences \end{lstlisting} \subsection{Evolver output files}\label{sec:output_files} Calling an \code{Evolver} object will produce three output files to the working directory: \begin{enumerate} \item \textbf{simulated\_alignment.fasta}, a FASTA-formatted file containing simulated data \item \textbf{site\_rates.txt}, a tab-delimited file indicating to which partition and rate category each simulated site belongs (described in Section~\ref{sec:ratefile}) \item \textbf{site\_rates\_info.txt}, a tab-delimited file indicating the rate factors and probabilities associated with each rate category (described in Section~\ref{sec:infofile}) \end{enumerate} In the context of complete homogeneity, in which all sites and branches evolve according to a single model, the files "site\_rates.txt" and "site\_rates\_info.txt" will not contain much useful information. However, when sites evolve under site-wise and/or branch heterogeneity, these files will provide useful information for any necessary post-processing. 
To change the output file names for any of those files, provide the arguments \code{seqfile} ("simulated\_alignment.fasta"), \code{ratefile} ("site\_rates.txt"), and/or \code{infofile} ("site\_rates\_info.txt") when \emph{calling} an \code{Evolver} object: \begin{lstlisting} # Define an Evolver object my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition) # Evolve sequences with custom file names my_evolver(ratefile = "custom_ratefile.txt", infofile = "custom_infofile.txt", seqfile = "custom_seqfile.fasta" ) \end{lstlisting} To suppress the creation of any of these files, define the argument(s) as either \code{None} or \code{False}: \begin{lstlisting} # Only output a sequence file (suppress the ratefile and infofile) my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition) my_evolver(ratefile = None, infofile = None) \end{lstlisting} The output sequence file's format can be changed with the argument \code{seqfmt}. Pyvolve uses BioPython to write sequence files, so consult the BioPython AlignIO module documentation (or this nice \href{http://biopython.org/wiki/AlignIO}{wiki}) for available formats. \begin{lstlisting} # Save the sequence file as seqs.phy, in phylip format my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition) my_evolver(seqfile = "seqs.phy", seqfmt = "phylip") \end{lstlisting} By default, the output sequence file will contain only the tip sequences. To additionally output all ancestral (including root) sequences, provide the argument \code{write\_anc = True} when calling an \code{Evolver} object. Ancestral sequences will be included with tip sequences in the output sequence file (not in a separate file!). When ancestral sequences are written, the root sequence is denoted with the name "root", and internal nodes are named "internal\_node1", "internal\_node2", etc. To see precisely to which node each internal node name corresponds, it is useful to print the parsed newick tree with the function \code{print\_tree}, as explained in Section~\ref{sec:phylogeny}. \begin{lstlisting} # Output ancestral sequences along with the tip sequences my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition) my_evolver(write_anc = True) \end{lstlisting} \subsection{Sequence post-processing} In addition to saving sequences to a file, \code{Evolver} can also return sequences back to you for post-processing in Python. Sequences can be easily obtained using the method \code{.get\_sequences()}. This method will return a dictionary of sequences, where the keys are IDs and the values are sequences (as strings). Note that you must evolve sequences by calling your \code{Evolver} object before sequences can be returned! \begin{lstlisting} # Return simulated sequences as dictionary my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition) my_evolver() simulated_sequences = my_evolver.get_sequences() \end{lstlisting} By default, \code{.get\_sequences()} will contain only the tip (leaf) sequences. 
To include ancestral sequences (root and internal node sequences) in this dictionary, specify the argument \code{anc = True}:
\begin{lstlisting}
simulated_sequences = my_evolver.get_sequences(anc = True)
\end{lstlisting}

\subsubsection{Interpreting the "site\_rates.txt" output file}\label{sec:ratefile}
The output file "site\_rates.txt" has three columns of data:
\begin{itemize}
\item \textbf{Site\_Index}
\begin{itemize} \item Indicates a given position in the simulated data (indexed from 1) \end{itemize}
\item \textbf{Partition\_Index}
\begin{itemize} \item Indicates the partition associated with this site \end{itemize}
\item \textbf{Rate\_Category}
\begin{itemize} \item Indicates the rate category index associated with this site \end{itemize}
\end{itemize}
The values in "Partition\_Index" are ordered, starting from 1, based on the \code{partitions} argument list specified when setting up the \code{Evolver()} instance. Similarly, the values in "Rate\_Category" are ordered, starting from 1, based on the rate heterogeneity lists (see Section~\ref{sec:sitehet} for details) specified when initializing the \code{Model()} objects used in the respective partition.

\subsubsection{Interpreting the "site\_rates\_info.txt" output file}\label{sec:infofile}
The output file "site\_rates\_info.txt" provides more detailed rate information for each partition. This file has five columns of data:
\begin{itemize}
\item \textbf{Partition\_Index}
\begin{itemize} \item Indicates the partition index (can be mapped back to the Partition\_Index column in "site\_rates.txt") \end{itemize}
\item \textbf{Model\_Name}
\begin{itemize} \item Indicates the model name (if no name was provided, this is None; this column is only relevant for branch heterogeneity) \end{itemize}
\item \textbf{Rate\_Category}
\begin{itemize} \item Indicates the rate category index (can be mapped back to the Rate\_Category column in "site\_rates.txt") \end{itemize}
\item \textbf{Rate\_Probability}
\begin{itemize} \item Indicates the probability of a site being in the respective rate category \end{itemize}
\item \textbf{Rate\_Factor}
\begin{itemize} \item Indicates either the rate scaling factor (for nucleotide and amino-acid models) or the $dN/dS$ value of this rate category (for codon models) \end{itemize}
\end{itemize}

%By default, sequences will be output to a fasta-formatted file called "simulated\_alignment.fasta". Two additional tab-delimited files, called "site\_rates.txt" and "site\_rates\_info.txt" are also output. These files provide useful information when heterogeneity (either site or branch) is implemented. The former file indicates to which partition and rate category (if no rate heterogeneity specified, these values will all be 1) each site belongs, and the latter file provides more specific information about each rate category, in particular its associated partition, probability, and value.

\subsection{Simulating replicates}
The callable \code{Evolver} class makes simulating replicates of a given modeling scheme straight-forward: simply define an \code{Evolver} object, and then call this object in a for-loop as many times as needed.
\begin{lstlisting}
# Simulate 50 replicates
my_evolver = pyvolve.Evolver(tree = my_tree, partitions = my_partition)
for i in range(50):
    my_evolver(seqfile = "simulated_replicate" + str(i) + ".fasta") # Change seqfile name to avoid overwriting!
\end{lstlisting}

\section{Implementing site-wise rate heterogeneity}\label{sec:sitehet}
This section details how to implement heterogeneity in site-wise rates within a partition.
\subsection{Implementing site-wise heterogeneity for nucleotide and amino-acid models} In the context of nucleotide and amino-acid models, rate heterogeneity is applied by multiplying the rate matrix by scalar factors. Thus, sites evolving at different rates exhibit the same evolutionary patterns but differ in how quickly evolution occurs. Two primary parameters govern this sort of rate heterogeneity: the rate factors used to scale the matrix, and the probability associated with each rate factor (in other words, the probability that a given site is in each rate category). Pyvolve models site-rate heterogeneity discretely, using either a discrete gamma distribution or a user-specified discrete rate distribution. Rate heterogeneity is incorporated into a \code{Model} object with several additional keyword arguments, detailed below. \subsubsection{Gamma-distributed rate categories} Gamma ($\Gamma$) distributed heterogeneity is specified with two-four keyword arguments when initializing a \code{Model} object: \begin{itemize} \item \code{alpha}, the shape parameter of the discrete gamma distribution from which rates are drawn (Note: following convention, $\alpha = \beta$ in these distributions \citep{Yang2006}). \item \code{num\_categories}, the number of rate categories to draw \item \code{pinv}, a proportion of invariant sites. Use this option to simulate according to $\Gamma$ + I heterogeneity. \end{itemize} Examples for specifying $\Gamma$ rate heterogeneity are shown below. \begin{lstlisting} # Gamma-distributed heterogeneity for a nucleotide model. Gamma shape parameter is 0.5, and 6 categories are specified. nuc_model_het = pyvolve.Model("nucleotide", alpha = 0.5, num_categories = 6) # Gamma-distributed heterogeneity for an amino-acid model. Gamma shape parameter is 0.5, and 6 categories are specified. aa_model_het = pyvolve.Model("WAG", alpha = 0.5, num_categories = 6) # Gamma+I heterogeneity for a nucleotide model with a proportion (0.25) invariant sites. Remaining sites are distributed according to a discrete gamma, with 5 categories nuc_model_het = pyvolve.Model("nucleotide", alpha = 0.2, num_categories = 5, pinv = 0.25) \end{lstlisting} \subsubsection{Custom-distributed rate categories} A user-determined heterogeneity distribution is specified with one (or two) arguments when initializing a \code{Model} object: \begin{itemize} \item \code{rate\_factors}, a list of scaling factors for each category \item \code{rate\_probs}, an optional list of probabilities for each rate category. If unspecified, all rate categories are equally probable. This list should sum to 1! \end{itemize} Examples for specifying custom rate heterogeneity distributions are shown below. \begin{lstlisting} # Custom heterogeneity for a nucleotide model, with four equiprobable categories nuc_model_het = pyvolve.Model("nucleotide", rate_factors = [0.4, 1.87, 3.4, 0.001]) # Custom heterogeneity for a nucleotide model, with four categories, each with a specified probability (i.e. rate 0.4 occurs with a probability of 0.15, etc.) 
nuc_model_het = pyvolve.Model("nucleotide", rate_factors = [0.4, 1.87, 3.4, 0.001], rate_probs = [0.15, 0.25, 0.2, 0.5])

# Custom heterogeneity for an amino-acid model, with four equiprobable categories
aa_model_het = pyvolve.Model("WAG", rate_factors = [0.4, 1.87, 3.4, 0.001])
\end{lstlisting}
If you would like to specify a proportion of invariant sites, simply set one of the rate factors to 0 and assign it a corresponding probability as usual:
\begin{lstlisting}
# Custom heterogeneity with a proportion (0.4) of invariant sites
nuc_model_het = pyvolve.Model("nucleotide", rate_factors = [0.4, 1.87, 3.4, 0.], rate_probs = [0.2, 0.2, 0.2, 0.4])
\end{lstlisting}

\subsection{Implementing site-wise heterogeneity for mechanistic codon models}\label{sec:sitehet_codon}
Due to the nature of mechanistic codon models, rate heterogeneity is not modeled with scalar factors, but with a distinct model for each rate (i.e.\ $dN/dS$ value) category. To define a \code{Model} object with $dN/dS$ heterogeneity, provide a \emph{list} of $dN/dS$ values in the custom-parameters dictionary, rather than a single rate ratio value. As with standard codon models, you can provide $dN/dS$ values with the keys \code{"omega"}, \code{"beta"}, or \code{"alpha"} and \code{"beta"} together (to incorporate both synonymous and nonsynonymous rate variation). By default, each discrete $dN/dS$ category will have the same probability. To specify custom probabilities, provide the argument \code{rate\_probs}, a list of probabilities, when initializing the \code{Model} object. Examples for specifying heterogeneous mechanistic codon models are shown below (note that a GY-style model is shown in the examples, but as usual, both GY-style and MG-style are allowed).
\begin{lstlisting}
# Define a heterogeneous codon model with dN/dS values of 0.1, 0.5, 1.0, and 2.5. Categories are, by default, equally likely.
codon_model_het = pyvolve.Model("GY", {"omega": [0.1, 0.5, 1.0, 2.5]})

# Define a heterogeneous codon model with two dN/dS categories: 0.102 (from 0.1/0.98) and 0.49 (from 0.5/1.02). Categories are, by default, equally likely.
codon_model_het = pyvolve.Model("GY", {"beta": [0.1, 0.5], "alpha": [0.98, 1.02]})

# Define a heterogeneous codon model with dN/dS values of 0.102 (with a probability of 0.4) and 0.49 (with a probability of 0.6).
codon_model_het = pyvolve.Model("GY", {"beta": [0.1, 0.5], "alpha": [0.98, 1.02]}, rate_probs = [0.4, 0.6])
\end{lstlisting}

\subsection{Implementing site-wise heterogeneity for mutation-selection models}
Due to the nature of MutSel models, site-wise heterogeneity should be accomplished using a series of partitions, in which each partition evolves according to a unique MutSel model. These partitions can then be provided as a list when defining an \code{Evolver} object.

\subsection{Implementing site-wise heterogeneity for the Empirical Codon Model}
Due to the peculiar features of this model (both empirically-derived transition probabilities and "mechanistic" parameters such as $dN/dS$), site-wise heterogeneity is not supported for these models at this time. Pyvolve will simply ignore any provided arguments for site-rate heterogeneity with this model. Feel free to email the author to discuss and/or request this feature.

\section{Implementing branch (temporal) heterogeneity}\label{sec:branchhet}
This section details how to implement branch (also known as temporal) heterogeneity within a partition, thus allowing different branches to evolve according to different models.
To implement branch heterogeneity, your provided newick phylogeny should contain \emph{model flags} at particular nodes of interest. Model flags may be specified with either hashtags (\code{\#}) or underscores (\code{\_}), and they may be specified to satisfy one of two paradigms: \begin{itemize} \item Using \textbf{both trailing and leading symbols}, e.g. \code{\_flagname\_} or \code{\#flagname\#} . Specifying a model flag with this format will cause ALL descendents of that node to also follow this model, unless a new model flag is given downstream. In other words, this model will be propagated to apply to all children of that branch. \item Using \textbf{only a leading symbol}, e.g. \code{\_flagname} or \code{\#flagname}. Specifying a model flag with this format will cause ONLY that branch/edge to use the provided model. Descendent nodes will NOT inherit this model flag. Useful for changing model along a single branch, or towards a single leaf. \end{itemize} Model flags must be provided \textbf{AFTER} the branch length. Model flags may be repeated throughout the tree, but the model associated with each model flag will always be the same. Note that these model flag names \textbf{must} have correspondingly named model objects. For example, a tree specified as \\ \texttt{\normalsize{(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762\_m1\_):0.921\_m2\_):0.207);}} will be interpreted as in Figure~\ref{fig:treeflags}. Trees with model flags, just like any other tree, are defined with the function \code{read\_tree}. Some examples of trees with model flags: \begin{lstlisting} # Define a tree with propagating model flags m1 and m2, with a string het_tree = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762_m1_):0.921_m2_):0.207);") #OR het_tree = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762\#m1\#):0.921\#m2\#):0.207);") # Print het_tree to see how model flags are applied: pyvolve.print_tree(het_tree) ''' >>> root None None >>> t4 0.785 None >>> internalNode3 0.207 None >>> t3 0.38 None >>> internalNode2 0.921 m2 >>> t2 0.806 m2 >>> internalNode1 0.762 m1 >>> t5 0.612 m1 >>> t1 0.66 m1 ''' # Define a tree with non-propagating model flags m1 and m2 het_tree = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762_m1):0.921_m2):0.207);") #OR het_tree = pyvolve.read_tree(tree = "(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762#m1):0.921#m2):0.207);") # Print het_tree to see how model flags are applied: pyvolve.print_tree(het_tree) ''' >>> root None None >>> t4 0.785 None >>> internalNode3 0.207 None >>> t3 0.38 None >>> internalNode2 0.921 m2 >>> t2 0.806 None >>> internalNode1 0.762 m1 >>> t5 0.612 None >>> t1 0.66 None ''' \end{lstlisting} \begin{figure}[htpb]%{R}{0.7\textwidth} \includegraphics[width=3.75in]{treeflags_colors.pdf} \caption{\label{fig:treeflags} The newick tree with model flags given by \\ \texttt{\scriptsize{"(t4:0.785,(t3:0.380,(t2:0.806,(t5:0.612,t1:0.660):0.762\_m1\_):0.921\_m2\_):0.207);"}} indicates the model assignments shown.} \end{figure} All model flags specified in the newick phylogeny must have corresponding models. To link a model to a model flag, specify a given model's name using the keyword argument \code{name} when initializing a \code{Model} object. This name must be identical to a given model flag, \emph{without} the leading and trailing symbols (e.g.\ the name "m1" corresponds to the flag \_m1\_ and/or \#m1\#). 
The model at the root of the tree will not have a specific model flag, but nonetheless a model must be used at the root (obviously), and indeed at all other nodes which are not assigned a model flag (note that all branches on the tree which are not assigned a model flag will evolve according to the model used at the root). To specify a model at the root of the tree, simply create a model, with a name, and indicate this name when defining your partition. Examples for defining models with names are shown below (for demonstrative purposes, nucleotide models with extreme state frequency differences are used here):
\begin{lstlisting}
# Define the m1 model, with frequencies skewed for AT-bias
m1_model = pyvolve.Model("nucleotide", {"state_freqs":[0.4, 0.1, 0.1, 0.4]}, name = "m1")

# Define the m2 model, with frequencies skewed for GC-bias
m2_model = pyvolve.Model("nucleotide", {"state_freqs":[0.1, 0.4, 0.4, 0.1]}, name = "m2")

# Define the root model, with default equal nucleotide frequencies
root_model = pyvolve.Model("nucleotide", name = "root")
\end{lstlisting}
Alternatively, you can assign/re-assign a model's name with the \code{.assign\_name()} method:
\begin{lstlisting}
# (Re-)assign the name of the root model
root_model.assign_name("new_root_model_name")
\end{lstlisting}
Finally, when defining the partition that uses all of these models, provide all \code{Model} objects in a list to the \code{models} argument. In addition, you \emph{must} specify the name of the model you wish to use at the root of the tree with the keyword argument \code{root\_model\_name}.
\begin{lstlisting}
# Define partition with branch heterogeneity, with 50 nucleotide positions
temp_het_partition = pyvolve.Partition(models = [m1_model, m2_model, root_model], size = 50, root_model_name = root_model.name)
\end{lstlisting}

\section{Implementing branch-site heterogeneity}
Simulating according to so-called "branch-site" models, in which there is both site-wise and branch heterogeneity, is accomplished using the same strategies shown for each individual aspect (branch, Section~\ref{sec:branchhet} and site, Section~\ref{sec:sitehet}). However, there is a critical caveat to these models: all models within a given partition \emph{must} have the same number of rate categories. Furthermore, the rate probabilities must be the same across models within a partition; if different values for \code{rate\_probs} are indicated, then the probabilities provided for the \emph{root model} will be applied to all subsequent branch models. (Note that this behavior is identical for other simulation platforms, like Indelible \citep{Fletcher2009}.) The example below shows how to specify a branch-site heterogeneous nucleotide model with two models, root and model1 (note that this code assumes that the provided phylogeny contained the flag \code{\_model1\_}), when the rate categories are \emph{not} equiprobable.
\begin{lstlisting}
# Shared rate probabilities. Must be explicitly specified for all models (not just the root model)!
shared_rate_probs = [0.25, 0.3, 0.45]

# Construct a nucleotide model with 3 rate categories
root = pyvolve.Model("nucleotide", name = "root", rate_probs = shared_rate_probs, rate_factors = [1.5, 1.0, 0.05])

# Construct a second nucleotide model with 3 rate categories
model1 = pyvolve.Model("nucleotide", name = "model1", rate_probs = shared_rate_probs, rate_factors = [0.06, 2.5, 0.11])

# Construct a partition with these models, defining the root model name as "root"
part = pyvolve.Partition(models = [root, model1], root_model_name = "root", size = 50)
\end{lstlisting}

%\section{Insertions and Deletions}
%
%At this time, Pyvolve does not support insertions and deletions ("indels"). The main reason for this is that there is currently no obvious way to specify the evolution of inserted sequences. Standard simulation approaches will evolve inserted sequences like any other region: the inserted sequence will be drawn from a specified equilibrium frequency distribution, and then the inserted region will evolve according to a specified rate matrix. Typically, these parameters are taken directly from the model used in the insertion region. This setup is not, however, ideal...

%\noindent This is my first python example:
%\pythonexternal{script.py}

\bibliographystyle{plain}
\bibliography{citations}
\end{document}
% !TEX root = ../main.tex
% !TEX spellcheck = en_GB
\chapter{Discussion}
The original idea for this project was an auto tuner that would help people with their singing by correcting the tones. The idea originated from listening to today's big idols, both on the live stage and in their studio recordings. An impressively complex and optimized algorithm must be required for auto tuning to work on live performances, so taking on an auto tuner seemed like a brilliant challenge.

The project started with writing the requirements. After completing the requirements and a guidance session with the supervisor, it was clear that the original goal was too ambitious. Hence the project was scaled down to a pitch shifter, which should first be able to change the frequency of a pure sine as the incoming signal and later be able to handle the harmonics of a tone as well.

The first method for changing a pure sine to another frequency was essentially to change the sample frequency, fs. Before changing fs, however, the frequency of the input signal had to be found. Since the frequencies this project works with lie, as seen in \cref{tab:cmajor}, very close to each other, a high FFT resolution was needed, and a high frequency resolution requires many samples. This brings another predicament: with a high fs, the large number of samples both fills the memory and costs a lot of calculation time. Therefore it was decided to decimate before computing the FFT, which gives a high frequency resolution without filling the memory. The huge flaw of both of these methods is the time delay of approximately one second, and at that point only the frequency determination has been made. The method with decimation, FFT and change of fs was manageable in Matlab, where the algorithm was written, but at the implementation stage it was noticed that fs on the BF533 only has two possible values, so a new solution was needed.

The new method uses the Hilbert transformation to find the frequency with a high frequency resolution and a short processing time, a sine generator from the \texttt{math.h} library, and a circular buffer to minimize the number of sine waves to compute. The remarkable thing about the Hilbert transform is the extremely high frequency resolution obtained from a minimal number of samples. It can be done quickly with only 512 samples, and with even fewer samples if a window is used, as shown in \cref{sec:test}. The drawback of the Hilbert transform is that it can only find the precise frequency of a pure sine wave; if several sines are mixed together it will find the average. This could probably be handled if a good description of the harmonics of an instrument, as well as of each tone, were included in the code. The Hilbert transformation also introduces quantization errors, which result in a small deviation of the found frequency compared to the Matlab algorithm.

After the Hilbert transformation, the sine wave generator is needed. It was implemented in Matlab, and a version was made on the Blackfin to measure the cycle usage. A problem with the sine generator is that, if a tone is the input signal, generators for all the harmonics are also needed, which could become very heavy in computation or memory. To avoid computing a new sine every time, the idea of reusing the data in a circular buffer was conceived.
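To make the overall principle concrete (frequency determination via the Hilbert transform, followed by generation of a sine at the shifted frequency), the sketch below shows the idea in Python with NumPy/SciPy; it is only an illustration, not the Matlab or Blackfin code used in the project, and the sample rate, buffer length and input tone are arbitrary choices:
\begin{lstlisting}
import numpy as np
from scipy.signal import hilbert

def estimate_frequency(x, fs):
    """Estimate the frequency of a (near-)pure sine via the Hilbert transform."""
    phase = np.unwrap(np.angle(hilbert(x)))        # instantaneous phase of the analytic signal
    return np.mean(np.diff(phase)) * fs / (2 * np.pi)

fs = 48000                                          # assumed sample rate
t = np.arange(512) / fs                             # 512-sample buffer
tone = np.sin(2 * np.pi * 440.0 * t)                # input: pure 440 Hz sine
f_in = estimate_frequency(tone, fs)                 # close to 440 Hz
shifted = np.sin(2 * np.pi * (f_in * 2**(1/12)) * t)  # output shifted one semitone up
\end{lstlisting}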
Since most of this method has not yet been implemented on the Blackfin, it is still unclear how the result will sound, what the total time delay will be, and how it will behave once it is upgraded to handle harmonics as well.

Throughout the whole process, the synergy between writing the algorithms in Matlab and afterwards implementing them on the Blackfin has been very effective. This synergy was also used efficiently for debugging: by loading output files and checking whether they match the Matlab code, the exact place where the implemented algorithm went wrong could be found and handled, which in the end allowed the frequency to be determined.
\clearpage
\subsection{For Loop} % (fold)
\label{sub:for_loop}

As has been shown in previous chapters, computers can only perform simple actions. They cannot perform an action on all of the elements in our arrays. For example, a computer cannot sum \textbf{all} of the values in an array. What you need to do is think of these tasks so that they can be performed \textbf{for \emph{each}} value in the array. So the sum would become: for each of the numbers in the array, add the number to a running total. When this has been performed for each element in the array, you have the sum of all of the elements.

The for loop is a \nameref{sub:pre_test_loop} that repeats a block of code a number of times. You can think of it as a counting loop, counting from a start value to an end value. The for loop has a \textbf{control variable} that holds the number of the current loop as its value.

\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{./topics/arrays/diagrams/For}
\caption{The for loop can be used to loop through the elements of an array}
\label{fig:for-loop}
\end{figure}

\mynote{
\begin{itemize}
  \item A for loop is an \textbf{action}, a kind of statement you can use to command the computer to perform an action.
  \item The key is to think about processing \textbf{each} element in an array, rather than thinking about \emph{all} elements of an array.
  \item The for loop can then provide the infrastructure to repeat this code \emph{for each} element in the array.
  \item The for loop is designed to work well with the array. The values in the \emph{control variable} can be used to access the individual elements of the array.
  \item When processing the elements of an array, you have it loop from the lowest index value (0) to the highest index value (n - 1).
\end{itemize}
}

% subsection for_loop (end)
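For example, the summing task described above can be sketched as follows (shown here in Python for brevity; the array name and values are arbitrary, and the same pattern carries over to other languages):
\begin{lstlisting}
# Sum the values in an array by processing each element in turn
data = [3, 1, 4, 1, 5]
total = 0
for i in range(len(data)):    # the control variable i counts from 0 up to n - 1
    total = total + data[i]   # add the i-th element to the running total
# total is now 14
\end{lstlisting}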
\chapter{\label{chapter2} Designing the Language} The grammar design has been a complex, delicate and continuous task of the project, since it represents the true definition of a language in formal terms and it revealed to be of crucial importance for all the subsequent phases. Lots of on the fly adjustments and corrections leaded to the definition reported and discussed in the following section. The Coco/R software -- that turned out to be a quite powerful tool -- was used to generate the tokenizer, the parser and the type checker \cite{cocor}. \section{BNF Grammar} The whole grammar is described by means of the Backus-Naur Form (BNF, \cite{bnf}), despite the fact that the Coco/R metasyntax is much closer to the Extended Backus-Naur Form (EBNF, \cite{ebnf}), since the authors considered it less readable. We first introduce the used tokens, which are the following ones:\\ \begin{lstlisting} [caption=The \fwap tokens.] TOKENS ident = letter {letter | digit}. url = ap "http://" {UrlInLine} ap. number = digit {digit}. string = '"' {AnyButDoubleQuote | "\\\""} '"'. \end{lstlisting} Then, we provide all the needed productions. Even though the grammar is not LL(1), the Coco/R parser generator was able to handle LL(k) situations. \begin{grammar} <Fun> ::= { <ProcDecl> } <ProcDecl> ::= 'fun' <Ident> '(' <FormalsDeclList> ')' <FRType> '\{' <Bolck> '\}' \alt 'fun' 'main' '(' ')' '\{' <Bolck> '\}' <AProcDecl> ::= 'fun' '(' <FormalsDeclList> ')' <FRType> '\{' <Bolck> '\}' <Block> ::= <VarDecl> <Block> | <Stat> <Block> | $\varepsilon$ <FormalsDeclList> ::= <FormalsDeclTail> \alt <Ident> <Type> <FormalsDeclTail> <FormalsDeclTail> ::= ',' <Ident> <Type> | $\varepsilon$ <Stat> ::= <Ident> '=' 'async' '\{' 'return' <Ident> '(' <ActualSyncList> ')' '\}' ';' \alt <Ident> '=' 'dasync' '\{' <Ident> ',' 'return' <Ident> '(' <ActualSyncList> ')' '\}'';' \alt <Ident> '=' 'dasync' '\{' <URL> ',' 'return' <Ident> '(' <ActualSyncList> ')' '\}'';' \alt <Ident> '=' <CompleteExpr> ';' \alt <Ident> '=' <AProcDecl> \alt <Ident> '=' 'readln' '(' ')'';' \alt 'if' <CompleteExpr> '\{' <Bolck> '\}' <Else> \alt 'while' <CompleteExpr> '\{' <Bolck> '\}' \alt 'println' '(' <CompleteExpr> ')' ';' \alt 'println' '(' <String> ')' ';' \alt <Return>. 
<Else> :: = 'else' '\{' <Bolck> '\}' | $\varepsilon$ <Return> ::= 'return' <CompleteExpr> \alt 'return' <AProcDecl> <VarDecl> ::= <VarDeclList> ';' 'var' <Ident> <Type> '=' 'readln' '('')'';' \alt 'var' <Ident> <Type> '=' <CompleteExpr> ';' \alt 'var' <Ident> <Type> '=' <AProcDecl> ';' \alt 'var' <Ident> <Type> '=' <URL> ';' \alt 'var' <Ident> <Type> '=' 'async' '\{' 'return' <Ident> '(' <ActualSyncList> ')''\}' ';' \alt 'var' <Ident> <Type> '=' 'dasync' '\{'<Ident> ',' 'return' <Ident> '(' <ActualSyncList> ')' '\}' ';' \alt 'var' <Ident> <Type> '=' 'dasync' '\{'<URL> ',' 'return' <Ident> '(' <ActualSyncList> ')' '\}' ';' <VarDeclList> ::= 'var' <Ident> <VarDeclTail> <VarDeclTail> ::= <Type> | ',' <Ident> <VarDeclTail> | $\varepsilon$ <ActualSyncList> ::= <ActualSyncTail> \alt <CompleteExpr> <ActualSyncTail> <ActualSyncTail> ::= ',' <CompleteExpr> | $\varepsilon$ <CompleteExpr> ::= <Expr> <CompleteExprTail> <CompleteExprTail> ::= <BoolOp> <Expr> <CompleteExprTail> | $\varepsilon$ <Expr> ::= <SimpExpr> <ExprTail> <ExprTail> ::= <RelOp> <SimpExpr> <ExprTail> | $\varepsilon$ <SimpExpr> ::= <Term> <SimpExprTail> <SimpExprTail> ::= <AddOp> <Term> <SimpExprTail> | $\varepsilon$ <Term> ::= <Factor> <TermTail> <TermTail> ::= <MulOp> <Factor> <TermTail> | $\varepsilon$ <Factor> ::= <Ident> \alt <Ident> '(' <ActualsList> ')' \alt 'number' \alt '-'<Factor> \alt 'true' \alt 'false' \alt '(' <CompleteExpr> ')' <ActualsList> ::= <ActualsListTail> \alt <CompleteExpr> <ActualsListTail> \alt <AProcDecl> <ActualsListTail> <ActualsListTail> ::= ',' <CompleteExpr> <ActualsListTail> \alt ',' <AProcDecl> <ActualsListTail> \alt $\varepsilon$ <FRType> ::= 'fun' '(' <TypeList> ')' <FRType> \alt 'int' \alt 'bool' <TypeList> ::= <TypeListTail> \alt <Type> <TypeListTail> <TypeListTail> ::= ',' <Type> <TypeListTail> | $\varepsilon$ <Type> ::= 'fun' | 'int' | 'bool' | 'url' <AddOp> ::= '+' | '-' <RelOp> ::= '<' | '>' | '==' | '!=' | '$\leq$' | '$\geq$' <BoolOp> ::= '$\& \&$ ' | '$\|$ ' <MulOp> ::= '*' | '/' <Ident> ::= ident <URL> ::= url \end{grammar} The Coco/R input, provided with all the semantic annotations, is contained within the file \textit{GramWithSemantics.ATG}. \section{\label{typecheck}Symbol Table and Type Checking} The \texttt{SymTable.cs} class represents the \fwap symbol table which - being the language scope static by design choice - allows both type and environment checking. In fact, \texttt{SymTable.cs} can register the name of any variable and declared function with all their type information (i.e. also the return type for the functions). In order to manage those associations, the \texttt{Obj} class has been coded; an \texttt{Obj.cs} instance can be \begin{itemize} \item a variable \item a declared function \item a scope \end{itemize} distinguished by the \texttt{kind} field. All possible kinds are listed below. \begin{lstlisting}[caption=Labels for \texttt{Node}'s.] public enum Kinds {var,fundec,scope} \end{lstlisting} \texttt{Obj.cs}'s are then inserted as terminal within the Abstract Syntax Tree (AST, see Chapter\ref{chapter3}) that allows the interpreter and the compiler to access the type information of each variable. Type checking is executed while parsing the \fwap code, accordingly to the semantics rules (i.e. pieces of code) defined within the \textit{GramWithSemantics.ATG} file. In particular \begin{itemize} \item the variable declarations/assignments (e.g. \texttt{var x bool = 35;} is not valid) \item the formal vs. 
actual parameters for a function call of a declared procedure \item the return type of declared functions \end{itemize} are statically checked. On the other hand, because of the language syntax, it was not possible to type-check either the return type of anonymous functions or the coherence of the actual parameters with respect to the formal ones, once the function itself has been assigned to a \texttt{fun} variable. In addition to that, the parser is able to check whether all the program flows within a function end with a \texttt{return} statement, according to the language syntax. Finally, the \texttt{async\{\}} and \texttt{dasync\{\}} primitives are guaranteed to respect the following restrictions: \begin{enumerate} \item they must contain one and only one function call of a declared function, since calling anonymous functions may lead to race conditions among threads \item the called function must be \texttt{println()}- and \texttt{readln()}-free, for obvious reasons \item the called function must not return a function, for the same reasons as in (1). \end{enumerate} No check is performed on whether the function invoked by a \texttt{dasync\{\}} contains other function calls. Thus, it is highly recommended to ensure that the function is self-contained before inserting it in a \texttt{dasync\{\}} block.
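To make the checks above concrete, consider the following short snippet -- an illustrative example of ours, not taken from the project sources -- in which a declared function is called through the \texttt{async\{\}} primitive, in accordance with the restrictions listed above; a declaration such as \texttt{var x bool = 35;} inside \texttt{main} would instead be rejected by the type checker.
\begin{lstlisting}[caption=An illustrative \fwap fragment.]
fun add(x int, y int) int {
    return x + y
}

fun main() {
    var a int = 0;
    a = async { return add(1, 2) };
    println(a);
}
\end{lstlisting}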
{ "alphanum_fraction": 0.6465053763, "avg_line_length": 44.5508982036, "ext": "tex", "hexsha": "fe6e157be85789ffb1d19840acc446067a858377", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6bdfbedfa0dc8fec7e25b81665624c6aedc93e3d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "MCSN-project2014/APproject", "max_forks_repo_path": "docs/chapters/design.tex", "max_issues_count": 25, "max_issues_repo_head_hexsha": "6bdfbedfa0dc8fec7e25b81665624c6aedc93e3d", "max_issues_repo_issues_event_max_datetime": "2015-01-14T15:11:28.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-01T18:07:39.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "MCSN-project2014/APproject", "max_issues_repo_path": "docs/chapters/design.tex", "max_line_length": 468, "max_stars_count": 1, "max_stars_repo_head_hexsha": "6bdfbedfa0dc8fec7e25b81665624c6aedc93e3d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "MCSN-project2014/APproject", "max_stars_repo_path": "docs/chapters/design.tex", "max_stars_repo_stars_event_max_datetime": "2015-01-06T21:30:55.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-06T21:30:55.000Z", "num_tokens": 2205, "size": 7440 }
% \pagebreak \subsection{Experimental setups} Two setups were adopted for the experiments. The main differences between the setups are the stopping criterion and the batch size. In particular, \textit{Setup B} is used when BERT is involved since \textit{Setup A} would be too expensive. \paragraph{Setup A} \begin{itemize} \item \textbf{number of epochs:} 100 for FIGER, 75 for BBN\footnote{BBN is smaller than FIGER and converges faster} \item \textbf{number of examples per epoch:} 10,240 (20 batches\footnote{using $n$ batches per epoch means performing $n$ backward operations per epoch} of size 512) \item \textbf{data shuffle:} each time the whole dataset has been seen \item \textbf{optimizer:} Adam with fixed learning rate set to 0.0005 \item \textbf{inference:} threshold = 0.5 \end{itemize} \paragraph{Setup B} \begin{itemize} \item \textbf{number of epochs:} undefined; determined by an early stopping strategy on the \textit{dev loss} with \textit{patience = 5} \item \textbf{number of examples per epoch:} 10,240 (160 batches of size 64) \item \textbf{data shuffle:} every time the whole dataset has been seen \item \textbf{optimizer:} Adam with fixed learning rate set to 0.0005 \item \textbf{inference:} threshold = 0.5 \end{itemize}
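For concreteness, the following sketch -- our own illustration, with made-up function and variable names -- shows how the optimizer and the early stopping strategy of \textit{Setup B} could be wired together, assuming a PyTorch model, a loss function, and train/dev data loaders:
\begin{verbatim}
import torch

def train_setup_b(model, train_loader, dev_loader, loss_fn,
                  lr=0.0005, patience=5):
    # Setup B: Adam with a fixed learning rate and early stopping on the
    # dev loss with patience 5 (the epoch length, e.g. 160 batches of
    # size 64, is determined by the data loader).
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_dev_loss, epochs_without_improvement = float("inf"), 0
    while epochs_without_improvement < patience:
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()          # one backward operation per batch
            optimizer.step()
        model.eval()
        with torch.no_grad():
            dev_loss = sum(loss_fn(model(x), y).item()
                           for x, y in dev_loader)
        if dev_loss < best_dev_loss:
            best_dev_loss, epochs_without_improvement = dev_loss, 0
        else:
            epochs_without_improvement += 1
    return model
\end{verbatim}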
{ "alphanum_fraction": 0.7436293436, "avg_line_length": 58.8636363636, "ext": "tex", "hexsha": "c30d5daf08d51afc006df305bf04e5cfce9d844e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6211ff86af247aace530912c4eca9019365d606e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "christianbernasconi96/MasterThesis", "max_forks_repo_path": "project/experiments/experimental_setups.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6211ff86af247aace530912c4eca9019365d606e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "christianbernasconi96/MasterThesis", "max_issues_repo_path": "project/experiments/experimental_setups.tex", "max_line_length": 240, "max_stars_count": null, "max_stars_repo_head_hexsha": "6211ff86af247aace530912c4eca9019365d606e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "christianbernasconi96/MasterThesis", "max_stars_repo_path": "project/experiments/experimental_setups.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 376, "size": 1295 }
\appendix \chapter{Appendix} \section{Sum of reciprocal vertices depending only on $v_1$} \label{appx:sum_reciprocal_vertices} One condition deduced from theorem~\ref{theo:1} is the product condition~\ref{eq:condition_max}, which specifies the validity of the cycle-alpha's upper limit. This condition requires the sum $\frac{1}{kv_1}+\frac{1}{kv_2}+\frac{1}{kv_3}+\ldots$ to be bounded. In order to formulate this sum independently of the successive vertices $v_2,v_3,\ldots$, we substitute these as follows: \begin{flalign} v_1&=v_1\notag\\ v_2&=\frac{kv_1+1}{2^{\alpha_1}}\notag\\ v_3&=\frac{k^2v_1+k+2^{\alpha_1}}{2^{\alpha_1+\alpha_2}}\notag\\ v_4&=\frac{k^3v_1+k^2+k\cdot2^{\alpha_1}+2^{\alpha_1+\alpha_2}}{2^{\alpha_1+\alpha_2+\alpha_3}}\label{eq:sum_v_4}\\ \vdots\notag\\ v_{n+1}&=\frac{k^nv_1+\sum_{j=1}^{n}k^{j-1}2^{\alpha_1+\ldots+\alpha_n-\sum_{l>n-j}\alpha_l}}{2^{\alpha_1+\ldots+\alpha_n}}\label{eq:sum_v_n_plus_1} \end{flalign} \par\medskip The sum of the reciprocal vertices can be expressed as a term that depends on $v_1$ and on the number of contracted edges, i.e.\ the number of divisions by two, between two successive vertices, $\alpha_1,\alpha_2,\alpha_3,\ldots$: \begin{equation*} \sum_{i=1}^{n+1}\frac{1}{kv_i}=\frac{1}{k}\left(\frac{1}{v_1}+\sum_{i=1}^{n}\frac{1}{v_{i+1}}\right)=\frac{1}{k}\left(\frac{1}{v_1}+\sum_{i=1}^{n}\frac{2^{\alpha_1+\ldots+\alpha_i}}{k^iv_1+\sum_{j=1}^{i}k^{j-1}2^{\alpha_1+\ldots+\alpha_n-\sum_{l>i-j}\alpha_l}}\right) \end{equation*} \section{The product formula depending only on $v_1$} \label{appx:product_formula_depending_v1} In a way similar to the deduction of the sum of reciprocal vertices depending only on $v_1$ in \ref{appx:sum_reciprocal_vertices}, we derive the product formula depending only on $v_1$: \begin{flalign} \prod_{i=1}^{n+1}\left(1+\frac{1}{kv_i}\right)&=1+\frac{2^{\alpha_1+\ldots+\alpha_n}+k\cdot2^{\alpha_1+\ldots+\alpha_{n-1}}+\ldots+k^{n-1}\cdot2^{\alpha_1}+k^n}{k^{n+1}v_1}\label{eq:prod_sum_v_n_plus_1}\\ &=1+\frac{2^{\alpha_1+\ldots+\alpha_n}+k\cdot\sum_{j=1}^{i}k^{j-1}2^{\alpha_1+\ldots+\alpha_n-\sum_{l>i-j}\alpha_l}}{k^{n+1}v_1}\label{eq:prod_sum_v_n_plus_1_inserted}\\ &=1+\frac{2^{\alpha_1+\ldots+\alpha_n}+k\cdot\left(v_{n+1}\cdot2^{\alpha_1+\ldots+\alpha_n}-k^nv_1\right)}{k^{n+1}v_1}\notag\\ &=\frac{2^{\alpha_1+\ldots+\alpha_n}\left(1+kv_{n+1}\right)}{k^{n+1}v_1}\label{eq:prod_sum_v_n_plus_1_simplified} \end{flalign} We inserted the sum used in equation~\ref{eq:sum_v_n_plus_1} into the above-given equation~\ref{eq:prod_sum_v_n_plus_1} and then obtained equation~\ref{eq:prod_sum_v_n_plus_1_inserted}. Let us divide this product by the last factor and consider the product in the condition for cycle-alpha's upper limit, which iterates to $n$ instead of $n+1$: \begin{equation} \label{eq:prod_sum_v_n_simplified} \prod_{i=1}^{n}\left(1+\frac{1}{kv_i}\right)=\frac{\prod_{i=1}^{n+1}\left(1+\frac{1}{kv_i}\right)}{\frac{kv_{n+1}+1}{kv_{n+1}}}=\frac{2^{\alpha_1+\ldots+\alpha_n}\cancel{\left(1+kv_{n+1}\right)}kv_{n+1}}{k^{n+1}v_1\cancel{\left(kv_{n+1}+1\right)}}=\frac{2^{\alpha_1+\ldots+\alpha_n}v_{n+1}}{k^nv_1} \end{equation} The above equation~\ref{eq:prod_sum_v_n_simplified} for the product in the condition for cycle-alpha's upper limit is obtained by replacing the numerator with equation~\ref{eq:prod_sum_v_n_plus_1_simplified}. 
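As a quick numerical sanity check -- this worked example is ours and is not part of the original derivation -- take $k=3$ and $v_1=3$: then $v_2=\frac{3\cdot3+1}{2^{1}}=5$ (i.e.\ $\alpha_1=1$) and $v_3=\frac{3\cdot5+1}{2^{4}}=1$ (i.e.\ $\alpha_2=4$). For $n=2$, both sides of equation~\ref{eq:prod_sum_v_n_simplified} agree:
\begin{equation*}
\prod_{i=1}^{2}\left(1+\frac{1}{3v_i}\right)=\frac{10}{9}\cdot\frac{16}{15}=\frac{32}{27}=\frac{2^{\alpha_1+\alpha_2}\,v_3}{3^2\,v_1}=\frac{2^{5}\cdot1}{9\cdot3}
\end{equation*}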
\section{Simplifying the product for $k=3$} \label{appx:product_simplification_k3} Below we show the simplification of the product in the condition for alpha's upper limit, which was performed in equation~\ref{eq:product_simplification_k3}: \[ \prod_{i=1}^{n}\frac{3^i(v_1+1)-2^i}{3^i(v_1+1)-3*2^{i-1}} =\frac{1}{v_1}-\frac{1}{v_1}\left(\frac{2}{3}\right)^n+1 \] In fact, this product is a telescoping product. We factor out $\frac{1}{3^n}$, then shift the index in the product of the denominator by one to start with $i=0$, and use the product's telescopic property to cancel equal factors in numerator and denominator: \begin{flalign*} &\prod_{i=1}^{n}\frac{3^i(v_1+1)-2^i}{3^i(v_1+1)-3*2^{i-1}} =\frac{1}{3^n}\prod_{i=1}^{n}\frac{3^i(v_1+1)-2^i}{3^{i-1}(v_1+1)-2^{i-1}} =\frac{1}{3^n}\frac{\prod_{i=1}^{n}\left(3^i(v_1+1)-2^i\right)}{\prod_{i=1}^{n}\left(3^{i-1}(v_1+1)-2^{i-1}\right)}\\ =&\frac{1}{3^n}\frac{\prod_{i=1}^{n}\left(3^i(v_1+1)-2^i\right)}{\prod_{i=0}^{n-1}\left(3^i(v_1+1)-2^i\right)} =\frac{1}{3^n}\frac{3^n(v_1+1)-2^n}{(v_1+1)-1} =\frac{3^nv_1+3^n-2^n}{3^nv_1} =\frac{1}{v_1}-\frac{1}{v_1}\left(\frac{2}{3}\right)^n+1 \end{flalign*} \section{Proving the product simplification for $k=3$ inductively} \label{appx:proof_product_simplification_k3} Using induction, we prove the simplification below, which was made in equation~\ref{eq:product_simplification_k3}: \[ \prod_{i=1}^{n}\frac{3^i(v_1+1)-2^i}{3^i(v_1+1)-3*2^{i-1}} =\frac{1}{v_1}-\frac{1}{v_1}\left(\frac{2}{3}\right)^n+1 \] The base case $n=1$ is readily comprehensible and obviously correct: \[ \prod_{i=1}^{1}\frac{3^i(v_1+1)-2^i}{3^i(v_1+1)-3*2^{i-1}} =\frac{3(v_1+1)-2}{3(v_1+1)-3} =\frac{1}{3v_1}+1 =\frac{1}{v_1}-\frac{1}{v_1}\left(\frac{2}{3}\right)+1 \] The induction step is shown below, and here too we arrive at a true statement: \begin{flalign*} \prod_{i=1}^{n+1}\frac{3^i(v_1+1)-2^i}{3^i(v_1+1)-3*2^{i-1}}&=\frac{3^{n+1}(v_1+1)-2^{n+1}}{3^{n+1}(v_1+1)-3*2^n}\prod_{i=1}^{n}\frac{3^i(v_1+1)-2^i}{3^i(v_1+1)-3*2^{i-1}}\\ &=\frac{3^{n+1}(v_1+1)-2^{n+1}}{3^{n+1}(v_1+1)-3*2^n}\left(\frac{1}{v_1}-\frac{1}{v_1}\left(\frac{2}{3}\right)^n+1\right)\\ &=\frac{3^{n+1}(v_1+1)-2^{n+1}}{3^{n+1}(v_1+1)-3*2^n}\cdot\frac{3^n-2^n+3^nv_1}{3^nv_1}\\ &=\frac{3^{n+1}(v_1+1)-2^{n+1}}{\cancel{3^{n+1}(v_1+1)-3*2^n}}\cdot\frac{\cancel{3*(3^n-2^n+3^nv_1)}}{3*3^nv_1}\\ &=\frac{1}{v_1}-\frac{1}{v_1}\left(\frac{2}{3}\right)^{n+1}+1 \end{flalign*} \section{The condition for an Engel expansion's limited growth} \label{appx:condition_limited_growth} The steps for transforming the inequality~\ref{eq:condition_limited_growth} into the condition for limiting an Engel expansion's growth (see section~\ref{sec:condition_limited_growth}) are given below: \begin{flalign*} 0&<1+v_1-\frac{3^nv_1}{2^{m+n}}-\frac{3^{n-1}}{2^{m+n}}-\frac{3^{n-1}2^{m+1}}{2^{m+n}}\\ 0&<1+v_1-\frac{3^{n-1}}{2^{m+n}}\left(3v_1+1\right)-\frac{3^{n-1}}{2^{n-1}}\\ 0&<2^{n-1}+2^{n-1}v_1-\frac{3^{n-1}}{2^{m+1}}\left(3v_1+1\right)-3^{n-1}\\ 0&<3\cdot2^{n-1}+3\cdot2^{n-1}v_1-3\cdot\frac{3^{n-1}}{2^{m+1}}\left(3v_1+1\right)-3^n-2\cdot2^{n-1}+2\cdot2^{n-1}\\ 0&<2^{n-1}\left(3v_1+1\right)-3\cdot\frac{3^{n-1}}{2^{m+1}}\left(3v_1+1\right)-3^n+2^n\\ 0&<\left(3v_1+1\right)\left(2^{n-1}-3\cdot\frac{3^{n-1}}{2^{m+1}}\right)-3^n+2^n\\ 3^n-2^n&<\left(3v_1+1\right)\left(2^{n-1}-3\cdot\frac{3^{n-1}}{2^{m+1}}\right) \end{flalign*} Now we reshape the inequality further so that $v_1$ is isolated on the right side of the inequality: {\setlength{\jot}{1.2em} \begin{flalign*} 
\frac{3^n-2^n}{2^{n-1}-3\cdot\frac{3^{n-1}}{2^{m+1}}}-1&<3v_1\\ \frac{3^n2^{m+1}-2^n2^{m+1}}{2^{m+1}2^{n-1}-3^n}-1&<3v_1\\ \frac{3^n2^{m+1}-2\cdot3^n-2^n2^{m+1}+2\cdot3^n}{2^{m+n}-3^n}-1&<3v_1\\ \frac{3\cdot3^{n-1}2^{m+1}-2\cdot3\cdot3^{n-1}-2\cdot2^{n-1}2^{m+1}+2\cdot3^n}{3\cdot\left(2^{m+n}-3^n\right)}-\frac{1}{3}&<v_1\\ \frac{\cancel3\cdot3^{n-1}2^{m+1}-2\cdot\cancel3\cdot3^{n-1}}{\cancel3\cdot\left(2^{m+n}-3^n\right)}-\frac{2\cdot\cancel{\left(2^{m+n}-3^n\right)}}{3\cdot\cancel{\left(2^{m+n}-3^n\right)}}-\frac{1}{3}&<v_1\\ \frac{3^{n-1}2^{m+1}-2\cdot3^{n-1}}{2^{m+n}-3^n}-1&<v_1 \end{flalign*}} %\section{An alternative proof for alpha's upper limit for $H_{C,1}$} %\label{appx:proof_k1} %We demonstrate that condition~\ref{eq:condition_max} is true for $k=1$. What makes this case so special and therefore so manageable is that the equation in theorem~\ref{theo:2} constantly yields $2^1$, whatever value we use for $n$. By setting $k=1$, the condition becomes reduced to: %\begin{equation} %\label{eq:condition_k1} %\prod_{i=1}^{n}\frac{v_i+1}{v_i}<2^2 %\end{equation} %One can see instantly that the condition~\ref{eq:condition_k1} above is met for $n=v_1=1$. This trivial cycle only includes the sole vertex $v_1=1$. The fact which causes a worst case sequence $v_n,v_{n-1},\ldots,v_2,v_1$ describing a path from $v_n$ to $v_1$ is precisely that between two successive nodes a division by two was only made once: %\begin{equation} %\label{eq:worst_case_1} %\arraycolsep=1.4pt %\begin{array}{llll} %v_n&=2^{n-1}&\cdot\ (v_1-1)+1&\\ %v_{n-1}&=2^{n-2}&\cdot\ (v_1-1)+1&\\ %\vdots\\ %v_2&=2^1&\cdot\ (v_1-1)+1&=2v_1-1\\ %v_1&=2^0&\cdot\ (v_1-1)+1&=v_1 %\end{array} %\end{equation} %One example for such a sequence is $v_4=17,v_3=9,v_2=5,v_1=3$. It shall be mentioned that the sequence $v_1,2\cdot v_1-1,4\cdot v_1-3,\ldots$ is an increasing one for any $v_1>1$, which means $v_1<v_2<\ldots<v_{n-1}<v_n$. Why might such a sequence be referred to as worst case? Ultimately, it is because one needs to show that the product stays below the upper limit $2^2=4$. The smaller the values (labels) of the vertices, the larger the product. If we allowed additional divisions by $2$, the sequence would increase more steeply, the vertices' values would be larger and the product would consequently be smaller. %Setting the worst case sequence $v_n=2^{n-1}(v_1-1)+1$ into the product \ref{eq:condition_k1} leads to the following product: %\begin{equation} %\label{eq:condition_k1_v1} %\prod_{i=1}^{n}\frac{2^{i-1}(v_1-1)+2}{2^{i-1}(v_1-1)+1} %\end{equation} %As previously mentioned, we have to consider the worst case scenario, which results in the maximum product. We provoke the worst case if a vertex's value is as small as possible, which we achieve with the sequence $1,3,5,9,17,\ldots$ that is composed from two partial sequences, namely the one-element sequence $v_1=1$ and the sequence defined by \ref{eq:worst_case_1} starting with $v_1=3$. As product we then receive the composed product given below which must remain below the limit 4: %\[ %\prod_{i=1}^{1}\frac{v_i+1}{v_i}\prod_{i=1}^{n}\frac{2^{i-1}(v_1-1)+2}{2^%{i-1}(v_1-1)+1}=2\prod_{i=1}^{n}\frac{2^i+2}{2^i+1}<4 %\] %The first sub-product refers to \ref{eq:condition_k1} and comprises only a single iteration. We insert the value $v_1=1$ yielding a final result of $2$. The second sub-product is sourced from \ref{eq:condition_k1_v1} and has been simplified by setting $v_1=3$. 
We further facilitate this second sub-product as shown below: %\begin{equation*} % \prod_{i=1}^{n}\frac{2^i+2}{2^i+1}=2^n\prod_{i=1}^{n}\frac{2^{i-1}+1}{2^i+1}=2^n\frac{(2^0+1)\cancel{(2^1+1)}\cancel{(2^2+1)}\cdots\cancel{(2^{n-1}+1)}}{\cancel{(2^1+1)}\cancel{(2^2+1)}\cdots\cancel{(2^{n-1}+1)}(2^n+1)}=\frac{2^{n+1}}{2^n+1} %\end{equation*} %The upper limit of this second sub-product is $2$ and consequently the entire product composed by both sub-products therefore converges from below towards $4$, which leads to our condition~\ref{eq:condition_k1} being fulfilled even in the worst case: %\[ %\prod_{i=1}^{\infty}\frac{2^i+2}{2^i+1}=\lim_{n\to\infty}\frac{2^{n+1}}{2^n+1}=2 %\] \section{Further worst-case studies} \label{sec:worstcase_k3} Regarding worst-case scenarios, a distinction must be made between two basic cases: \begin{itemize} \item The vertex $v_{n+1}$ becomes a maximum. We dealt with this kind of worst case in chapter~\ref{ch:maximizing_target_node} and used it for proving cycle-alpha's upper limit in $H_{C,1}$ in section~\ref{sec:alphas_upper_limit_k_1}. \item The product in condition~\ref{eq:condition_max}, and consequently the sum of reciprocal vertices formulated in \ref{appx:sum_reciprocal_vertices}, becomes a maximum. \end{itemize} Trying to find a worst case that maximizes the product in condition~\ref{eq:condition_max} means searching for a sequence of odd numbers that rises as slowly as possible. One could try the ascending sequence of odd integers $v_i=2i-1$ (beginning at $v_1=1$), but will find that in this case the product does not converge to a limit value. This sequence (beginning at $v_1=1$) allows us to transform the product contained in condition~\ref{eq:condition_max} into a form whose limit can be analyzed using the Pochhammer symbol (sometimes referred to as the \textit{rising factorial} or \textit{shifted factorial}), which is denoted by $(x)_n$ and defined as follows \cite{Ref_Zwillinger_Kokoska}, \cite[p.~679]{Ref_Brychkov} and \cite[p.~1005]{Ref_Trott}: \[ (x)_n=x(x+1)(x+2)\cdots(x+n-1)=\prod_{i=0}^{n-1}(x+i)=\prod_{i=1}^{n}(x+i-1)=\frac{\Gamma(x+n)}{\Gamma(x)} \] Inserting $v_i=2i-1$ into the product expressed by condition~\ref{eq:condition_max} and setting $x=\frac{k+1}{2k}$ in the Pochhammer symbol $(x)_n$ interestingly makes it possible to perform the following transformation: \begin{equation} \label{eq:pochhammer} \prod_{i=1}^{n}\left(1+\frac{1}{kv_i}\right) =\frac{\prod_{i=1}^{n}(kv_i+1)}{\prod_{i=1}^{n}kv_i} =\frac{\prod_{i=1}^{n}\left(k(2i-1)+1\right)}{k^n\prod_{i=1}^{n}(2i-1)} =\frac{2^{2n}n!}{(2n)!}\cdot\frac{\Gamma\left(\frac{k+1+2kn}{2k}\right)}{\Gamma\left(\frac{k+1}{2k}\right)} \end{equation} \begin{example} A simple example that is easy to recalculate is obtained by choosing $k=3$ and $n=4$: \[ \left(1+\frac{1}{3\cdot1}\right)\left(1+\frac{1}{3\cdot3}\right)\left(1+\frac{1}{3\cdot5}\right)\left(1+\frac{1}{3\cdot7}\right)=\frac{2^8\cdot4!}{8!}\cdot\frac{\Gamma(\frac{14}{3})}{\Gamma(\frac{4}{6})}\approx1.6555 \] \end{example} The product in the numerator of equation~\ref{eq:pochhammer} is transformed into a form that allows us to use the Pochhammer symbol: \[\prod_{i=1}^{n}\left((2i-1)k+1\right)=2^nk^n\prod_{i=1}^{n}\frac{(2i-1)k+1}{2k}=2^nk^n\prod_{i=1}^{n}\frac{k+1+2ki-2k}{2k}=2^nk^n\prod_{i=1}^{n}\left(\frac{k+1}{2k}+i-1\right)\] This product can now be written as $2^nk^n(x)_n$, where $x=\frac{k+1}{2k}$: \[\prod_{i=1}^{n}\left((2i-1)k+1\right)=2^nk^n\frac{\Gamma\left(\frac{k+1+2kn}{2k}\right)}{\Gamma\left(\frac{k+1}{2k}\right)}\] We recall 
the basic fact that the product of even integers is given by $\prod_{i=1}^{n}2i=2^n\cdot n!$ and the product of odd integers is $\prod_{i=1}^{n}\left(2i-1\right)=1\cdot3\cdot5\cdot7\ldots=\frac{(2n)!}{2^n\cdot n!}$. Thus we can transform the product in the denominator of equation~\ref{eq:pochhammer} as follows: \[\prod_{i=1}^{n}kv_i=k^n\prod_{i=1}^{n}v_i=k^n\prod_{i=1}^{n}(2i-1)=k^n\frac{(2n)!}{2^nn!}\] \par\medskip This product is divergent; it does not converge to a limiting value. Fortunately, the ascending sequence of natural odd numbers overshoots the worst-case scenario. According to this scenario, we would not have contracted a single edge between two successive nodes.
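For completeness -- this short justification is ours -- the divergence of the product in equation~\ref{eq:pochhammer} for $v_i=2i-1$ can be seen by comparison with the harmonic series: since $\ln(1+x)\geq\frac{x}{2}$ for $0<x\leq1$, we have
\begin{equation*}
\ln\prod_{i=1}^{n}\left(1+\frac{1}{k(2i-1)}\right)=\sum_{i=1}^{n}\ln\left(1+\frac{1}{k(2i-1)}\right)\geq\frac{1}{2k}\sum_{i=1}^{n}\frac{1}{2i-1}\xrightarrow{n\to\infty}\infty
\end{equation*}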
{ "alphanum_fraction": 0.6890152034, "avg_line_length": 77.670212766, "ext": "tex", "hexsha": "08c4d60f841a7103f7d7f166f37c8c823e984beb", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-05-06T20:44:07.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-06T20:44:07.000Z", "max_forks_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "Sultanow/collatz", "max_forks_repo_path": "01 Graph Theory/TeX/v4.1/chapter/07_appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "Sultanow/collatz", "max_issues_repo_path": "01 Graph Theory/TeX/v4.1/chapter/07_appendix.tex", "max_line_length": 751, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "Sultanow/collatz", "max_stars_repo_path": "01 Graph Theory/TeX/v4.1/chapter/07_appendix.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-01T15:54:55.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-01T15:12:10.000Z", "num_tokens": 6007, "size": 14602 }
\documentclass[t,usenames,dvipsnames]{beamer} \usetheme{Copenhagen} \setbeamertemplate{headline}{} % remove toc from headers \beamertemplatenavigationsymbolsempty \usepackage{amsmath, tikz, xcolor, array, graphicx} \usetikzlibrary{arrows.meta} \everymath{\displaystyle} \title{Polar Form of Conics} \author{} \date{} \AtBeginSection[] { \begin{frame} \frametitle{Objectives} \tableofcontents[currentsection] \end{frame} } \begin{document} \begin{frame} \titlepage \end{frame} \section{Analyze the graphs of conic sections in polar form.} \begin{frame}{Intro} Given a fixed line $L$, a point $F$ not on $L$, and a positive number $e$, a \alert{conic section} is the set of all points $P$ such that \vspace{10pt} \[ \frac{\text{the distance from $P$ to $F$}}{\text{the distance from $P$ to $L$}} = e \] \pause \newline\\ The line $L$ is called the \alert{directrix} of the conic section, the point $F$ is called a \alert{focus} of the conic section, and the constant $e$ is called the \alert{eccentricity} of the conic section. \end{frame} \begin{frame}{Eccentricity, Focus, and Directrix Line} The conic section has eccentricity $e$, a focus $F$ at the origin and directrix line $x = -d$: \newline\\ \begin{center} \begin{tikzpicture} \draw [->, >=stealth] (-3,0) -- (4,0) node [below, right] {$x$}; \draw [->, >=stealth] (0,-1) -- (0,4) node [above, right] {$y$}; \coordinate (P) at (50:3.5); \coordinate (O) at (0,0); \draw [fill=black] (P) circle (1pt) node [right] {$P(r, \theta)$}; \node at (O) [below right] {$O=F$}; \draw [dashed] (O) -- (P) node [midway, above left] {$r$}; \draw [<->, >=stealth, color=red] (-2.25,4) -- (-2.25,-0.5) node [below] {$x=-d$}; \draw [<->, >=stealth, dashed, color=blue] (P) -- (0,2.68) node [midway, above] {$r\cos\theta$}; \draw [<->, >=stealth, dashed, color=red] (-2.25,2.68) -- (0,2.68) node [midway, above] {$d$}; \draw [->, >=stealth] (0:1) arc (0:50:1) node [midway, right] {$\theta$}; \node at (-2.25,3.75) [red,left] {$L$}; \end{tikzpicture} \end{center} \end{frame} \begin{frame}{General Equation} From which we get \[ e = \frac{\text{the distance from $P$ to $F$}}{\text{the distance from $P$ to $L$}} = \frac{r}{d+r\cos\theta} = e \] \pause So that $r = e(d + r\cos\theta)$, and solving for $r$ gives us \begin{align*} \onslide<3->{r &= ed +er\cos\theta} \\[8pt] \onslide<4->{r - er\cos\theta &= ed} \\[8pt] \onslide<5->{r(1-e\cos\theta) &= ed} \\[8pt] \onslide<6->{r &= \frac{ed}{1-e\cos\theta}} \end{align*} \end{frame} \begin{frame}{Example 1} Examine the graphs of each of the following for different values of $d$, but with $e = 1$. 
\newline\\ (a) \quad $r = \frac{ed}{1+e\cos\theta}$ \pause \quad $\longrightarrow \quad r = \frac{d}{1+\cos\theta}$ \newline\\ \pause \begin{itemize} \item Parabola (opens left or right) \pause \newline\\ \item Vertex at $\left(\frac{1}{2}d, 0\right)$ \pause \newline\\ \item $d > 0$ opens left \pause \newline\\ \item $d < 0$ opens right \pause \newline\\ \item $y$-intercepts at $\left(0, \pm d\right)$ \end{itemize} \end{frame} \begin{frame}{Example 1} (b) \quad $r = \frac{ed}{1-e\cos\theta}$ \pause \quad $\longrightarrow \quad r = \frac{d}{1-\cos\theta}$ \newline\\ \pause \begin{itemize} \item Parabola (opens left or right) \pause \newline\\ \item Vertex at $\left(-\frac{1}{2}d, 0\right)$ \pause \newline\\ \item $d > 0$ opens right \pause \newline\\ \item $d < 0$ opens left \pause \newline\\ \item $y$-intercepts at $\left(0, \pm d\right)$ \end{itemize} \end{frame} \begin{frame}{Example 1} (c) \quad $r = \frac{ed}{1+e\sin\theta}$ \pause \quad $\longrightarrow \quad r = \frac{d}{1+\sin\theta}$ \newline\\ \pause \begin{itemize} \item Parabola (opens up or down) \pause \newline\\ \item Vertex at $\left(0, \frac{1}{2}d\right)$ \pause \newline\\ \item $d > 0$ opens down \pause \newline\\ \item $d < 0$ opens up \pause \newline\\ \item $x$-intercepts at $\left(\pm d, 0\right)$ \end{itemize} \end{frame} \begin{frame}{Example 1} (d) \quad $r = \frac{ed}{1-e\sin\theta}$ \pause \quad $\longrightarrow \quad r = \frac{d}{1-\sin\theta}$ \newline\\ \pause \begin{itemize} \item Parabola (opens up or down) \pause \newline\\ \item Vertex at $\left(0, -\frac{1}{2}d\right)$ \pause \newline\\ \item $d > 0$ opens up \pause \newline\\ \item $d < 0$ opens down \pause \newline\\ \item $x$-intercepts at $\left(\pm d, 0\right)$ \end{itemize} \end{frame} \begin{frame}{Follow-up to Example 1} Notice each of the previous graphs in Example 1 were parabolas. This is the case when $e = 1$. The directrix lines were either $x = \pm d$ or $y = \pm d$, and the focal diameter is $2d$. \end{frame} \begin{frame}{Example 2} Examine the graphs of each of the following for different values of $d$, but with $0 < e < 1$ and $e > 1$. \newline\\ (a) \quad $r = \frac{ed}{1+e\cos\theta}$ \newline\\ \pause For $0 < e < 1$: \pause \newline\\ Ellipse (wide) \end{frame} \begin{frame}{Example 2} (a) \quad $r = \frac{ed}{1+e\cos\theta}$ \newline\\ \pause For $e > 1$: \pause \newline\\ Hyperbola (opening left and right) \end{frame} \begin{frame}{Example 2} Examine the graphs of each of the following for different values of $d$, but with $0 < e < 1$ and $e > 1$. \newline\\ (b) \quad $r = \frac{ed}{1-e\cos\theta}$ \newline\\ \pause For $0 < e < 1$: \pause \newline\\ Ellipse (wide) \end{frame} \begin{frame}{Example 2} (b) \quad $r = \frac{ed}{1-e\cos\theta}$ \newline\\ \pause For $e > 1$: \pause \newline\\ Hyperbola (opening left and right) \end{frame} \begin{frame}{Example 2} Examine the graphs of each of the following for different values of $d$, but with $0 < e < 1$ and $e > 1$. \newline\\ (c) \quad $r = \frac{ed}{1+e\sin\theta}$ \newline\\ \pause For $0 < e < 1$: \pause \newline\\ Ellipse (tall) \end{frame} \begin{frame}{Example 2} (c) \quad $r = \frac{ed}{1+e\sin\theta}$ \newline\\ \pause For $e > 1$: \pause \newline\\ Hyperbola (opening up and down) \end{frame} \begin{frame}{Example 2} Examine the graphs of each of the following for different values of $d$, but with $0 < e < 1$ and $e > 1$. 
\newline\\ (d) \quad $r = \frac{ed}{1-e\sin\theta}$ \newline\\ \pause For $0 < e < 1$: \pause \newline\\ Ellipse (tall) \end{frame} \begin{frame}{Example 2} (d) \quad $r = \frac{ed}{1-e\sin\theta}$ \newline\\ \pause For $e > 1$: \pause \newline\\ Hyperbola (opening up and down) \end{frame} \begin{frame}{Properties From Example 2} In the previous example, the graphs in which $0 < e < 1$ were ellipses. \\[18pt] \pause Major axis length is $\frac{2ed}{1-e^2}$ \\[18pt] \pause Minor axis length is $\frac{2ed}{\sqrt{1-e^2}}$. \end{frame} \begin{frame}{Properties From Example 2} If $e > 1$, the graph is a hyperbola. \\[18pt] \pause Transverse axis length $\frac{2ed}{e^2-1}$ \\[18pt] \pause Conjugate axis length $\frac{2ed}{\sqrt{e^2-1}}$. \end{frame} \begin{frame}{Example 3} Identify the conic for each. \newline\\ (a) \quad $r = \frac{4}{1-\sin\theta}$ \newline\\ \pause $e = 1 \longrightarrow$ Parabola (opens up). \newline\\ \pause Vertex: $(0, -2)$ \newline\\ \pause Goes through $(\pm 4, 0)$ \end{frame} \begin{frame}{Example 3} (b) \quad $r = \frac{12}{3-\cos\theta}$ \pause \quad $\longrightarrow \quad r = \frac{4}{1-1/3\cos\theta}$ \\[18pt] \pause $e = \frac{1}{3} \longrightarrow$ Ellipse (wide) \\[15pt] \pause $\frac{1}{3}d = 4 \longrightarrow d = 12$ \\[15pt] \pause Major axis length: $\frac{2(1/3)(12)}{1-(1/3)^2} = 9$ \\[15pt] \pause Minor axis length: $\frac{2(1/3)(12)}{\sqrt{1-(1/3)^2}} = 6\sqrt{2}$ \end{frame} \begin{frame}{Example 3} (c) \quad $r = \frac{6}{1+2\sin\theta}$ \newline\\ \pause $e = 2$: Hyperbola (opens up and down) \newline\\ \pause $2d = 6 \longrightarrow d = 3$ \newline\\ \pause Transverse axis length: $\frac{2(2)(3)}{2^2-1} = 4$ \\[15pt] \pause Conjugate axis length: $\frac{2(2)(3)}{\sqrt{2^2-1}} = 4\sqrt{3}$ \end{frame} \begin{frame}{Polar Form of Rotated Conics} For constants $\ell > 0, \, e \geq 0, \text{ and } \phi,$ the graph of \[ r = \frac{\ell}{1-e\cos(\theta-\phi)} \] is a conic section with eccentricity $e$ and one focus at $(0,0)$. \end{frame} \begin{frame}{Polar Form of Rotated Conics} If $e = 0$, the graph is a circle centered at $(0,0)$ with radius $\ell$. \end{frame} \begin{frame}{Polar Form of Rotated Conics} If $e \neq 0$, the conic has a focus at $(0,0)$ and the directrix contains the point with polar coordinates $(-d,\phi)$ where $d = \frac{\ell}{e}$. \begin{itemize} \item If $0 < e < 1$, graph is an ellipse with major axis length $\frac{2ed}{1-e^2}$ and minor axis length $\frac{2ed}{\sqrt{1-e^2}}$. \\[15pt] \pause \item If $e=1$, graph is a parabola with focal diameter $2d$. \\[15pt] \pause \item If $e > 1$, graph is a hyperbola with transverse axis length $\frac{2ed}{e^2-1}$ and conjugate axis length $\frac{2ed}{\sqrt{e^2-1}}$ \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.6092473118, "avg_line_length": 37.0517928287, "ext": "tex", "hexsha": "58db6b940fc16423bbbd1f4dda1923a49c3a2ed9", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "BryanBain/Trig_BEAMER", "max_forks_repo_path": "Polar_Form_of_Conics(BEAMER).tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "BryanBain/Trig_BEAMER", "max_issues_repo_path": "Polar_Form_of_Conics(BEAMER).tex", "max_line_length": 206, "max_stars_count": null, "max_stars_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "BryanBain/Trig_BEAMER", "max_stars_repo_path": "Polar_Form_of_Conics(BEAMER).tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3470, "size": 9300 }
% Determines the input encoding. \usepackage[% utf8, % latin1 ]{inputenc} % --------------------------------------------------------------------- % Determines the output encoding. \usepackage[T1]{fontenc} % --------------------------------------------------------------------- % Determines language settings. \usepackage[% english % You may change this to 'ngerman' in order to write a % german report. ]{babel} % Provides stretchable tables. \usepackage{tabularx} % Provides image loading. \usepackage{graphicx} % --------------------------------------------------------------------- % Provides the algorithm environment \usepackage[ruled,% linesnumbered]{algorithm2e} % --------------------------------------------------------------------- % Provides simple line spacings. \usepackage{setspace} % --------------------------------------------------------------------- % Provides colors in LaTeX. \usepackage{xcolor} % --------------------------------------------------------------------- % Provides nicer tables than the standard tables. \usepackage{booktabs} \usepackage{float} \usepackage{listings} \usepackage{amsmath} %\usepackage{caption} \usepackage{bytefield} \usepackage{fullpage} \usepackage{enumitem} \usepackage{tikz-timing}[2009/05/15] %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% % %%%%% Custom Macros % %%%%% % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Create an inline command for shell commands. \newcommand{\shell}[1]{\texttt{#1}} % Create an environment for a shell commands. \newenvironment{shellenv}% {\VerbatimEnvironment% \begin{Sbox}\begin{minipage}{0.97\textwidth}\begin{Verbatim}% }% {\end{Verbatim}\end{minipage}\end{Sbox}% \setlength{\fboxsep}{6pt}\shadowbox{\TheSbox}}% % Create an inline command for files. \newcommand{\file}[1]{\texttt{#1}} % Create a command for command parameters. \newcommand{\parameter}[1]{$<$#1$>$} \newcommand{\instr}[1]{\texttt{#1}} \definecolor{lightGray}{RGB}{240,240,240} \lstnewenvironment{instrenv}{\lstset{backgroundcolor=\color{lightGray},frame=single,basicstyle=\ttfamily}}{} \newcommand{\orion}{\textsc{Or10n}\xspace} \newcommand{\riscv}{\mbox{RISC-V}\xspace} \newcommand{\rvcore}{\textsc{RI5CY}\xspace} \newcommand{\zerocore}{\textsc{zero-riscy}\xspace} \newcommand{\pulpino}{\textsc{PULPino}\xspace} \newcommand{\pulp}{\textsc{PULP}\xspace} \newcommand\signal[1]{{\ttfamily\bfseries #1}} \newcommand\sprDesc[4]{% \textbf{SPR Address:} \texttt{#1}\\% \textbf{Reset Value:} \texttt{#2}\\% \begin{figure}[H] \centering #4 \caption{#3} \end{figure}} \newcommand{\memsection}[4]{ \bytefieldsetup{bitheight=#3\baselineskip} % define the height of the memsection \bitbox[]{10}{ \texttt{#1} % print end address \\ \vspace{#3\baselineskip} \vspace{-2\baselineskip} \vspace{-#3pt} % do some spacing \texttt{#2} % print start address } \bitbox{16}{#4} % print box with caption } \newcommand\regDesc[5]{% \subsection{#3} \textbf{Address:} \texttt{#1}\\% \textbf{Reset Value:} \texttt{#2}\\% \begin{figure}[H] \centering #4 \end{figure} \begin{enumerate}[leftmargin=15mm] #5 \end{enumerate}} \newcommand\regItem[3]{% \item[\texttt{#1}] \textbf{#2}: #3 }
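% For illustration only: the usage sketch below is not part of the original preamble,
% and the register name, address, and field layout are made up. It shows roughly how
% the \regDesc and \regItem macros defined above are meant to be used in the document
% body (it is kept commented out so the preamble stays free of typeset material).
%
% \regDesc{0x0000}{0x0000\_0000}{CTRL -- control register (hypothetical)}{%
%   \begin{bytefield}[endianness=big]{32}
%     \bitheader{0,1,2,31} \\
%     \bitbox{30}{reserved} \bitbox{1}{IE} \bitbox{1}{EN}
%   \end{bytefield}}{%
%   \regItem{Bit 0}{EN}{Enables the core when set to 1.}
%   \regItem{Bit 1}{IE}{Enables interrupts when set to 1.}}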
{ "alphanum_fraction": 0.5224456073, "avg_line_length": 27.0970149254, "ext": "tex", "hexsha": "69c69d794f4e5d6e73bb197ad1bd8feb852e7966", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-04-24T22:28:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-18T15:34:58.000Z", "max_forks_repo_head_hexsha": "e0111a8b63bb7ceccbb5ea0107a89c9ed80d3bad", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "scale-lab/PVTsensors", "max_forks_repo_path": "Microcontroller/FPGA/doc/datasheet/preamble/preamble.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e0111a8b63bb7ceccbb5ea0107a89c9ed80d3bad", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "scale-lab/PVTsensors", "max_issues_repo_path": "Microcontroller/FPGA/doc/datasheet/preamble/preamble.tex", "max_line_length": 109, "max_stars_count": 14, "max_stars_repo_head_hexsha": "e0111a8b63bb7ceccbb5ea0107a89c9ed80d3bad", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "scale-lab/PVTsensors", "max_stars_repo_path": "Microcontroller/FPGA/doc/datasheet/preamble/preamble.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-19T11:31:26.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-18T02:16:54.000Z", "num_tokens": 962, "size": 3631 }
\documentclass[master=ecws,masteroption=ai]{kulemt} \setup{title={Learning a Disease Embedding using Generalized Word2Vec Approaches. }, author={Milan van der Meer}, promotor={Prof.\,dr.\ R. Wuyts\and A. Vapirev}, assessor={Prof. dr. ir. H. Blockeel\and R. van Lon }, assistant={Dr. E. D'Hondt}} % The following \setup may be removed entirely if no filing card is wanted \setup{filingcard, translatedtitle=Een Ziekte Inbedding leren met het gebruik van Veralgemeende Word2Vec Methoden., udc=681.3, shortabstract={Due to the increased usage of EHRs, a new research area has emerged, namely the area of Electronic Health Record Analytics. Several research groups are working on utilizing EHRs to find medical patterns with methods like querying, statistics, data mining, and artificial intelligence. \\ In the field of machine learning, a limited amount of research has been done on EHRs, mainly using out-of-the-box tools. \\ The focus of this thesis is applying advanced machine learning algorithms to find patterns in EHRs. \\ An EHR of a patient can be seen as a time series, namely a sequence of EHR events. We make the analogy between sentences of words and sequences of EHR events. Based on this analogy, we propose novel techniques which are generalizations of the Word2Vec approach, a technique typically used in linguistic analysis. \\ We call this approach a generalized Word2Vec approach and it can be applied to medical data to find the correlations between different diagnoses. \\ To make sure the generalized Word2Vec methods can be applied to large-scale medical data, we also applied the generalization concept to DeepWalk. \\ A shortcoming of Word2Vec is that it is unable to handle unseen instances once it has built its lookup table. We tackle this problem by combining a k-nearest neighbors method with Word2Vec and estimate the correlation to other diagnoses for the unseen instance. \\ To enable this, we use several preprocessing methods. One of these methods is a newly proposed disease code mapping between two standards, namely MedDRA and ICD-10. This mapping is used to categorize diagnoses and to make our validation process possible. With this categorization of the data we enable more general EHR events during training. \\ After building the model, we execute two different experiments by generating clusters based on our model. The reason behind the clusters is that we can compare those clusters with the results of the currently largest study on Danish EHRs. We quantify this comparison using a matching percentage. \\ We check the influence of a parameter on the matching percentage by inspecting $504$ parameter settings. From all these parameter settings, we deduce general trends. \\ After checking the individual parameters, we take the parameter setting with the highest average matching percentage for each approach and compare the different approaches. \\ The results from both experiments are that each approach has a maximum matching percentage of $60$\%. It is still difficult to quantify how good a match of around $60$\% is, but we conclude this is good enough to show the potential of our approaches. \\ We conclude that both our knn Word2Vec and DeepWalk have the same performance as the basic generalized Word2Vec. We can now handle unseen EHR events and train our model on $50$\% of the dataset size using DeepWalk without losing accuracy. \\ Within the limitations of our validation method, we conclude that our models do match the Danish results well depending on the experiment, 
especially since we use several approximations such as the disease code mapping, different datasets, and categorization.}} % Uncomment the next line for generating the cover page %\setup{coverpageonly} % Uncomment the next \setup to generate only the first pages (e.g., if you % are a Word user. %\setup{frontpagesonly} % Choose the main text font (e.g., Latin Modern) \setup{font=lm} % If you want to include other LaTeX packages, do it here. \usepackage{graphicx} \usepackage{float} \usepackage{mathtools} \usepackage{enumitem} \usepackage[normalem]{ulem} \useunder{\uline}{\ul}{} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{amsmath} \usepackage{caption} \usepackage{subcaption} \usepackage[justification=centering]{caption} \usepackage[final]{pdfpages} % Finally the hyperref package is used for pdf files. % This can be commented out for printed versions. \usepackage[pdfusetitle,colorlinks,plainpages=false]{hyperref} \hypersetup{% colorlinks = true, linkcolor = black } %%%%%%% % The lipsum package is used to generate random text. % You never need this in a real master thesis text! \IfFileExists{lipsum.sty}% {\usepackage{lipsum}\setlipsumdefault{11-13}}% {\newcommand{\lipsum}[1][11-13]{\par And some text: lipsum ##1.\par}} %%%%%%% %\includeonly{chap-n} \begin{document} \begin{preface} I start by expressing my gratitude towards Roel for giving me the opportunity to work on a truly interesting subject. I also appreciate that you were not just an invisible promoter, but a promoter who showed a true interest in the subject and curiosity for the results of the research. \\ I also know that I am a lucky student who fell with his ``butt in the butter'' as Ellie took the time and effort to proofread my thesis a couple of times. Together with Roel, you really provided me with a lot of support and made it possible to finish my thesis in a way I'm proud of. Thank you. \\ On a more personal note, I want to thank the people who stood by my side. My parents who gave me the opportunities in life and also prepared me for those opportunities. My friends, the CW kneusjes, my kotgenoten, and of course Siemen, the guy who somehow still manages to stay around. I also want to thank my future wife, just to cover all bases. \\ \noindent Milan out. \end{preface} \tableofcontents* \begin{abstract} In today's medical world, more and more data is stored using Electronic Health Records. Each time a patient goes to a hospital, visits a doctor, or receives lab results, those events are stored in the patient's Electronic Health Record (EHR). The medical world and governments are interested in EHRs as they can, for example, provide new insights into disease trajectories, drug treatments, medical costs, or the link between demographics and certain diseases. \\ Due to the increased use of EHRs, a new research area has emerged, namely the area of Electronic Health Record Analytics. EHR analytics is an active research field as many different problems still need to be solved. At the moment various research groups are working on utilizing EHRs to find medical patterns with several methods like querying, statistics, data mining, and artificial intelligence approaches. \\ In the field of machine learning algorithms, a limited amount of research has been done on EHRs, mainly using out-of-the-box tools. \\ The focus of this thesis is applying advanced machine learning algorithms to find patterns in EHRs. \\ An EHR of a patient can be seen as a time series, namely a sequence of EHR events such as visits to the doctor. 
In this thesis we make the analogy between sentences of words and sequences of EHR events. Based on this analogy, we propose novel techniques which are generalizations of the Word2Vec approach, a technique typically used in linguistic analysis. \\ We call this approach a generalized Word2Vec approach and it can be applied to medical data to find the correlations between different diagnoses. \\ To make sure the generalized Word2Vec methods can be applied to large-scale medical data, we also applied the generalization concept to DeepWalk. DeepWalk makes it possible to generate a smaller dataset from the original dataset and then apply a Word2Vec approach to this smaller dataset. \\ Besides exploring generalizations of the Word2Vec approach, we also improve Word2Vec by tackling one of its shortcomings. This shortcoming of Word2Vec is that it is unable to handle unseen instances once the model has been built. We combine a k-nearest neighbors method with Word2Vec and estimate the correlation to other diagnoses for the unseen instance. \\ To enable this, we use several preprocessing methods. One of these methods is a newly proposed disease code mapping between two standards, namely MedDRA and ICD-10. This mapping is used to categorize diagnoses and to make our validation process possible. With this categorization of the data we enable more general EHR events during training. \\ We explain how we build a model of our proposed methods. We introduce the OSIM2 dataset on which we train a model using our generalized Word2Vec, knn Word2Vec, and generalized DeepWalk. The OSIM2 dataset is a simulated dataset generated by an organization named OMOP. To make it possible to apply our methods to the OSIM2 dataset, we use the above-mentioned disease code mapping. \\ After building the model, we execute two different experiments by generating clusters based on our model. The reason behind the clusters is that we can compare those clusters to the results of the currently largest study on Danish EHRs. We quantify this comparison using a matching percentage. \\ We check the influence of a parameter on the matching percentage by inspecting $504$ parameter settings. From all these parameter settings, we deduce general trends. \\ After checking the individual parameters, we take the parameter setting with the highest average matching percentage for each approach and compare the different approaches. \\ The results from both experiments are that each approach has a maximum matching percentage of $60$\%. It is still difficult to quantify how good a match of around $60$\% is, but we conclude this is good enough to show the potential of our approaches. \\ We conclude that both our knn Word2Vec and DeepWalk have the same performance as the basic generalized Word2Vec. This means that we can handle unseen EHR events and train our model on $50$\% of the dataset size using DeepWalk without losing accuracy. \\ Within the limitations of our validation method, we conclude that our models do match the Danish results well depending on the experiment, especially since we use several approximations such as the disease code mapping, different datasets, and categorization. \end{abstract} % A list of figures and tables is optional %\listoffigures %\listoftables % If you only have a few figures and tables you can use the following instead \listoffiguresandtables % The list of symbols is also optional. 
% This list must be created manually, e.g., as follows: \chapter{List of Abbreviations and Symbols} \section*{Abbreviations} \begin{flushleft} \renewcommand{\arraystretch}{1.1} \begin{tabularx}{\textwidth}{@{}p{12mm}X@{}} EHR & Electronic Health Record \\ ICD & International Classification of Diseases \\ WHO & World Health Organization \\ MedDRA & Medical Dictionary for Regulatory Activities \\ THIN & The Health Improvement Network \\ ML & Machine Learning \\ CBOW & Continuous Bag-of-Words \\ knn & k-Nearest Neighbors \\ kd & k-dimensional \\ MLP & Multilayer Perceptrons \\ RNN & Recurrent Neural Network \\ LSTM & Long-Short Term Memory \\ DL4J & DeepLearning for Java \\ MSLR & Thomson Reuters MarketScan Lab Database \\ OMOP & Observational Medical Outcomes Partnership \\ OSIM2 & Observational Medical Dataset Simulator Generation 2 \\ \end{tabularx} \end{flushleft} % Now comes the main text \mainmatter \include{introduction} \include{Context/Context} \include{Background/Background} \include{Implementation/Implementation} \include{conclusion} \include{FutureWork/FutureWork} % If you have appendices: \appendixpage* % if wanted \appendix \phantomsection\addcontentsline{toc}{chapter}{A\hspace{3 mm}Paper} \includepdf[pages=-]{report.pdf} \phantomsection\addcontentsline{toc}{chapter}{B\hspace{3 mm}Poster} \includepdf[pages=-]{poster.pdf} %test \cite{WHO_ICD} \backmatter % The bibliography comes after the appendices. % You can replace the standard "abbrv" bibliography style by another one. \nocite{*} \bibliographystyle{abbrv} \bibliography{references} \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
{ "alphanum_fraction": 0.7839551329, "avg_line_length": 69.1179775281, "ext": "tex", "hexsha": "9c59bcf732e0ffdc59c4bdfb100275e47378d076", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-07-26T13:26:14.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-26T13:26:14.000Z", "max_forks_repo_head_hexsha": "305ab63ce3f627b3ffadae8864b927b9d0b9ca29", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Milanvdm/MedicalLSTM", "max_forks_repo_path": "Reports/Thesis/thesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "305ab63ce3f627b3ffadae8864b927b9d0b9ca29", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Milanvdm/MedicalLSTM", "max_issues_repo_path": "Reports/Thesis/thesis.tex", "max_line_length": 451, "max_stars_count": 6, "max_stars_repo_head_hexsha": "305ab63ce3f627b3ffadae8864b927b9d0b9ca29", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Milanvdm/MedicalLSTM", "max_stars_repo_path": "Reports/Thesis/thesis.tex", "max_stars_repo_stars_event_max_datetime": "2017-04-17T15:25:01.000Z", "max_stars_repo_stars_event_min_datetime": "2015-12-27T06:32:31.000Z", "num_tokens": 2776, "size": 12303 }
\subsubsection{Training} \label{sec:methods_pipeline_training} The training stage involves tweaking both the primary model and the secondary model. The primary model has the following hyperparameters: \begin{itemize} \item Fast window period: the number of periods to average the price signal for the fast moving average signal. \item Slow window period: the number of periods to average the price signal for the slow moving average signal. \item Stop loss ratio: the ratio, relative to the price at which a label occurs, that determines the lower price barrier of the triple-barrier frontier. \item Profit taking ratio: the ratio, relative to the price at which a label occurs, that determines the upper price barrier of the triple-barrier frontier. \item Volatility time window: when applying the triple-barrier method, a volatility series is computed from daily returns. The volatility series is the result of an exponentially weighted moving average that takes windows of this length and computes their standard deviation. This series is used together with the minimum return. \item Minimum return: the primary model generates labels which involve very low returns and can be considered noisy samples. This threshold works as a filter to discard these labels before they become part of the primary model output. \end{itemize} On the other hand, we have the secondary model, which is a classifier. This classifier will be used to determine the size of the bets. In section \ref{sec:methods_pipeline_secondary_model} it was mentioned that it will be a Bagging classifier, whose parameters are: \begin{itemize} \item Number of estimators: the number of trees to train ($B$); the majority vote among them is taken for each new prediction. \item Number of samples per estimator: the fraction of samples in the dataset to sample with or without bootstrap. This value is set to the average uniqueness of all labels. \item Use bootstrap: whether or not to use bootstrap sampling. This value is set to true. On top of that, each label's particular uniqueness is used when fitting. \item Maximum number of features to use: out of all predictors, how many features each estimator uses. No bootstrap is used at the predictor level, just sampling. \end{itemize} Unless explicitly mentioned, all hyperparameters are optimized. For the primary model, the following criteria are used: \begin{itemize} \item For certain combinations of hyperparameters, there might not be enough labels and metalabels to run the secondary model training process. Also, the primary model might output a too imbalanced set of labels and metalabels, which causes trouble when training because one metalabel class is missing. Models generated from such hyperparameters are discarded. \item For certain combinations of hyperparameters, there might be too few samples. These combinations are discarded. \item Using a quite simple secondary model of the same nature but without optimized hyperparameters, we choose the combination of primary model hyperparameters that yields the best secondary performance. \end{itemize} Performance of the secondary model is evaluated with the negative logarithmic loss. Typically, classifiers are evaluated with the F1-score because it provides a good balance between precision and recall (it is the harmonic mean of the two). In this case, we are mostly interested in selecting models based on the predicted probabilities, because those are used to size bets. 
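To make the secondary-model configuration above concrete, the following sketch (our own illustration, assuming scikit-learn; the function and argument names are made up, and the average label uniqueness is assumed to be computed elsewhere) shows how such a bagging classifier could be instantiated:
\begin{verbatim}
from sklearn.ensemble import BaggingClassifier

def build_secondary_model(n_estimators, avg_uniqueness, max_features):
    # Bagging of decision trees (the default base estimator), with
    # bootstrap sampling and max_samples set to the average uniqueness
    # of the labels, as described above.
    return BaggingClassifier(
        n_estimators=n_estimators,
        max_samples=avg_uniqueness,   # fraction of samples per estimator
        max_features=max_features,    # predictors sampled without bootstrap
        bootstrap=True,
        bootstrap_features=False,
    )
\end{verbatim}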
Secondly, once the primary model hyperparameters have been selected, a full model optimization for the secondary model is done. Again, the negative logarithmic loss is used to determine the best model. Following Lopez de Prado's recommendations in chapter 7 of \cite{lopez_de_prado}, purged $K$-fold cross validation with embargo is used to train the secondary model. \paragraph{Purge:} when the classification output at time $t$, i.e.\ the metalabel, depends on the value of the predictor(s) at two or more sample times, we have an inter-temporal dependency of the output on several of the predictors that might lead to leakage if they are not properly purged. The purge strategy consists of removing the predictor and classification outputs that are concurrent in adjacent training and test folds. There are three situations that would make two labels concurrent: \begin{itemize} \item Label $i$ starts inside the triple barrier period of label $j$. \item Evaluation of metalabel $i$ ends inside the triple barrier period of label $j$. \item Evaluation of label $j$ starts and ends inside the triple barrier period of label $i$. \end{itemize} In addition to this concurrency effect, there is another effect that produces information leakage between train-test splits. Suppose a label assigned to a test fold is generated from data that is also spread over multiple labels in both the training and test folds; even though there is no label concurrency involved, there will be leakage. In this case purging is also required to mitigate leakage. \paragraph{Embargo:} when the nature of the classification output generation process makes it unclear which time span should be banned from the adjacent training and test folds to avoid leakage, the embargo technique can be used. It consists of removing a percentage of the samples in the training fold that come right after the test fold. Note that there is no need to remove samples from the end of the test fold when it is adjacent to another training fold, because those samples will simply not contribute to training the model for the first set of labels. The percentage is usually low, e.g. 1\%, and provides enough data pruning to run the training stage without noticeable leakage.
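As an illustration of the purging and embargo logic described above -- this is our own sketch, not the project's implementation -- the following Python function builds train/test index pairs, assuming each label is described by its start time (the series index) and the time at which its triple-barrier evaluation ends:
\begin{verbatim}
import numpy as np
import pandas as pd

def purged_kfold_indices(label_end_times, n_splits=5, embargo_pct=0.01):
    """Yield (train_idx, test_idx) pairs.

    label_end_times: pd.Series indexed by label start time, whose values
    are the times at which the triple-barrier evaluation of each label ends.
    """
    times = label_end_times.sort_index()
    indices = np.arange(times.shape[0])
    embargo = int(times.shape[0] * embargo_pct)
    for test_idx in np.array_split(indices, n_splits):
        test_start = times.index[test_idx[0]]
        test_end = times.iloc[test_idx].max()
        train_mask = np.ones(times.shape[0], dtype=bool)
        train_mask[test_idx] = False
        # Purge: drop training labels whose evaluation overlaps the test fold.
        overlaps = (times.index <= test_end) & (times.values >= test_start)
        train_mask &= ~overlaps
        # Embargo: drop training labels starting right after the test fold.
        last = test_idx[-1]
        train_mask[last + 1 : last + 1 + embargo] = False
        yield indices[train_mask], test_idx
\end{verbatim}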
{ "alphanum_fraction": 0.7730329523, "avg_line_length": 55.0740740741, "ext": "tex", "hexsha": "862d16d8998415f79e39884d2b9e0a1ba31be8d6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "agalbachicar/swing_for_the_fences", "max_forks_repo_path": "doc/methods/pipeline/training.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "agalbachicar/swing_for_the_fences", "max_issues_repo_path": "doc/methods/pipeline/training.tex", "max_line_length": 112, "max_stars_count": null, "max_stars_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "agalbachicar/swing_for_the_fences", "max_stars_repo_path": "doc/methods/pipeline/training.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1239, "size": 5948 }
\chapter{Simultaneous games}
{ "alphanum_fraction": 0.7741935484, "avg_line_length": 7.75, "ext": "tex", "hexsha": "5a80857d658217200e2f130535908bc4380510ac", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/ai/gameTheory/00-00-Chapter_name.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/ai/gameTheory/00-00-Chapter_name.tex", "max_line_length": 28, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/ai/gameTheory/00-00-Chapter_name.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9, "size": 31 }
\section{Assessment}
The course is assessed with two exams. There is a series of \glspl{assignment} which have to be handed in, but which are not graded. There will be an \gls{oral}, which is based on the \glspl{assignment}, and a written exam.

The final grade is determined as follows: \\
\texttt{if \gls{exam}-grade $ >= 75\% $ then return \gls{oral}-grade else return 0}

\paragraph*{Motivation for grade}
A professional software developer is required to be able to write code which is, at the very least, \textit{correct}. In order to produce correct code, we expect students to show:
\begin{inparaenum}[\itshape i\upshape)]
\item a foundation of knowledge about how a programming language actually works, in connection with a simplified concrete model of a computer;
\item fluency when actually writing the code.
\end{inparaenum}

The quality of a programmer is ultimately determined by their actual code-writing skills; therefore the written exam will require you to write code. This ensures that each student is able to show that their work is their own and that they have an adequate understanding of its mechanisms.

\subsection{Theoretical examination \modulecode}
The general shape of a \gls{exam} for \texttt{\modulecode} is a short series of highly structured open questions. In each exam the content of the questions will change, but the structure of the questions will remain the same. For the structure (and an example) of the theoretical exam, see the appendix.

\subsection{Practical examination \modulecode}
There is a mandatory assignment for each pattern covered in the lessons. The assignments ask you to implement a GUI system in the fashion of Windows Forms with an immediate-mode drawing library, such as Monogame. In the assignments you have to show that you can effectively use the design patterns learnt during the course. Your GUI library should allow you to create at least clickable buttons and text labels organized in a single panel. All assignments are to be presented to the teacher at the end of the practical assessment, upon request.

At the practicum check you will be asked to solve exercises on design patterns based on the assignment. The maximum score for this part is 10 points. The teachers still reserve the right to check the practicums handed in by each student, and to use them for further evaluation. The university rules on fraud and plagiarism (Hogeschoolgids art. 11.10 -- 11.12) also apply to code.
\section{Modelling precision} In this section, we try to build a basic model of precision errors that we can use to obtain a rough but reliable estimate of a program's precision. \subsection{The issue with ``absolute or relative'' error}\label{ss:abs-rel} When the output of a problem is some real value like a distance or an area, the problem statement often specifies a constraint such as: ``The answer should be accurate to an absolute or relative error of at most $10^{-5}$.'' While considering the relative accuracy of an answer can be a useful and convenient way to specify the required precision of an answer in some cases (for example in tasks where only addition and multiplication of positive values are performed), we think that for most geometry problems it is unsuitable. The reason for this is the need to subtract\footnote{Strictly speaking, we mean both subtraction of values of the same sign, and addition of values of opposite signs.} large values of similar magnitude. For example, suppose that we are able to compute two values with relative precision $10^{-6}$, such as $A=1000 \pm 10^{-3}$ and $B = 999 \pm 10^{-3}$. If we compute their difference, we obtain $A-B = 1 \pm 2 \times 10^{-3}$. The absolute error remains of a comparable size, being only multiplied by 2, but on the other hand relative error increases drastically from $10^{-6}$ to $2\times 10^{-3}$ because of the decrease in magnitude. This phenomenon is called \emph{catastrophic cancellation}. In fact, whenever a certain relative error can affect big numbers, catastrophic cancellation can cause the corresponding absolute error to appear on very small values. The consequence is that if a problem statement has a certain tolerance on the relative error of the answer, and a correct solution has an error close to it for the biggest possible values, then the problem statement also needs to specify a tolerance on the corresponding absolute error in case catastrophic cancellation happens. And since that tolerance on absolute error is at least as tolerant as the tolerance on relative error for all possible values, it makes it redundant. This is why we think that tolerance on ``absolute or relative error'' is misleading at best.\footnote{In fact, working with relative error tolerances would make sense if this ``relative error'' was defined based on the magnitude of the input coordinates rather than on the magnitude of the answer, as we will see starting from section~\ref{ss:biggest-mag}. For example, if all input coordinates are bounded by $M$, it would make sense to require an absolute precision of $M^2\times 10^{-6}$ on an area. But since the answer can remain very small even if the magnitude of the input grows, requiring a fixed relative precision on it is usually too constraining for test cases with inputs of large magnitude.} Catastrophic cancellation shows that relative precision is not a reliable way to think about precision whenever subtractions are involved --- and that includes the wide majority of geometry problems. In fact, the most common geometric operations (distances, intersections, even dot/cross products) all involve subtractions of values which could be very similar in magnitude. Examples of this appear in two of the case studies of section~\ref{s:case-studies}: in problem ``Keeping the Dogs Apart'' and when finding the solution of a quadratic equation. Another example occurs when computing areas of polygons made of imprecise points. 
Even when the area ends up being small, the imprecision on it can be large if there were computations on large values in intermediate steps, which is the case when the coordinates have large magnitudes. \centerFig{modelling0} Because of this, we advise against trying to use relative error to build precision guarantees on the global scale of a whole algorithm, and we recommend to reason about those based on absolute error instead, as we describe below. \subsection{Precision guarantees from IEEE 754} %\emph{Note: This section is not meant to be a full description of how floating-point numbers work, but only a reminder of some useful guarantees. If you are completely unfamiliar with how floating-point numbers work or want to know more details, a good reference is \cite{goldberg-floating}.} Nearly all implementations of floating-point numbers obey the specifications of the IEEE 754 standard. This includes \lstinline|float| and \lstinline|double| in Java and C++, and \lstinline|long double| in C++. The IEEE 754 standard gives strong guarantees that ensure floating-point numbers will have similar behavior even in different languages and over different platforms, and gives users a basis to build guarantees on the precision of their computations. The basic guarantees given by the standard are: \begin{enumerate} \item decimal values entered in the source code or a file input are represented by the closest representable value; \item the five basic operations ($+,-,\times,/,\sqrt{x}$) are performed as if they were performed with infinite precision and then rounded to the closest representable value. \end{enumerate} There are several implications. First, this means that integers are represented exactly, and basic operations on them ($+,-,\times$) will have exact results, as long as they are small enough to fit within the significant digits of the type: $\geq 9\times 10^{15}$ for \lstinline|double|, and $\geq 1.8 \times 10^{19}$ for \lstinline|long double|. In particular, \lstinline|long double| can perform exactly all the operations that a 64-bit integer type can perform. Secondly, if the inputs are exact, the relative error on the result of any of those five operations ($+,-,\times,/,\sqrt{x}$) will be bounded by a small constant that depends on the number of significant digits in the type.\footnote{This assumes the magnitudes do not go outside the allowable range ($\approx 10^{\pm 308}$ for \lstinline|double| and $\approx 10^{\pm 4932}$ for \lstinline|long double|) which almost never happens for geometry problems.} This constant is $< 1.2 \times 10^{-16}$ for \lstinline|double| and $< 5.5 \times 10^{-20}$ for \lstinline|long double|. It is called the \emph{machine epsilon} and we will often write it $\epsilon$. \subsection{Considering the biggest possible magnitude}\label{ss:biggest-mag} We explained earlier why we need to work with absolute error, but since IEEE 754 gives us guarantees in terms of relative errors, we need to consider the biggest magnitude that will be reached during the computations. In other words, if all computations are precise up to a relative error of $\epsilon$, and the magnitude of the values never goes over $M$, then the absolute error of an operation is at most $M\epsilon$. 
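As a quick, concrete check of these orders of magnitude, the short standalone program below (not tied to any particular problem; the value $10^9$ is chosen arbitrarily) prints the machine epsilon of \lstinline|double| and shows that a single rounded operation on values of magnitude $M \approx 10^9$ already carries an absolute error of the order of $M\epsilon \approx 10^{-7}$.
\begin{lstlisting}[language=C++]
#include <cstdio>
#include <limits>

int main() {
    // Gap between 1.0 and the next representable double (about 2.2e-16).
    // The relative error bound quoted in the text is half of this (about 1.1e-16).
    std::printf("epsilon of double: %g\n", std::numeric_limits<double>::epsilon());

    // With magnitudes around M = 1e9, adding 0.1 is already off by a few 1e-8,
    // which is of the order of M*epsilon.
    const double M = 1e9;
    const double x = M + 0.1;
    std::printf("(1e9 + 0.1) - 1e9 = %.17g\n", x - M);
    return 0;
}
\end{lstlisting}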
This allows us to give good guarantees for numbers obtained after a certain number of $+$ and $-$ operations: a value that is computed in $n$ operations\footnote{Note that when we say a value is ``computed in $n$ operations'' we mean that it is computed by a single formula that contains $n$ operations, and not that $n$ operations are necessary to actually compute it. For example $(a+b)+(a+b)$ is considered to be ``computed in 3 operations'' even though we can implement this with only 2 additions.} will have an absolute error of at most $nM\epsilon$ compared to the theoretical result.
%\footnote{By this we mean, a value that is computed by a single formula involving only $+$ and $-$ operations, and at most $n$ of them.}
%To clarify what we mean here by ``computed in $n$ operations'', we define it this way:
%\begin{itemize}
%\item exact inputs are ``computed in $0$ operations'';
%\item if $a$ is ``computed in $n_a$ operations'' and $b$ is ``computed in $n_b$ operations'' then the sum $a+b$ and subtraction $a-b$, is ``computed in $n_a+n_b+1$ operations''.
%\end{itemize}

We can prove the guarantee by induction: let's imagine we have two intermediate results $a$ and $b$ which were computed in $n_a$ and $n_b$ operations respectively. By the inductive hypothesis their imprecise computed values $a'$ and $b'$ respect the following conditions.
\[|a'-a| \leq n_aM\epsilon \qquad |b'-b| \leq n_bM\epsilon\]
The result of the floating-point addition of $a'$ and $b'$ is $\round(a'+b')$ where $\round()$ is the function that rounds a real value to the closest representable floating-point value. We know that $|\round(x)-x| \leq M\epsilon$, so we can find a bound on the error of the addition:
\begin{align*}
|\round(a'+b') &- (a+b)|\\
&= \left|\left[\round(a'+b') - (a'+b')\right] + \left[(a'+b') - (a+b)\right]\right|\\
&\leq |\round(a'+b') - (a'+b')| + |(a'+b') - (a+b)|\\
&\leq M\epsilon + |(a'-a) + (b'-b)| \\
&\leq M\epsilon + |a'-a| + |b'-b| \\
&\leq M\epsilon + n_aM\epsilon + n_bM\epsilon \\
&= (n_a + n_b + 1) M \epsilon
\end{align*}
where the first two steps follow from the triangle inequality. Since the sum is ``computed in $n_a+n_b+1$ operations'', the bound of $(n_a + n_b + 1) M \epsilon$ that is obtained is small enough. The proof for subtraction is very similar.

\subsection{Incorporating multiplication}
The model above gives good guarantees but is very limited: it only works for computations that use only addition and subtraction. Multiplication does not give guarantees of the form $nM\epsilon$. However, we can still say interesting things if we take a closer look at the different types of values we use in geometry:
\begin{itemize}
\item Adimensional ``0D'' values: e.g. angles, constant factors;% Their magnitude is typically limited by some small constant (e.g. $2\pi$ for angles).
\item 1D values: e.g. coordinates, lengths, distances, radii;% Their magnitude is limited by a constant $M$, which can be deduced from the input limits.
\item 2D values: e.g. areas, dot products, cross products;% Since they are based on products of 1D numbers, their magnitude is typically within a small constant factor of $M^2$.
\item 3D values: e.g. volumes, mixed products.% For the same reasons, their magnitude is typically within a small constant factor of $M^3$.
\end{itemize}
Usually, the problem statement gives guarantees on the magnitude of coordinates, so we can find some constant $M$ so that all 1D values that will be computed in the code have a magnitude less than $M$.
And since 2D and 3D values are usually created by products of 1D values, we can usually say that 2D values are bounded in magnitude by $M^2$ and 3D values by $M^3$ (we may need to multiply $M$ by a constant factor). It turns out that computations made of $+,-,\times$ and in which all $d$-dimensional values are bounded in magnitude by $M^d$ have good precision guarantees. In fact, we can prove that the absolute error of a $d$-dimensional number computed in $n$ operations is at most $M^d\left((1+\epsilon)^n-1\right)$, which assuming $n\epsilon \ll 1$ is about $nM^d\epsilon$. The proof is similar in spirit to what we did with only $+$ and $-$ earlier. Since it is a bit long, we will not detail it here, but it can be found in section~\ref{sec:proof-precision}, along with a more precise definition of the precision guarantees and their underlying assumptions.

Note that this does \emph{not} cover multiplication by an adimensional factor bigger than 1: this makes sense, since for example successive multiplication by 2 of a small value could make the absolute error grow out of control even if the magnitude remains under $M^d$ for a while. In other cases, this formula $nM^d\epsilon$ gives us a quick and reliable way to estimate precision errors.

\subsection{Why other operations do not work as well}\label{ss:other-operations}
Now that we have precision guarantees for $+,-,\times$ operations, one might be tempted to try and include division as well. However, if that was possible, then it would be possible to give strong precision guarantees for line intersection, and we saw in subsection~\ref{ss:numerically-unstable} that this is not the case. The core of the problem is: if some value $x$ is very close to zero, then a small absolute error on $x$ will create a large absolute error on $1/x$. In fact, if $x$ is smaller than its absolute error, the computed value $1/x$ might be arbitrarily big, in either the positive or negative direction, and might not exist. This is why it is hard to give guarantees on the results of a division whose operands are already imprecise.

An operation that also has some problematic behavior is $\sqrt{x}$. If $x$ is smaller than its absolute error, then $\sqrt{x}$ might or might not be defined in the reals. However, if we ignore the issue of existence by assuming that the theoretical and actual value of $x$ are both nonnegative, then we \emph{can} say some things on the precision. Because $\sqrt{x}$ is a concave increasing function, a small imprecision on $x$ will have the most impact on $\sqrt{x}$ near 0.
\centerFig{modelling1}
Therefore for a given imprecision $\delta$, the biggest imprecision on $\sqrt{x}$ it might cause is $\sqrt{\delta}$. This is usually pretty bad: if the argument of the square root had an imprecision of $nM^2\epsilon$ then in the worst case the result will have an imprecision of $\sqrt{n}M\sqrt{\epsilon}$, instead of the $nM\epsilon$ bound that we have for $+,-,\times$ operations.

For example let us consider a circle $\mathcal{C}$ of radius $1$ tangent to a line $l$. If $\mathcal{C}$ gets closer to $l$ by $10^{-6}$, then the intersection points will move by about
\[\sqrt{1^2 - (1-10^{-6})^2} \approx \sqrt{2\times 10^{-6}} = \sqrt{2} \times 10^{-3}\]
away from the tangency point, as pictured below.
\centerFig{modelling2}

Note that here we have only shown that $1/x$ and $\sqrt{x}$ perform poorly on imprecise inputs. Please bear in mind that on exact inputs, IEEE 754 guarantees that the result is the closest representable floating-point number.
So when the lines and circles are defined by integers, line intersections and circle-line intersections have a relative precision error proportional to $\epsilon$ and thus an absolute error proportional to $M\epsilon$.
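To close this section, the two failure modes discussed above are easy to reproduce with a short standalone program. The values below simply mirror the examples in the text; the names and exact numbers are otherwise arbitrary.
\begin{lstlisting}[language=C++]
#include <cmath>
#include <cstdio>

int main() {
    // Catastrophic cancellation: A and B are known to about 1e-3 absolute
    // error (1e-6 relative error), but their difference has a relative error
    // of about 1e-3 even though its absolute error stays around 1e-3.
    const double A = 1000.0 + 1e-3; // imprecise measurement of the value 1000
    const double B = 999.0;         // measurement of the value 999
    std::printf("A - B = %.10g (exact difference: 1)\n", A - B);

    // Square root near a tangency: moving a unit circle 1e-6 closer to a
    // tangent line moves the intersection points by about sqrt(2)*1e-3.
    const double delta = 1e-6;
    const double offset = std::sqrt(1.0 - (1.0 - delta) * (1.0 - delta));
    std::printf("intersection offset: %.6g\n", offset);
    return 0;
}
\end{lstlisting}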
\documentclass[11pt]{article}
\usepackage{geometry}
\geometry{a4paper,left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm}
\usepackage{natbib}
\usepackage{color}
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
\usepackage{epsfig}
\usepackage{amssymb,amsmath}
\usepackage{enumerate}
\usepackage{enumitem}
\usepackage[utf8]{inputenc}
\usepackage{hyperref}
\usepackage{mathtools}

\newcommand{\ssd}{\text{ssd}}
\newcommand{\sS}{\mathsf{S}}
\newcommand{\tot}{\text{tot}}

\begin{document}

\input{symbols}

\section*{Layered quasigeostrophic model}

The $\nmax$-layer quasigeostrophic (QG) potential vorticity is
\begin{align}
{q_1} &= \lap\psi_1 + \frac{f_0^2}{H_1} \left(\frac{\psi_{2}-\psi_1}{g'_{1}}\right)\,, \qquad & n =1\com \nonumber \\
{q_n} &= \lap\psi_n + \frac{f_0^2}{H_n} \left(\frac{\psi_{n-1}-\psi_n}{g'_{n-1}} - \frac{\psi_{n}-\psi_{n+1}}{g'_{n}}\right)\,, \qquad &n = 2,\nmax-1 \com \nonumber \\
{q_\nmax} &= \lap\psi_\nmax + \frac{f_0^2}{H_\nmax} \left(\frac{\psi_{\nmax-1}-\psi_\nmax}{g'_{\nmax-1}}\right) + \frac{f_0}{H_\nmax}h_b (x,y)\,, \qquad & n =\nmax\,,
\end{align}
where $q_n$ is the n'th layer QG potential vorticity, $\psi_n$ is the streamfunction, $f_0$ is the inertial frequency, $H_n$ is the n'th layer depth, and $h_b$ is the bottom topography. (Note that in QG $h_b/H_\nmax \ll 1$.) Also the n'th buoyancy jump (reduced gravity) is
\begin{equation}
g'_n \equiv g \frac{\rho_{n}-\rho_{n+1}}{\rho_n}\com
\end{equation}
where $g$ is the acceleration due to gravity and $\rho_n$ is the layer density.

The dynamics of the system are given by the evolution of PV. We introduce a background flow that can vary in the horizontal. The streamfunction associated with this flow can be denoted with $\Psi_n(x,y)$ for each layer, and geostrophy yields its corresponding velocity $\vec{V_n} = (U_n(x,y),V_n(x,y))$ where $\Psi_{ny} = - U_n$ and $\Psi_{nx} = V_n$. We can decompose the streamfunction in each layer into the background flow and deviations from that flow as
\begin{align}
\psi_n^{\tot} = \Psi_n + \psi_n.
\end{align}

%FJP: typo in Q_x somewhere

With this basic decomposition we can then write out the corresponding decompositions in velocity,
\begin{align}
\label{eq:Uequiv}
u_n^{\tot} = U_n - \psi_{n y}\com \nonumber \\
v_n^{\tot} = V_n + \psi_{n x} \com
\end{align}
and
\begin{equation}
q_n^{\tot} = Q_n + \delta_{n\nmax}\frac{f_0}{H_\nmax}h_b + q_n \com
\end{equation}
where $Q_n + \delta_{n\nmax}\frac{f_0}{H_\nmax}h_b$ is the n'th layer background PV, and we obtain the evolution equations
\begin{align}
\label{eq:qg_dynamics}
{q_n}_t + \mathsf{J}(\psi_n,q_n + \delta_{n \nmax} \frac{f_0}{H_\nmax}h_b )& + U_n ({q_n}_x + \delta_{n \nmax} \frac{f_0}{H_\nmax}h_{bx}) + V_n ({q_n}_y + \delta_{n \nmax} \frac{f_0}{H_\nmax}h_{by})+ \nonumber \\
& {Q_n}_y {\psi_n}_x - {Q_n}_x {\psi_n}_y = \ssd - r_{ek} \delta_{n\nmax} \lap \psi_n \com \qquad n = 1,\nmax\com
\end{align}
where $\ssd$ stands for small-scale dissipation, which is achieved by a spectral exponential filter or hyperviscosity, and $r_{ek}$ is the linear bottom drag coefficient. The Kronecker delta, $\delta_{n\nmax}$, indicates that the drag is only applied in the bottom layer.

% FJP: is topography counted twice?

\subsection*{Linear Stability Analysis}

In order to study the stability of a jet in the context of our $n$-layer QG model we focus our attention on basic states that consist of zonal flows, i.e. $\Psi_n(y)$ only.
If we assume that the quadratic quantities are negligible, we can then linearize to obtain, in the conservative limit over a flat bottom,
\begin{align}
\label{eq:qglin_dynamics}
{q_n}_t + U_n {q_n}_x + {Q_n}_y {\psi_n}_x = 0,
\end{align}
for $n = 1, \cdots, \nmax$. We assume that the perturbations are normal modes in the zonal direction and time,
$$ \psi_n = \Re[ \hat \psi_n e^{i(kx - \omega t)} ]. $$
This implies that the PV will be modified appropriately and we denote it with $\hat q_n$. We substitute this into the linear equations and then divide by the exponential to obtain
\begin{align}
%\label{eq:qglin_dynamics}
c {\hat q_n} = U_n {\hat q_n} + {Q_n}_y {\hat \psi_n} ,
\end{align}
where the basic state depends only on $y$ (and on the layer, of course), and we have introduced the phase speed $c=\omega/k$. Note that the actual PVs are
\begin{align}
{\hat q_1} &= (\partial_{yy} - k^2) \hat \psi_1 + \frac{f_0^2}{H_1} \left(\frac{\hat \psi_{2}-\hat \psi_1}{g'_{1}}\right)\,, \qquad & n =1\com \nonumber \\
{\hat q_n} &= (\partial_{yy} - k^2)\hat \psi_n + \frac{f_0^2}{H_n} \left(\frac{\hat \psi_{n-1}- \hat \psi_n}{g'_{n-1}} - \frac{\hat \psi_{n}-\hat \psi_{n+1}}{g'_{n}}\right)\,, \qquad &n = 2,\nmax-1 \com \nonumber \\
{\hat q_\nmax} &= (\partial_{yy} - k^2)\hat \psi_\nmax + \frac{f_0^2}{H_\nmax} \left(\frac{\hat \psi_{\nmax-1} - \hat \psi_\nmax}{g'_{\nmax-1}}\right)\,, \qquad & n =\nmax\,,
\end{align}

\section*{Special case: one-layer model}

In the one-layer case we have
\begin{align}
%\label{eq:qglin_dynamics}
c {\hat q_1} = U_1 {\hat q_1} + {Q_1}_y {\hat \psi_1} ,
\end{align}
\begin{align}
{\hat q_1} = \left[ \partial_{yy} - k^2 - \frac{f_0^2}{g'_1 H_1} \right] \hat \psi_1.
\end{align}

\section*{Special case: two-layer model}

In the two-layer case we have
\begin{align}
%\label{eq:qglin_dynamics}
c {\hat q_n} = U_n {\hat q_n} + {Q_n}_y {\hat \psi_n} ,
\end{align}
\begin{align}
{\hat q_1} &= \left[ \partial_{yy} - k^2 - \frac{f_0^2}{g'_1 H_1}\right] \hat \psi_1 + \frac{f_0^2}{g'_1 H_1} \hat \psi_{2}, \\
{\hat q_2} &= \frac{f_0^2}{g'_1 H_2}\hat \psi_1+ \left[ \partial_{yy} - k^2 - \frac{f_0^2}{g'_1 H_2} \right] \hat \psi_2 .
\end{align}

\end{document}

\section*{Special case: two-layer model}

With $\nmax = 2$, an alternative notation for the perturbation potential vorticities can be written as
\begin{align}
q_1 &= \lap \psi_1 + F_1 (\psi_2 - \psi_1) \nonumber\\
q_2 &= \lap \psi_2 + F_2 (\psi_1 - \psi_2)\com
\end{align}
where we use the definitions
\begin{equation}
F_1 \equiv \frac{k_d^2}{1 + \delta^2}\,, \qquad \:\:\text{and} \qquad F_2 \equiv \delta \,F_1\,,
\end{equation}
with the deformation wavenumber
\begin{equation}
k_d^2 \equiv \, \frac{f_0^2}{g} \frac{H_1+H_2}{H_1 H_2} \per
\end{equation}
With this notation, the ``stretching matrix'' is simply
\begin{equation}
\sS =
\begin{bmatrix}
-F_1 & F_1\\
F_2 & -F_2
\end{bmatrix}\per
\end{equation}
The inversion relationship in Fourier space is
\begin{equation}
\begin{bmatrix}
\hat{\psi}_1\\
\hat{\psi}_2\\
\end{bmatrix}
= \frac{1}{\text{det} \: \sB}
\begin{bmatrix}
-(\kappa^2 + F_2) & -F_1\\
-F_2 & -(\kappa^2 + F_1)
\end{bmatrix}
\begin{bmatrix}
\hat{q}_1\\
\hat{q}_2\\
\end{bmatrix}\com
\end{equation}
where
\begin{equation}
\text{det}\, \sB = \kappa^2\left(\kappa^2 + F_1 + F_2\right)\,.
\end{equation}

\end{document}
%------------------------- % Resume in Latex % Author : Sourabh Bajaj % License : MIT %------------------------ \documentclass[letterpaper,11pt]{article} \usepackage{amssymb} \usepackage{latexsym} \usepackage[empty]{fullpage} \usepackage{titlesec} \usepackage{marvosym} \usepackage[usenames,dvipsnames]{color} \usepackage{verbatim} \usepackage{enumitem} \usepackage{makecell} \usepackage[hidelinks]{hyperref} \usepackage{fancyhdr} \usepackage[english]{babel} \pagestyle{fancy} \fancyhf{} % clear all header and footer fields \fancyfoot{} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} % Adjust margins \addtolength{\oddsidemargin}{-0.5in} \addtolength{\evensidemargin}{-0.5in} \addtolength{\textwidth}{1in} \addtolength{\topmargin}{-.5in} \addtolength{\textheight}{1.0in} \urlstyle{same} \raggedbottom \raggedright \setlength{\tabcolsep}{0in} % Sections formatting \titleformat{\section}{ \vspace{-4pt}\scshape\raggedright\large }{}{0em}{}[\color{black}\titlerule \vspace{-5pt}] %------------------------- % Custom commands \newcommand{\resumeItem}[1]{ \item\small{ {#1} } } \newcommand{\resumeSubheading}[4]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small#3} & \textit{\small #4} \\ \end{tabular*}\vspace{-5pt} } \newcommand{\courseworkSubheading}[2]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \end{tabular*}\vspace{-5pt} } \newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}} \renewcommand{\labelitemii}{$\circ$} \newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]} \newcommand{\resumeSubHeadingListEnd}{\end{itemize}} \newcommand{\resumeItemListStart}{\begin{itemize}} \newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}} %------------------------------------------- %%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} %----------HEADING----------------- \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r} \textbf{{}{\Large Yiran Su}} & {6805 Wood Hollow Dr}\\ {\href{mailto:[email protected]}{$\lozenge$ [email protected]} \href{https://www.linkedin.com/in/su-yiran-a2a146129/}{$\lozenge$ LinkedIn account} $\lozenge$ (512)-999-5939} & {Austin, TX} \end{tabular*} %-----------EDUCATION----------------- \section{Education} \resumeSubHeadingListStart \resumeSubheading {University of Texas at Austin}{Austin, TX, USA} {\makecell[tl]{\textbf{M.S.} in Engineering, \textbf{Software Engineering \& System} track, ECE Dept. \textbf{GPA}: 3.73}} {Aug. 2019 - May. 2021} \resumeSubheading {Sun Yat-sen University, School of Data and Computer Science}{Guangzhou, China} {\makecell[tl]{\textbf{B.E.} in Network Engineering~~~ \textbf{Overall GPA}: 3.85/5.00, \textbf{Junior GPA}: 4.25/5.00\\ \textbf{Course Highlights}: C++ programming, Data Structure and Algorithms, \\ Operating System, Computer Network, Web Programming, Mobile Internet Programming Project}} {Aug. 2015 - Jun. 2019} \resumeSubHeadingListEnd %-----------SKILLS----------------- \section{Skills} \textbf{Programming Language~}{C++, Python, Java, HTML5, CSS, JavaScript, Kotlin, Shell, SQL}\\ \textbf{Framework and Tools~}{React Native, Flask, PyTorch, Tensorflow, Docker, MongoDB, Kubernetes} %%-----------PUBLICATION----------------- \section{Publications} Manor, L., \textbf{Su, Y.}, et al. "\textbf{What is FAFSA? Interpreting non-technical jargon in domain-specific text}", COLING 2020, submitted. 
%-----------AWARDS AND HONORS-----------------
%\section{Awards And Honors}
%\resumeSubHeadingListStart
%\resumeSubItem{Innovative Design Award in the 2018 International Aerial Robotics Competition (2018 IARC)}{}
%\resumeSubItem{Honorable Award in the 2018 Mathematical Contest in Modeling (2018 MCM)}{}
%\resumeSubItem{The First Prize Scholarship of Sun Yat-sen University(Top 5\%)}{}
%\resumeSubHeadingListEnd

%-----------WORK EXPERIENCE-----------------
\section{Intern Experience}
  \resumeSubHeadingListStart
    \resumeSubheading
      {Coherent Logix Inc.}{Austin, USA}
      {Video, CV and Deep Learning Group}{May. 2020 - Aug. 2020}
      \resumeItemListStart
        \resumeItem
          {Explored \textbf{neural network quantization} techniques that convert a floating-point neural network to an integer-based neural network, in order to lower the required computational resources.}
        \resumeItem
          {Applied \textbf{16-bit} Quantization Aware Training (\textbf{QAT}) and Post-training Quantization (\textbf{PQ}) for ResNet, SqueezeNet and MobileNet with \textbf{Tensorflow 2} (in progress).}
      \resumeItemListEnd

    \resumeSubheading
      {Tencent Inc.}{Shenzhen, China}
      {Perceptual Intelligence Group}{Sept. 2018 - Mar. 2019}
      \resumeItemListStart
        \resumeItem
          {Developed a \textbf{pattern-based natural language parsing framework} in \textbf{Python} for a task-oriented Arena of Valor \textbf{chatbot} ``Lu Bu (Lv, Bu)'', while \textbf{reducing} the average latency by \textbf{27\%} to \textbf{less than 90 ms}.}
        \resumeItem
          {Deployed the above framework on a \textbf{Tornado} server, which handled more than \textbf{100k related requests} per day.}
        \resumeItem
          {Designed an \textbf{automatic} user log analyzer (\textbf{Python}) for the chatbot which is able to evaluate high-frequency requests, customer stickiness and new feature performance.}
      \resumeItemListEnd

    \resumeSubheading
      {Graduate Teaching Assistant}{Austin, USA}
      {EE 422C Software Design and Implementation (\textbf{Java}) II}{Jan. 2020 - May. 2020}
      %\resumeItemListStart
      %\resumeItem
      %{Designed and graded assignments for more than 140 students. Held recitations for more than 40 students.}
      %\resumeItemListEnd
  \resumeSubHeadingListEnd

%-----------PROJECT EXPERIENCE-----------------
\section{Project Experience}
  \resumeSubHeadingListStart
    \resumeSubheading
      {Consistency Regularization (CR) in Natural Language Processing}{Austin, TX}
      {Research project for LIN 393: Computational Linguistics at the University of Texas at Austin}{Sept 2019 - Dec 2019}
      \resumeItemListStart
        \resumeItem
          {Embedded the semi-supervised learning concept \textbf{consistency regularization} into supervised learning NLP tasks, in order to make the supervised model \textbf{more robust in its predictions}.}
        %\resumeItem
        %{Proposed \textbf{random delete} (fast perturbation solution) and \textbf{paraphrase substitution} (semantical perturbation solution) to perturb input data.}
        \resumeItem
          {Implemented the new \textbf{TextCNN-CR} model with \textbf{PyTorch}. Scored \textbf{77.06\%} accuracy on the MR (Movie Review Data) binary classification dataset, compared with the 75.33\% accuracy of the original TextCNN model.}
      \resumeItemListEnd

    \resumeSubheading
      {International Aerial Robotics Competition}{Guangzhou \& Beijing, China}
      { \textbf{Innovative Design Award}, Computer Vision Group member}{Sept. 2017 - Aug.
2018} \resumeItemListStart \resumeItem {Designed an object \textbf{detection} and \textbf{location} system for an aerial robot.} \resumeItem {Constructed an \textbf{SVM} ground robot detector in \textbf{C++} for our system by writing a \textbf{self-implemented} Histogram of Oriented Gradient descriptor (\textbf{HOG descriptor}) and applying \textbf{OpenCV}’s related packages.} \resumeItemListEnd \resumeSubHeadingListEnd % %--------PROGRAMMING SKILLS------------ %\section{Programming Skills} % \resumeSubHeadingListStart % \item{ % \textbf{Languages}{: Scala, Python, Javascript, C++, SQL, Java} % \hfill % \textbf{Technologies}{: AWS, Play, React, Kafka, GCE} % } % \resumeSubHeadingListEnd %------------------------------------------- \end{document}
\documentclass{pset_template}
\title{Boolean Algebra}
\date{February 1, 2019}
\editorOne{Alexander Sun}
\editorTwo{Sanjit Bhat}
\lectureNum{1}
\contestMonth{February}
\begin{document}
\maketitle

\section{Introduction}
Boolean algebra is the branch of algebra in which the variables store truth values. Every variable is either true (1) or false (0). There are three main operations that form the basis of Boolean algebra: \textbf{AND} (conjunction), \textbf{OR} (disjunction), and \textbf{NOT} (negation).
\bigskip

\noindent
\section{Basic Operations}
\textbf{AND}, denoted x $\land$ y or x AND y or x $\cdot$ y, satisfies x $\land$ y = 1 if x = y = 1, else x $\land$ y = 0.

\noindent
\textbf{OR}, denoted x $\lor$ y or x OR y or x + y, satisfies x $\lor$ y = 0 if x = y = 0, else x $\lor$ y = 1.

\noindent
\textbf{NOT}, denoted $\neg$x or NOT x or $\sim$x or $\bar{x}$, satisfies $\neg$x = 0 if x = 1 and $\neg$x = 1 if x = 0; it reverses the truth value of its operand.

\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
x & y & x$\land$y & x$\lor$y \\
\hline
0 & 0 & 0 & 0 \\
\hline
1 & 0 & 0 & 1 \\
\hline
0 & 1 & 0 & 1 \\
\hline
1 & 1 & 1 & 1 \\
\hline
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{ |c|c| }
\hline
x & $\neg$x \\
\hline
0 & 1 \\
\hline
1 & 0 \\
\hline
\end{tabular}
\end{center}

\section{Secondary Operations}
\textbf{Material Implication}, denoted x $\rightarrow$ y = $\neg$x$\lor$y: if x = 1, then x $\rightarrow$ y = y; if x = 0, then x $\rightarrow$ y = 1.

\noindent
\textbf{Exclusive Or}, denoted x $\oplus$ y or x XOR y: x $\oplus$ y = 1 if x = 1 $\&$ y = 0 or x = 0 $\&$ y = 1, else x $\oplus$ y = 0; it is true when the values are different.

\noindent
\textbf{Equivalence}, denoted x $\equiv$ y: x $\equiv$ y = 1 if x = 1 $\&$ y = 1, or if x = 0 $\&$ y = 0; it is the complement of XOR, and is true when the values are the same.

\noindent
\textbf{Dual}: the dual is found by replacing all OR's with AND's and all AND's with OR's, and all 1's with 0's and all 0's with 1's.

\noindent
\textbf{Complement}: found by negating each individual value and replacing all OR's with AND's and all AND's with OR's, and all 1's with 0's and all 0's with 1's.

\bigskip
\noindent
Memorizing Boolean algebra laws is extremely beneficial for solving problems quickly and efficiently. Here is a link to a page with almost every law; most are derivable, but they should still be memorized: \url{http://www.uiltexas.org/files/academics/UILCS-BooleanIdentities.pdf}

De Morgan's Rule is crucial to simplifying Boolean algebra problems. It states:
$$ \bar{A} + \bar{B} = \overline{AB}$$
or
$$ \bar{A} \cdot \bar{B} = \overline{A+B}$$

\bigskip
\noindent
With these basic rules memorized, all Boolean algebra problems should be simple to work through.

\section{Exercises}
\begin{enumerate}
    \item Simplify completely (ACSL 2001-2002): $( A + B )\oplus A B$
    \item Simplify the following expression: F = BC + $\overline{BC}$ + BA
    \item Simplify the Boolean expression (A+B+C)$\overline{(D+E)}$ + (A+B+C)(D+E).
\end{enumerate}

For more practice resources on Boolean algebra, read through \href{http://www.categories.acsl.org/wiki/index.php?title=Boolean_Algebra}{the ACSL Wiki page}.
\end{document}
\hypertarget{haskell}{%
\section{Haskell - Introduction}\label{haskell}}

\begin{itemize}
\tightlist
\item
  Functional programming is a style of programming in which the basic method of computation is the application of functions to arguments;
\item
  A functional language is one that supports and encourages the functional style.
\end{itemize}

e.g.~in an imperative language like Java, summing the integers from 1 to 10 looks like:

\begin{lstlisting}[language=Java]
int total = 0;
for (int i = 1; i <= 10; i++)
  total = total + i;
\end{lstlisting}

in Haskell, it is

\begin{lstlisting}[language=Haskell]
sum [1..10]
\end{lstlisting}

\hypertarget{first-steps}{%
\subsection{First Steps}\label{first-steps}}

Haskell has both a compiler and an interpreter.

\hypertarget{function-application}{%
\subsubsection{Function Application}\label{function-application}}

\begin{lstlisting}[language=Haskell]
f a b + c*d --Apply the function f to a and b, and add the result to the product of c and d.
\end{lstlisting}

Moreover, function application is assumed to have higher priority than all other operators.

\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{figures/haskellMath.png}
\caption{Examples}
\end{figure}

\hypertarget{haskell-scripts}{%
\subsubsection{Haskell Scripts}\label{haskell-scripts}}

\begin{itemize}
\tightlist
\item
  New functions are defined within a script, a text file comprising a sequence of definitions.
\item
  By convention, Haskell scripts usually have a .hs suffix on their filename. This is not mandatory, but is useful for identification purposes.
\item
  To load a script, type ghci test.hs (test.hs is the script name).
\item
  To use commands defined within the script, one can simply type the command name.
\end{itemize}

\begin{lstlisting}[language=Haskell]
average ns = sum ns `div` length ns
\end{lstlisting}

\clearpage
\documentclass{InsightArticle}

\usepackage[dvips]{graphicx}
\usepackage{float}
\usepackage[hang]{subfigure}

\usepackage[dvips,
bookmarks,
bookmarksopen,
backref,
colorlinks,linkcolor={blue},citecolor={blue},urlcolor={blue},
]{hyperref}

\title{A Patch-Based Inpainting Framework}

%
% NOTE: This is the last number of the "handle" URL that
% The Insight Journal assigns to your paper as part of the
% submission process. Please replace the number "1338" with
% the actual handle number that you get assigned.
%
\newcommand{\IJhandlerIDnumber}{3250}

% Increment the release number whenever significant changes are made.
% The author and/or editor can define 'significant' however they like.
\release{0.00}

% At minimum, give your name and an email address. You can include a
% snail-mail address if you like.
\author{David Doria}
\authoraddress{Rensselaer Polytechnic Institute, Troy NY}

\begin{document}

\IJhandlefooter{\IJhandlerIDnumber}

\ifpdf
\else
   %
   % Commands for including Graphics when using latex
   %
   \DeclareGraphicsExtensions{.eps,.jpg,.gif,.tiff,.bmp,.png}
   \DeclareGraphicsRule{.jpg}{eps}{.jpg.bb}{`convert #1 eps:-}
   \DeclareGraphicsRule{.gif}{eps}{.gif.bb}{`convert #1 eps:-}
   \DeclareGraphicsRule{.tiff}{eps}{.tiff.bb}{`convert #1 eps:-}
   \DeclareGraphicsRule{.bmp}{eps}{.bmp.bb}{`convert #1 eps:-}
   \DeclareGraphicsRule{.png}{eps}{.png.bb}{`convert #1 eps:-}
\fi

\maketitle

\ifhtml
\chapter*{Front Matter\label{front}}
\fi

\begin{abstract}
\noindent
This document describes a system to fill a hole in an image by copying patches from elsewhere in the image. These patches should be a good continuation of the image outside the hole boundary into the hole. The implementation is very generic, allowing the developer to select or easily add new methods for the patch priority order, the patch comparison function, and other parameters of the algorithm. The ``basic'' algorithm is called ClassicalImageInpainting and is based on the algorithm described in ``Object Removal by Exemplar-Based Inpainting'' (Criminisi et al.). The code is available here: https://github.com/daviddoria/PatchBasedInpainting
\end{abstract}

\IJhandlenote{\IJhandlerIDnumber}

\tableofcontents

\section{Introduction}
This document describes a system to fill a hole in an image by copying patches from elsewhere in the image. These patches should be a good continuation of the hole boundary into the hole. The patch copying is done in an order which attempts to preserve linear structures in the image.

\section{Dependencies}
This code makes heavy use of multiple libraries:
\begin{itemize}
\item VTK $>=$ 6.0
\item ITK $>=$ 4.2
\item Boost $>=$ 1.51
\item CMake $>=$ 2.8.6
\item Qt $>=$ 4.8
\end{itemize}

The code is also organized into git submodules. These are included if you clone with:
\begin{verbatim}
git clone --recursive https://github.com/daviddoria/PatchBasedInpainting.git
\end{verbatim}
or
\begin{verbatim}
git clone https://github.com/daviddoria/PatchBasedInpainting.git
git submodule update --init --recursive
\end{verbatim}

The required submodules are:
\begin{itemize}
\item Mask - This submodule contains a set of classes for indicating a hole (to inpaint) in an image. It has functions to do things such as count the number of hole/valid pixels in a region, compute the gradient of a region without considering the values at masked pixels, etc.
\item ITKHelpers (via Mask) - This submodule contains functions we commonly need to operate on ITK data structures.
For example, it includes functions to extract the pixel values at specified pixel locations, extract channels of an image, etc.
\item Helpers (via Mask/ITKHelpers) - This submodule contains helper functions for the C++ language. It includes functions to do things such as write a vector to a stream, check whether a vector contains a specified value, compute the maximum of each channel of a vector container, etc. It also includes a Statistics class to compute averages, variances, etc. of multicomponent containers.
\item BoostHelpers - This submodule allows us to do things such as inspect the values in a property map, output the values in a queue, etc.
\item CMakeHelpers - This submodule allows us to use the submodule hierarchy in a much more straightforward fashion.
\item ITKQtHelpers - This submodule allows us to convert back and forth between ITK and Qt data structures (images), which is needed to display patches from an itk::Image in a QGraphicsView.
\item ITKVTKCamera - This submodule allows us to easily compensate for the difference in coordinate systems between ITK and VTK.
\item ITKVTKHelpers - This submodule allows us to convert back and forth between ITK and VTK data structures (images). For example, when we want to display an itk::Image in a vtkRenderer, we must first convert it to a vtkImageData.
\item QtHelpers - This submodule provides functions to simplify some tasks for QImage (setting an image to a constant value, etc.) and working with colors in Qt.
\item VTKHelpers - This submodule contains many functions that we repeatedly need for VTK, including things like constructing a transparent image, outlining a region of an image, etc.
\end{itemize}

\section{Terminology}
Throughout this document, the ``source region'' is the portion of the image which is known (is not part of the hole) at the beginning. The ``target region'' is the current hole to be filled.

\section{Basic Algorithm Overview}
The inputs to the algorithm consist of an image and a binary mask of the same size as the image. We use a custom Mask class to describe the hole (https://github.com/daviddoria/Mask). Throughout this paper, we have colored the region in the input image corresponding to the hole bright green. This color is irrelevant - we have done this only to make it obvious whether any part of the hole remains after inpainting (it should not), and for easier debugging to ensure these pixels are not used in any part of the computation. In practice, the input image need not be modified.

\section{Algorithm Synthetic Demonstration}
Figure \ref{fig:SyntheticDemonstration} shows a synthetic demonstration of the algorithm. The image consists of a black region (top) and a gray region (bottom). This simple example is used for testing because we know the result to expect - the dividing line between the black region and gray region should be continued smoothly.

\begin{figure}[H]
\centering
\subfigure[Image to be filled. The region to be filled is shown in bright green.]
{
\includegraphics[width=0.3\linewidth]{images/BlackWhite}
\label{fig:SyntheticDemonstration:ExampleInputImage}
}
\subfigure[The mask of the region to inpaint.]
{
\includegraphics[width=0.3\linewidth]{images/BlackWhiteMask}
\label{fig:SyntheticDemonstration:ExampleInputMask}
}
\subfigure[The result of the inpainting.]
{
\includegraphics[width=0.3\linewidth]{images/BlackWhiteResult}
\label{fig:SyntheticDemonstration:ExampleInputOutput}
}
\caption{Synthetic Demonstration}
\label{fig:SyntheticDemonstration}
\end{figure}

\section{Realistic Demonstration}
Figure \ref{fig:RealisticDemonstration} shows a real example of the algorithm. This result shows the typical quality of inpainting that the algorithm produces.

\begin{figure}[H]
\centering
\subfigure[Image to be filled. The region to be filled is shown in bright green.]
{
\includegraphics[width=0.3\linewidth]{images/Bungee}
\label{fig:RealisticDemonstration:ExampleInputImage}
}
\subfigure[The mask of the region to inpaint.]
{
\includegraphics[width=0.3\linewidth]{images/BungeeMask}
\label{fig:RealisticDemonstration:ExampleInputMask}
}
\subfigure[The result of the inpainting. This took about 30 seconds on a P4 3GHz processor with a 206x308 image and a patch radius = 5.]
{
\includegraphics[width=0.3\linewidth]{images/BungeeResult}
\label{fig:RealisticDemonstration:ExampleInputOutput}
}
\caption{Realistic Demonstration}
\label{fig:RealisticDemonstration}
\end{figure}

\section{Algorithm Structure}
An overview of the algorithm is:
\begin{itemize}
\item Initialize:
\begin{itemize}
\item Read an image and a binary mask. Non-zero pixels in the mask describe the hole to fill.
\item Set the size of the patches which will be copied. (Typically an 11x11 patch (patch radius = 5) is used.)
\item Locate all patches of the image which are completely inside the image and completely in the source region. These are stored as an $std::vector<itk::ImageRegion<2> >$ named SourcePatches.
\end{itemize}
\item Main loop:
\begin{itemize}
\item Compute the priority of every pixel on the hole boundary (see Section~\ref{sec:PriorityFunctions}).
\item Determine the boundary pixel with the highest priority. We will call this the target pixel. The region centered at the target pixel, with the same size as the patches, is called the target patch.
\item Find the source patch which best matches the portion of the target patch in the source region.
\item Copy the corresponding portion of the source patch into the target region of the target patch.
\item Repeat until the target region consists of zero pixels.
\end{itemize}
\end{itemize}

%%%%%%%%%%%%%%%%%%
\section{Priority Functions}
\label{sec:PriorityFunctions}
The priority function is used to determine which target patch to fill next. We provide several such functions.

\subsection{PriorityCriminisi}
The priority term described in \cite{criminisi} is given by the product of a Confidence term $C(p)$ and a Data term $D(p)$. This priority function attempts to both continue linear structures sooner rather than later, and fill patches where a larger number of the pixels in the patch are already filled.

\subsection{PriorityConfidence}
This priority function is the confidence term from the Criminisi priority function. Using this function essentially makes the algorithm fill patches from the outside of the hole and work its way inward.

\subsection{PriorityRandom}
This priority function selects a random target node to fill next. It is probably best to only use this ordering for debugging.

%%%%%%%%%%%%%%%%
\section{Patch Difference Functions}
\label{sec:PatchMatching}
Patch comparisons can be done between corresponding pixels or using non-pixel-specific metrics. Several patch difference functions are provided.
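Before describing the individual classes, the following self-contained sketch illustrates the kind of comparison these classes perform. It uses plain arrays and hypothetical names rather than the actual ITK-based implementation: it accumulates squared differences over the pixels of the target patch that are already filled (valid), which is roughly the role played by ImagePatchDifference combined with an SSD pixel functor.
\begin{verbatim}
// Illustrative sketch only (not the real API): sum of squared differences
// between a source patch and the valid pixels of a target patch.
#include <cassert>
#include <iostream>
#include <vector>

typedef std::vector<std::vector<float> > Patch; // patch = list of multi-channel pixels

float MaskedSSD(const Patch& sourcePatch, const Patch& targetPatch,
                const std::vector<bool>& valid)
{
  assert(sourcePatch.size() == targetPatch.size());
  assert(valid.size() == targetPatch.size());
  float sum = 0.0f;
  for(unsigned int i = 0; i < targetPatch.size(); ++i)
  {
    if(!valid[i]) continue; // pixels still inside the hole do not contribute
    for(unsigned int c = 0; c < targetPatch[i].size(); ++c)
    {
      float difference = sourcePatch[i][c] - targetPatch[i][c];
      sum += difference * difference;
    }
  }
  return sum; // the best source patch is the one minimizing this value
}

int main()
{
  Patch source(2, std::vector<float>(3, 10.0f));
  Patch target(2, std::vector<float>(3, 12.0f));
  std::vector<bool> valid(2);
  valid[0] = true;  // the first pixel of the target patch is known
  valid[1] = false; // the second pixel is still in the hole
  std::cout << MaskedSSD(source, target, valid) << std::endl; // prints 12
  return 0;
}
\end{verbatim}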
\subsection{ImagePatchDifference}
This is the ``standard'' patch difference function that computes a sum of differences of corresponding pixels in two patches. It is templated on the comparison to be performed on each pair of corresponding pixels.

\subsection{GMHDifference}
This function computes the difference between the gradient magnitude histogram of the valid region of the target patch and the gradient magnitude histogram of the entire source patch.

%%%%%%%%%%%%%%%%
\section{Pixel Difference Functions}
\label{sec:PixelDifferenceFunctions}
The ImagePatchDifference class described above requires a function to compute the difference between pixels. Several such distance functions are provided, and the most notable ones are described here.

\subsection{SumSquaredPixelDifference (SSD)}
This function computes the sum of squared differences between every pixel in the valid region of the target patch and its corresponding pixel in the source patch. This is the standard difference function used in patch-based inpainting (e.g. \cite{criminisi}). This function is generic for any pixel type that has an operator[], but is specialized for pixels of type itk::CovariantVector. We also provide an explicitly unrolled version of this function; since it is at the heart of the algorithm and the computational bottleneck, we have tried to do everything possible to ensure it runs as fast as possible.

\subsection{SumAbsolutePixelDifference (SAD)}
This function computes the sum of absolute differences between ND pixels. This function is generic for any pixel type that has an operator[], but is specialized for pixels of type itk::CovariantVector. We also provide an explicitly unrolled version of this function; since it is at the heart of the algorithm and the computational bottleneck, we have tried to do everything possible to ensure it runs as fast as possible.

\subsection{WeightedSumSquaredPixelDifference}
This function computes a weighted sum of squared differences between ND pixels.

\subsection{HSVSSD}
The standard SSD function is acceptable if the image is represented in a color space (like RGB) where each channel is ``non-wrapping''. That is, the values in the upper range of the channel (255) should be significantly different from the values in the lower range of the channel (0). This is not the case with the H channel of the HSV color space, so we must treat its cyclic nature specially. In this class, we treat the S and V channels as ``standard'' channels, and use a special difference functor for the H channel that takes into account that we are measuring an angular distance, so that wrapping over the upper range (1) back into the lower range (0) does not indicate an enormous difference but is handled correctly.

%%%%%%%%%%%%%%%
\section{Drivers}
As you will have noticed, the code is very heavily templated. A substantial amount of code is required to set up the objects to pass to the algorithm. To prevent code duplication when wanting to use the same algorithm in two contexts (for example, we may want a version of ClassicalImageInpainting that displays its progress as it goes along and another that does not), we have separated this setup functionality into what we call a \emph{driver}. This allows us to separate the data preparation from the algorithm setup. For example, we have root/ClassicalImageInpainting.cpp and root/ClassicalImageInpaintingBasicViewer.cpp, both of which use the ClassicalImageInpainting driver.
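The separation described above can be illustrated with a small standalone sketch. All names here are invented for illustration and do not match the real classes: the driver owns all of the setup and hands fully constructed components to a generic algorithm, so that several front ends can reuse the same setup code.
\begin{verbatim}
// Illustration of the driver idea with invented names; not the real API.
#include <iostream>

struct DummyImage { int width; int height; };
struct DummyPriority { };   // would rank the boundary pixels
struct DummyDifference { }; // would compare patches

// The generic algorithm only sees already-configured components.
template <typename TImage, typename TPriority, typename TDifference>
void GenericInpaintingAlgorithm(const TImage&, const TPriority&, const TDifference&)
{
  std::cout << "running the inpainting loop..." << std::endl;
}

// The driver: all of the object setup lives here, in one place.
template <typename TImage>
void ExampleInpaintingDriver(const TImage& image)
{
  DummyPriority priority;     // choose and configure the priority function
  DummyDifference difference; // choose and configure the patch comparison
  GenericInpaintingAlgorithm(image, priority, difference);
}

int main()
{
  DummyImage image = {100, 100};
  // A terminal program calls the driver directly; a GUI program could call
  // the same driver from a worker thread (see the next section).
  ExampleInpaintingDriver(image);
  return 0;
}
\end{verbatim}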
%%%%%%%%%%%%%%%
\section{Interactivity}
We have drivers for each of our algorithms that are called by executables with matching names that are purely terminal programs. That is, they require no GUI context to be created (they do not display anything) and do not require any GUI input from the user (they do not ask questions using QDialog, etc.). In these cases, the algorithm (i.e. InpaintingAlgorithm) can be invoked as a normal function call. However, in the case that we want to either display the intermediate progress (using BasicViewer) or prompt the user to do something like select a patch (for example, TopPatchesDialog), we must run the algorithm in a different thread so that the GUI stays responsive during the algorithm. As an example, refer to ClassicalImageInpaintingBasicViewer. Here the last line calls InpaintingAlgorithm, but is wrapped in a QtConcurrent::run call (which also requires boost::bind since InpaintingAlgorithm is a function template). This starts the algorithm in a separate thread, and returns from the driver function to hit the app.exec() at the end of main(), which triggers the GUI thread loop to start. Because of this behavior, any objects allocated on the stack (locals) in the driver function will immediately go out of scope, and nothing will work (everything will crash because the objects are no longer valid). To prevent this, we create everything and pass everything as a std::shared_ptr. The reference counts of the shared pointers are increased as they are accepted by the algorithm in the QtConcurrent::run() call, so they persist until they are no longer needed.
%%%%%%%%%%%%%%%
\section{Visitors}
In this code we heavily use the \emph{visitor pattern}. The idea is that the algorithm is specified as a generic routine, which the developer fills in by specifying the particular action that should be taken at each step in the algorithm. This really aligns the code with the definition of ``algorithm'' that might be used outside of a programming context. As a simple example, consider the following algorithm:
\begin{verbatim}
void Algorithm(VisitorType visitor)
{
  while(not done)
  {
    currentObject = ...;
    visitor.Initialize(currentObject);
    visitor.Process(currentObject);
    visitor.Finalize(currentObject);
  }
}
\end{verbatim}
Here, as long as your visitor knows how to initialize, process, and finalize an object, it can be used with this algorithm. For example,
\begin{verbatim}
#include <iostream>

struct MyVisitor
{
  void Initialize(Object currentObject) { std::cout << "init."; }
  void Process(Object currentObject) { std::cout << "process."; }
  void Finalize(Object currentObject) { std::cout << "finalize."; }
};

...
MyVisitor myVisitor;
Algorithm(myVisitor);
...
\end{verbatim}
This will run the algorithm with the above dummy visitor, which will simply write out the name of each function as it is called.
\subsection{Acceptance Visitors}
These visitors, used with InpaintingAlgorithmWithVerification, take a source and target patch and determine if they are acceptable, usually by comparing some difference function to a threshold. For example, GMHAcceptanceVisitor uses the GMHDifference functor and then compares that value to a specified threshold. If the distance is below the threshold, the acceptance visitor returns true (acceptable), and otherwise returns false (not acceptable).
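The following minimal sketch illustrates the acceptance idea. It is not the actual GMHAcceptanceVisitor, which is templated on the graph types and uses the real GMHDifference functor; the types below are simplified stand-ins that only demonstrate the compare-to-threshold logic.
\begin{verbatim}
#include <iostream>

// Simplified stand-in for a patch; the real visitors operate on graph nodes
// and image regions.
struct Patch { /* ... */ };

// Stand-in for a difference functor: returns a scalar distance between the
// valid region of the target patch and the source patch.
struct ToyDifference
{
  float operator()(const Patch& target, const Patch& source) const
  {
    return 0.1f;  // placeholder; the real functor compares histograms, etc.
  }
};

// Acceptance visitor: accept the pairing only if the difference is below a
// user-specified threshold.
template <typename TDifference>
struct ToyAcceptanceVisitor
{
  TDifference difference;
  float threshold;

  bool AcceptMatch(const Patch& target, const Patch& source) const
  {
    return difference(target, source) < threshold;
  }
};

int main()
{
  ToyAcceptanceVisitor<ToyDifference> visitor{ToyDifference{}, 0.5f};
  Patch target, source;
  std::cout << std::boolalpha << visitor.AcceptMatch(target, source) << "\n";
  return 0;
}
\end{verbatim}
The real visitors follow the same accept-or-reject contract, but operate on the node and descriptor types used throughout the library.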
\subsection{Information Visitors}
\subsection{Descriptor Visitors}
%%%%%%%%%%%%%%%
\section{Utilities}
One of the main components of our inpainting algorithm is a priority queue that tracks which pixels are on the current hole boundary and maintains their priority. This problem is harder than it sounds. We have created an indirect priority queue that is a wrapper around boost::heap::binomial_heap. The trouble is that we will often have nodes in the queue that get invalidated (when the boundary changes, pixels currently in the queue are no longer valid boundary pixels), so we should not process those nodes when they get to the top of the queue. We must also provide a mechanism for changing the priority values associated with nodes that are already in the queue, which we do by storing those values in a property map and using a boost::indirect_cmp to sort the queue. The result is an easy-to-use queue that does what is expected of a boundary node queue for inpainting. This directory also contains other helper functions (PatchHelpers) we need for some inpainting algorithms.
%%%%%%%%%%%%%%%
\section{Implementation Details}
\label{sec:ImplementationDetails}
\subsection{Isophotes}
An isophote is simply a gradient vector rotated by 90 degrees. It indicates the direction of ``same-ness'' rather than the direction of maximum difference. There is a small trick, however, to computing the isophotes. We originally tried to compute the isophotes using the following procedure:
\begin{itemize}
\item Convert the RGB image to a grayscale image.
\item Blur the grayscale image.
\item Compute the gradient using itkGradientImageFilter.
\item Rotate the resulting vectors by 90 degrees.
\item Keep only the values in the source region.
\end{itemize}
This procedure produces the gradient magnitude map shown in Figure \ref{fig:ErroneousGradient}.
\begin{figure}[H]
\centering
\includegraphics[width=0.3\linewidth]{images/ErroneousGradient}
\caption{Result of naively computing the image gradient.}
\label{fig:ErroneousGradient}
\end{figure}
The high values of the gradient magnitude surrounding the target region are very troublesome. The resulting gradient magnitude image using this technique is sensitive to the choice of the pixel values in the target region, which we actually want to be a completely arbitrary choice (it should not affect anything). More importantly, the gradient plays a large part in the computation of the pixel priorities, and this computation is greatly disturbed by these erroneous values. Simply ignoring these boundary isophotes is not an option because the isophotes on the boundary are exactly the ones that are used in the computation of the Data term. To fix this problem, we immediately dilate the mask specified by the user. This allows us to compute the isophotes as described above, but now we have image information on both sides of the hole boundary, leading to a valid gradient everywhere we need it to be. Figure \ref{fig:ErrorneousGradientCorrection} shows the procedure for fixing this problem.
\begin{figure}[H]
\centering
\subfigure[Image to be filled. The target region is shown in green.]
{ \includegraphics[width=0.3\linewidth]{images/BlackWhite} \label{fig:ErrorneousGradientCorrection:InputImage} }
\subfigure[The dilated mask.]
{ \includegraphics[width=0.3\linewidth]{images/BlackWhiteDilatedMask} \label{fig:ErrorneousGradientCorrection:InputDilated} }
\subfigure[The gradient magnitude with pixels in the new (enlarged) target region set to zero.]
{ \includegraphics[width=0.3\linewidth]{images/MaskedGradientMagnitude} \label{fig:ErrorneousGradientCorrection:Output} }
\caption{Procedure for fixing the erroneous gradient problem.}
\label{fig:ErrorneousGradientCorrection}
\end{figure}
As you can see, this gradient magnitude image is exactly what we would expect.
\subsection{Boundary Normals}
There are two things to be careful with when computing the boundary normals: computing the normals only on the one pixel thick boundary, and using the correct side of the masked region as the boundary.
\subsubsection{Computing boundary normals only on the one pixel thick boundary}
If we compute the normals directly on the binary mask, the resulting set of vectors is too discretized to be of use. Therefore, we first blur the mask. However, the gradient of the blurred mask results in non-zero vectors in the gradient image in many more pixels (a ``thick'' boundary) than the gradient of the original mask (a single pixel ``thin'' boundary). Therefore, we must mask the gradient of the blurred mask to keep only the pixels which would have been non-zero in the original mask gradient.
\subsubsection{Using the correct side of the masked region as the boundary}
There are two potential boundaries that can be extracted from a masked region: the ``inner'' boundary and the ``outer'' boundary. As shown in Figure \ref{fig:BoundarySide}, the inner boundary (red) is composed of pixels originally on the white (masked) side of the blob, and the outer boundary (green) is composed of pixels originally on the black (unmasked) side of the blob. It is important that we use the outer boundary, because we need the boundary to be defined at the same pixels at which we have image information, which is only in the source (black/unmasked) region.
\begin{figure}[H]
\centering
\subfigure[The inner boundary.]
{ \includegraphics[width=0.3\linewidth]{images/BlackWhiteMask} \label{fig:BoundarySide:Mask} }
\subfigure[Outer boundary (green) and inner boundary (red).]
{ \includegraphics[width=0.3\linewidth]{images/BothBoundaries} \label{fig:BoundarySide:BothBoundaries} }
\caption{Inner vs. Outer Boundary of a Region}
\label{fig:BoundarySide}
\end{figure}
%%%%%%%%%%%%%%%
\begin{thebibliography}{9}
\bibitem{criminisi} A. Criminisi, P. P\'erez, and K. Toyama, \emph{Object Removal by Exemplar-Based Inpainting}, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003.
\end{thebibliography}
\end{document}
{ "alphanum_fraction": 0.783197832, "avg_line_length": 60.225388601, "ext": "tex", "hexsha": "6acb8b28d7cadb55928fb97280ed6454c6862f09", "lang": "TeX", "max_forks_count": 18, "max_forks_repo_forks_event_max_datetime": "2022-02-24T20:02:10.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-11T15:10:23.000Z", "max_forks_repo_head_hexsha": "d308fe62045e241a4822bb855df97ee087420d9b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jingtangliao/ff", "max_forks_repo_path": "Documentation/PatchBasedInpainting.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "d308fe62045e241a4822bb855df97ee087420d9b", "max_issues_repo_issues_event_max_datetime": "2019-04-24T14:45:46.000Z", "max_issues_repo_issues_event_min_datetime": "2019-04-24T09:56:15.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jingtangliao/ff", "max_issues_repo_path": "Documentation/PatchBasedInpainting.tex", "max_line_length": 1170, "max_stars_count": 39, "max_stars_repo_head_hexsha": "d308fe62045e241a4822bb855df97ee087420d9b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jingtangliao/ff", "max_stars_repo_path": "Documentation/PatchBasedInpainting.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-01T18:11:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-01T07:59:51.000Z", "num_tokens": 5288, "size": 23247 }
\chapter{Results}
In the previous chapter we explained the implementation details of our motion segmentation pipeline. In particular, we described the purpose of every individual pipeline component and how they are implemented. \\ \\ In summary, our pipeline consists of the following main stages: generate optical flow estimations, track motion trajectories using these flow fields, compute similarities between every trajectory pair with respect to certain measures and, last, perform a segmentation by clustering the trajectories according to their similarities. Each such stage offers various implementation variants from which a user can select. A complete list of all pipeline mode abbreviations is provided in Table $\ref{tab:pipeline_abbreviations}$. In the following, \enquote{a specific stage implementation} is called a \textit{pipeline mode} and \enquote{a set of pipeline modes} is called a \textit{pipeline combination}.
\begin{table}[H]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{\textbf{Pipeline Abbreviations}} \\ \hline
\textbf{Flow Methods (Sec. $\ref{sec:generate_of}$)} & \textbf{Similarity Measures (Sec. $\ref{sec:affinity_matrix_impl}$)} & \textbf{Segmentation Techniques (Sec. $\ref{sec:sparse_motion_segmentation}$)} \\ \hline
\begin{tabular}[c]{@{}c@{}}Horn and Schunck (\textbf{HS})\\ (Page $\pageref{sec:hs_flows}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Product of Distances (\textbf{PD})\\ (Eq. $\ref{eq:prod_combination}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Spectral Clustering (\textbf{SC})\\ (Sec. $\ref{sec:spectral_clustering_impl}$)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Large Displacement Optical Flows (\textbf{LDOF})\\ (Page $\pageref{sec:ldof_flows}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Product of Distances with Depths (\textbf{PED})\\ (Eq. $\ref{eq:prod_combination}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}MinCut (\textbf{MC})\\ (Sec. $\ref{sec:min_cut_seg}$)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Layered RGB-D Flow Estimation (\textbf{LRGBD})\\ (Page $\pageref{sec:lrgbd_flows}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Sum of Distances (\textbf{SD})\\ (Eq. $\ref{eq:sum_dist}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Kernighan-Lin (\textbf{KL})\\ (Sec. $\ref{sec:kl_impl}$)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Semi Rigid Scene Flows (\textbf{SRSF})\\ (Page $\pageref{sec:srsf_flows}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Sum of Distances with Depths (\textbf{SED})\\ (Eq. $\ref{eq:sum_dist}$)\end{tabular} & \\ \hline
\end{tabular}
}
\caption[List of Pipeline Abbreviations]{A list of all implemented pipeline modes and their abbreviations. A pipeline combination is formed by combining one flow method with one similarity measure and one segmentation technique.}
\label{tab:pipeline_abbreviations}
\end{table}
In this chapter we describe how we evaluate segmentations produced by different pipeline combinations. In particular, we examine the influence of several pipeline modes on the segmentation quality. \\ \\ We start by defining the default pipeline parameter specifications (Sec. $\ref{sec:spectral_clustering_parameters}$) of specific implementations. Next, we introduce the datasets (Sec. $\ref{sec:datasets}$) we use during our evaluation. Then we describe our cluster merger post-processing step (Sec. $\ref{sec:seg_merger}$), which has to be run before starting the evaluation. We continue by describing the methodology (Sec.
$\ref{sec:methodology}$) we rely on to perform the experiments. Finally, we describe our experiments (Sec. $\ref{sec:experiments}$), the corresponding results and our observations. We conclude this chapter with some pipeline runtime measurements (Sec. $\ref{sec:runtime_measurements}$) and a discussion of our main findings (Sec. $\ref{sec:discussion}$).
\section{Default Parameter Assignments}
\label{sec:spectral_clustering_parameters}
Even though our pipeline mainly consists of three main components, that is, the flow computation stage, the affinity matrix generation stage and the segmentation stage, it exhibits a large number of parameters that have to be specified. This makes it tedious to use our implementation, especially when the pipeline has to be run repeatedly with different modes (e.g. during our experiments). Therefore, we aim at reducing the number of free parameters in our pipeline by providing certain default assignments. \\ \\ In this section we address this problem of default parameter assignments. In particular, we explain which default value each parameter is assigned and why. \\ \\ Our motion segmentation pipeline exhibits many parameters since it consists of several stages, namely the flow generation, affinity matrix computation and segmentation stages. Moreover, every stage implements different techniques to approach its specified tasks. Hence, a user has to specify both which pipeline combination should be run and which parameter values of a specific method should be used. To get the overall picture of this complexity we illustrate all available \textit{pipeline combinations} in Figure $\ref{fig:pipeline_combinations}$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1\linewidth] {evaluation/pipeline_combinations}
\end{center}
\caption[Pipeline Combinations]{A listing of all available pipeline combinations. Our pipeline implements a series of flow methods (4), various affinity computation modes (4) and different segmentation techniques (3). In total, there are $4 \times (2 \times 2 + 2 \times 1) = 24$ different combinations available. However, not every component mode can be combined with any other. For example, \textit{S-affinities} can only be used in combination with the \textit{Kernighan-Lin Heuristic} (KL). A listing of all these combination acronyms and their meaning is given in Table $\ref{tab:pipeline_abbreviations}$.}
\label{fig:pipeline_combinations}
\end{figure}
In the following we give a brief description of several default assignments used in the different pipeline stages. \\ \\ For generating the flow fields we rely on existing implementations as described in Section $\ref{sec:generate_of}$. We do not attempt to modify any parameter settings while utilizing those flow methods and thus strictly rely on their default settings. A summary of these flow methods can be found in Section $\ref{sec:impl_optical_flow}$ on page $\pageref{sec:impl_optical_flow}$. \\ \\ Before tracking trajectories as described in Section $\ref{sec:trajectory_tracking}$, we initially have to extract their feature locations (Sec. $\ref{sec:tracking_candidates}$). The resulting feature extraction is determined by a parameter defining the sparsity of the sampling, indicating that only every $k$-th feature is used. Throughout our experiments we use $k = 8$.
This choice is justified by the fact that a denser sampling (smaller $k$) would not enhance the result's quality but would negatively influence the overall runtime, whereas a sparser sampling (larger $k$) would do the opposite. \\ \\ Next, we discuss the defaults for the affinity matrix generation stage. Before computing a \textit{P-affinity}, either by running the mode \textit{PD} or \textit{PED}, we have to specify a certain value $\lambda$. This parameter is used in Equation $\ref{eq:prod_dist_affinity}$ and acts as a scale of the similarity between two trajectories. In the following we use$\footnote{We determined the defaults by trying out different values for $\lambda$ and took those which yielded the visually most promising results. Please note that this kind of parameter selection is by no means optimal.}$ $\lambda = 0.01$ when running PD and $\lambda = 50$ when running the affinity mode PED. The reason for using different $\lambda$ values is that the PD and PED distances have different scales. As we can see, the two $\lambda$ defaults lie at different powers of ten. However, the exact choice of this scale does not matter$\footnote{Meaning that changing the mantissa does not matter much, since it will not affect the final affinity matrix drastically.}$ that much, as long as it is close to the same power of ten as its corresponding default. \\ \\ When having a closer look at the computed affinities, we observe that approximately one third of the neighbors of a trajectory have a similarity large enough to noticeably influence it. Therefore, we set the number of closest neighbors per trajectory to one third of the total neighbor count. An example of such affinities is shown in Figure $\ref{fig:cars_affinities}$. A dark pixel indicates a low affinity between two trajectories and bright regions indicate large affinities. \\ \\ In order to generate S-affinities we rely on the definition of Equation $\ref{eq:sum_dist}$, which is used to compute distances between trajectories. However, this equation is parameterized by several weights $\beta$. In this work we use exactly the same $\beta$ parameter values as specified in the paper $\cite{KB15b}$, which are equal to
\begin{equation}
\begin{aligned}
\bar{\beta}_0 = 6 \text{, } \beta_0 = 2 \text{, } \beta_1 = \beta_3 = -0.02 \text{ and } \beta_2 = -4
\end{aligned}
\end{equation}
In our experiments we always set the dummy vertex count in the Kernighan-Lin partitioning method to zero. \\ \\ For every segmentation method, we set the \textit{cluster count} it is supposed to solve for to twice the estimated number of moving objects present in the target dataset. In other words, the cluster count (\textbf{CC}) is defined as
\begin{equation}
\text{CC} = 2 \times \text{Estimated Moving Objects in Dataset}
\label{eq:cc_def}
\end{equation}
This estimation approach only takes clearly distinct moving objects into account and thus probably underestimates the real moving object count. \\ \\ Segmentation methods that rely on P-affinities utilize a certain number of eigenvectors resulting from the eigenvalue decomposition of the Laplacian. For this eigenvector count (\textbf{EV}) we use twice the number of clusters the methods are supposed to solve for, i.e.
this count is defined as
\begin{equation}
\text{EV} = 2 \times \text{CC}
\end{equation}
Moreover, when using the MC segmentation, we set the default of its data and smoothness relaxation parameter $\nu$ used by its energy term (defined in Equation $\ref{eq:min_cut_energy_revisited}$) to the dimension of the affinity matrix times $10^{-6}$. Again, this value has been determined by simply trying out various assignments. Furthermore, we observed that it is sufficient to run about 10--20 iterations until MC converges. Therefore, we conservatively set the number of iterations $\text{MC}_i$ to 20. \\ \\ Unless stated otherwise, we rely on these default parameter specifications to generate our results. Moreover, in Section $\ref{sec:parameter_experiments}$, we offer a statistical justification for the default choices of $\lambda$, $\text{CC}$, $\text{EV}$ and $\text{MC}_i$.
\section{Datasets}
\label{sec:datasets}
In this section we introduce the datasets we used for generating our results. Our datasets are a diverse collection of videos originating from various sources. Some videos were captured with Microsoft's Kinect, some with Asus' Xtion; some were captured by ourselves and others stem from other authors. In particular, the Cars dataset is from the BMS-26 dataset$\footnote{See \url{http://lmb.informatik.uni-freiburg.de/resources/datasets/}}$, the datasets prefixed by \textit{Bonn} are from Bonn's RGB-D Rigid Multi-Body Dataset $\footnote{See \url{http://www.ais.uni-bonn.de/download/rigidmultibody/}}$ and the Two Chairs, Statue and One Chair datasets are from $\cite{2016arXiv160804642B}$. In addition, we captured the Waving Hand dataset ourselves using an Xbox Kinect device.\\ \\ We aim to work with a diverse collection of datasets, meaning that we want to use datasets that capture different types of motion, show indoor and outdoor scenes and are captured by static or moving cameras. Also, our implementation should be able to generate segmentations for long as well as short sequences. \\ \\ Typically, a dataset consists of the frames of an RGB-D video and the corresponding depth and color camera calibrations. Moreover, for a selected set of frames we manually drew$\footnote{We drew these ground truth images using Gimp.}$ ground truth (\textbf{GT}) images. Please note that all GT images were drawn according to the annotator's judgment and thus probably do not correspond exactly to the true motion. Each color in such a GT image belongs to a moving object. However, the color \textit{black} has a special meaning and marks pixels for which it is uncertain to which moving object they belong. Such pixel locations are ignored during our evaluation.
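To make this labeling convention concrete, the following minimal sketch shows how such a GT image could be converted into integer object labels while skipping black pixels. The flat RGB pixel buffer is a simplified stand-in for the image representation actually used in our implementation.
\begin{verbatim}
#include <array>
#include <map>
#include <vector>

using Color = std::array<unsigned char, 3>;  // RGB ground truth pixel

// Map every non-black ground truth color to an integer object label; black
// pixels (the "uncertain" convention) keep the label -1 and are ignored.
std::vector<int> GroundTruthToLabels(const std::vector<Color>& gtPixels)
{
  std::map<Color, int> colorToLabel;
  std::vector<int> labels(gtPixels.size(), -1);
  for (std::size_t i = 0; i < gtPixels.size(); ++i) {
    const Color& c = gtPixels[i];
    if (c[0] == 0 && c[1] == 0 && c[2] == 0) continue;  // skip black pixels
    auto inserted = colorToLabel.emplace(c, static_cast<int>(colorToLabel.size()));
    labels[i] = inserted.first->second;
  }
  return labels;
}
\end{verbatim}
Each distinct color thus becomes one object label, and black pixels never contribute to the evaluation.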
\\ \\ In the following we list our datasets, showing for each dataset three of its frames, one of its ground truth motion segmentation images, a list of properties, such as the number of estimated objects, and lastly a brief description of what is happening in the video sequence:
\begin{itemize}
\item \textbf{Cars}: \\ \textbf{Frames}: 19, \textbf{Resolution}: $480 \times 640$, \textbf{Depths}: No, \textbf{Estimated Objects}: 3
\begin{figure}[H]
\begin{center}
\subfigure[Frame 1]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/cars/1} }
\subfigure[Frame 10]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/cars/10} }
\subfigure[Frame 19]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/cars/19} }
\subfigure[GT Frame 1]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/cars/gt1} }
\end{center}
\caption[Dataset Cars]{An outdoor scene showing two cars, both moving to the left, one in the front and the other in the background. The camera is static.}
\label{fig:eval_datasets_cars}
\end{figure}
\item \textbf{Bonn Watercan}: \\ \textbf{Frames}: 58, \textbf{Resolution}: $480 \times 640$, \textbf{Depths}: Yes, \textbf{Estimated Objects}: 5
\begin{figure}[H]
\begin{center}
\subfigure[Frame 4]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_watercan/4} }
\subfigure[Frame 31]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_watercan/31} }
\subfigure[Frame 58]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_watercan/58} }
\subfigure[GT Frame 4]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_watercan/gt4} }
\end{center}
\caption[Dataset Bonn Watercan]{An indoor scene showing two men pushing different objects on a table. First, the man on the right side moves a water can along the table; after a while, the second man moves a packet from the left side of the table to its center. The camera is slightly shaking.}
\label{fig:eval_datasets_bonn_watercan}
\end{figure}
\item \textbf{Bonn Chairs}: \\ \textbf{Frames}: 58, \textbf{Resolution}: $480 \times 640$, \textbf{Depths}: Yes, \textbf{Estimated Objects}: 5
\begin{figure}[H]
\begin{center}
\subfigure[Frame 15]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_chairs/15} }
\subfigure[Frame 30]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_chairs/30} }
\subfigure[Frame 45]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_chairs/45} }
\subfigure[GT Frame 15]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_chairs/gt15} }
\end{center}
\caption[Dataset Bonn Chairs]{An indoor scene showing a man moving a chair. First, he moves it to the left, then he rotates it slightly. The camera is slightly shaking.}
\label{fig:eval_datasets_bonn_chairs}
\end{figure}
\item \textbf{Bonn Cerealbox}: \\ \textbf{Frames}: 101, \textbf{Resolution}: $480 \times 640$, \textbf{Depths}: Yes, \textbf{Estimated Objects}: 5
\begin{figure}[H]
\begin{center}
\subfigure[Frame 40]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_cerealbox/40} }
\subfigure[Frame 60]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_cerealbox/60} }
\subfigure[Frame 80]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_cerealbox/80} }
\subfigure[GT Frame 40]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/bonn_cerealbox/gt40} }
\end{center}
\caption[Dataset Bonn Cerealbox]{An indoor scene showing a man moving a cup on a table.
The camera is slightly moving.}
\label{fig:eval_datasets_bonn_cerealbox}
\end{figure}
\item \textbf{Statue}: \\ \textbf{Frames}: 111, \textbf{Resolution}: $480 \times 640$, \textbf{Depths}: Yes, \textbf{Estimated Objects}: 8
\begin{figure}[H]
\begin{center}
\subfigure[Frame 30]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/statue/30} }
\subfigure[Frame 60]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/statue/60} }
\subfigure[Frame 90]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/statue/90} }
\subfigure[GT Frame 30]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/statue/gt30} }
\end{center}
\caption[Dataset Statue]{An indoor scene showing a rotating statue. After a while, a man removes the upper part; we only see the man's hand. The camera is static.}
\label{fig:eval_datasets_statue}
\end{figure}
\item \textbf{Waving Hand}: \\ \textbf{Frames}: 104, \textbf{Resolution}: $480 \times 640$, \textbf{Depths}: Yes, \textbf{Estimated Objects}: 5
\begin{figure}[H]
\begin{center}
\subfigure[Frame 20]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/wh/20} }
\subfigure[Frame 30]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/wh/30} }
\subfigure[Frame 40]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/wh/40} }
\subfigure[GT Frame 40]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/wh/gt40} }
\end{center}
\caption[Dataset Waving Hand]{An indoor scene showing a waving hand. The camera is static.}
\label{fig:eval_datasets_waving_hand}
\end{figure}
\item \textbf{Two Chairs}: \\ \textbf{Frames}: 61, \textbf{Resolution}: $512 \times 424$, \textbf{Depths}: Yes, \textbf{Estimated Objects}: 7
\begin{figure}[H]
\begin{center}
\subfigure[Frame 15]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/two_chairs/15} }
\subfigure[Frame 40]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/two_chairs/40} }
\subfigure[Frame 60]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/two_chairs/60} }
\subfigure[GT Frame 15]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/two_chairs/gt15} }
\end{center}
\caption[Dataset Two Chairs]{An indoor scene showing two spinning chairs. The camera is static.}
\label{fig:eval_datasets_two_chairs}
\end{figure}
\item \textbf{One Chair}: \\ \textbf{Frames}: 101, \textbf{Resolution}: $512 \times 424$, \textbf{Depths}: Yes, \textbf{Estimated Objects}: 7
\begin{figure}[H]
\begin{center}
\subfigure[Frame 45]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/one_chair/45} }
\subfigure[Frame 60]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/one_chair/60} }
\subfigure[Frame 75]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/one_chair/75} }
\subfigure[GT Frame 45]{ \includegraphics[width=0.22\linewidth] {evaluation/datasets/one_chair/gt45} }
\end{center}
\caption[Dataset One Chair]{An indoor scene showing a man lifting a chair and spinning its lower part. The camera is static.}
\label{fig:eval_datasets_one_chair}
\end{figure}
\end{itemize}
\section{Segment Merger}
\label{sec:seg_merger}
In this section we describe our technique to reduce oversegmentations produced by our pipeline. Our goal is to offer a mechanism that allows us to evaluate the maximal potential of the pipeline. However, some segmentations are not clearly defined, for example in the presence of fast movements of non-rigidly moving objects. A good example of this problem case is when we attempt to segment a video showing a waving hand.
That is the rationale for merging unclear segments. Moreover, we only allow merging sparse segmentations, since our dense segmentation approach basically blurs the sparse segments and thus does not yield comparable results$\footnote{The more post-processing steps are added, the more the final segmentation quality is obscured.}$. This stage is implemented as a post-processing step and is invoked before running the evaluation program. \\ \\ Usually, the results generated by our pipeline exhibit an oversegmentation when compared against their ground truth. An example of such an oversegmentation is illustrated in Figure $\ref{fig:merger_result_b}$. Ideally, before evaluating the quality, we would like to refine our over-segmented results in such a way that the total number of unnecessary or unclear segments is reduced. Therefore, we merge segments that cause an oversegmentation. We achieve this by comparing the generated segments against a manually drawn ground truth image. For each generated segment we determine its best matching ground truth segment with respect to their overlap. A conceptual visualization of our merging technique is given in Figure $\ref{fig:merger_result}$. Please note that this merging technique is only a simple heuristic; a more sophisticated method for merging clusters is presented in $\cite{OB14b}$.
\begin{figure}[H]
\begin{center}
\subfigure[Ground Truth Segmentation]{ \includegraphics[width=0.47\linewidth] {implementation/merger/mask} \label{fig:merger_result_a} }
\subfigure[Generated Segmentation]{ \includegraphics[width=0.47\linewidth] {implementation/merger/oversegmentation} \label{fig:merger_result_b} }
~
\subfigure[Overlay Ground Truth / Segmentation]{ \includegraphics[width=0.47\linewidth] {implementation/merger/mask_segments_overlay} \label{fig:merger_result_c} }
\subfigure[Merged Segmentation]{ \includegraphics[width=0.47\linewidth] {implementation/merger/merged} \label{fig:merger_result_d} }
\end{center}
\caption[Segmentation Merger]{Visualization of our segment merger's input and output. As input it expects a ground truth segmentation (see Figure $\ref{fig:merger_result_a}$) and a generated oversegmentation (see Figure $\ref{fig:merger_result_b}$). Subfigure $\ref{fig:merger_result_c}$ shows the overlay of ground truth and segmentation used for matching, and the output is the merged segmentation shown in Subfigure $\ref{fig:merger_result_d}$.}
\label{fig:merger_result}
\end{figure}
A segmentation $S$ is basically a set of points in which every point is assigned to a certain segment identifier. In the following, let $S_j$ denote the subset of points in $S$ that belong to segment label $j$. Similarly, let $M$ denote the set of all ground truth points with their corresponding mask labels and $M_i$ the set of all points that belong to the ground truth mask $i$. To compute the merged segments we do the following: for every segment $S_j$ in $S$ we determine its best matching ground truth segment $M_i$ as defined in Equation $\ref{eq:merging_label_formula}$.
\begin{equation}
i = \argmaxl_{M_i \in M} \left\vert{S_j \cap M_i}\right\vert
\label{eq:merging_label_formula}
\end{equation}
The idea is to find the ground truth mask $i$ that yields the most point intersections with the currently considered segment $S_j$. Next, we update every point in $S_j$ by setting its segment label to $i$. After doing so we have computed the merged version of the initial oversegmentation. An example is shown in Subfigure $\ref{fig:merger_result_d}$.
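The following minimal sketch summarizes the merging step. It assumes that both the sparse segmentation and the ground truth have already been rasterized into flat integer label arrays of equal size, with negative values marking unlabeled (or ignored, i.e. black) pixels; this is a simplification of the data structures actually used in our implementation.
\begin{verbatim}
#include <map>
#include <vector>

// Relabel every generated segment with the ground truth mask it overlaps
// most, as defined by the argmax above. Negative labels mark points without
// a label and are skipped.
std::vector<int> MergeSegments(const std::vector<int>& segmentation,
                               const std::vector<int>& groundTruth)
{
  // Count |S_j intersect M_i| for every (segment j, mask i) pair.
  std::map<std::pair<int, int>, std::size_t> overlap;
  for (std::size_t p = 0; p < segmentation.size(); ++p) {
    if (segmentation[p] < 0 || groundTruth[p] < 0) continue;
    ++overlap[{segmentation[p], groundTruth[p]}];
  }

  // For every segment j, pick the mask i with the largest overlap.
  std::map<int, std::pair<int, std::size_t>> bestMask;  // j -> (i, count)
  for (const auto& entry : overlap) {
    const int j = entry.first.first;
    const int i = entry.first.second;
    auto it = bestMask.find(j);
    if (it == bestMask.end() || entry.second > it->second.second)
      bestMask[j] = {i, entry.second};
  }

  // Relabel the segmentation with the best matching ground truth labels.
  std::vector<int> merged(segmentation.size(), -1);
  for (std::size_t p = 0; p < segmentation.size(); ++p) {
    const int j = segmentation[p];
    if (j < 0) continue;
    auto it = bestMask.find(j);
    merged[p] = (it != bestMask.end()) ? it->second.first : j;
  }
  return merged;
}
\end{verbatim}
Every generated segment is thus relabeled with the ground truth mask it overlaps most, which corresponds to the argmax of Equation $\ref{eq:merging_label_formula}$.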
\section{Methodology}
\label{sec:methodology}
In this section we give a brief description of the methodology used to perform our experiments. \\ \\ We want to quantitatively evaluate the quality of segmentations produced by running different pipeline combinations. In particular, we evaluate how well moving objects are segmented. We use the datasets described in Section $\ref{sec:datasets}$. Produced motion segmentations are compared against the available ground truth images. While running the pipeline we rely on the defaults described in Section $\ref{sec:spectral_clustering_parameters}$.\\ \\ Before evaluating the performance of generated segmentations, we run our segment merger, which is described in Section $\ref{sec:seg_merger}$, as a post-processing step. The merged segments are quantitatively evaluated using different statistical measures. After running the merger, each GT object is assigned to at most one label. Therefore, the evaluation measures can be computed independently per object. In particular, for each segmentation result we compute its precision, recall and F1 score by comparing it against its ground truth. The definition of these measures is listed in Equation $\ref{eq:statistical_measures}$.
\begin{equation}
\begin{aligned}
& \text{precision} = \frac{\text{TP}}{\text{TP} + \text{FP}} \\
& \text{recall} = \frac{\text{TP}}{\text{TP} + \text{FN}} \\
& F_1 \text{ Score} = 2 \left( \frac{\text{precision} \times \text{recall}}{\text{precision} +\text{recall}} \right)
\end{aligned}
\label{eq:statistical_measures}
\end{equation}
The final reported output is the average of these measures over all foreground objects. \\ \\ Finally, some words about how we determine the quantities TP, FP and FN. We iterate over all points in every cluster and compare them against the available ground truth image. While doing so we count the true positives (\textbf{TP}), false positives (\textbf{FP}) and false negatives (\textbf{FN}). The exact definition of these quantities is listed in Equation $\ref{eq:statistical_counts}$. For a given label $\alpha$ these measures are defined as:
\begin{equation}
\begin{aligned}
\textbf{TP} &:= \text{Samples correctly labeled $\alpha$} \\
\textbf{FP} &:= \text{Samples incorrectly labeled $\alpha$} \\
\textbf{FN} &:= \text{Samples that were labeled $\alpha$ in the GT but are attributed to a wrong label.}
\end{aligned}
\label{eq:statistical_counts}
\end{equation}
A detailed explanation of these measures can be found in the background Section $\ref{sec:on_statistics_bg}$ on page $\pageref{sec:on_statistics_bg}$. \\ \\ One last note: during our evaluations we want to determine the quality of segmentations and the influence of design choices regardless of additional post-processing steps. Since our dense motion segmentation method basically performs a blurring of the sparse segmentation and thus arbitrarily influences the quality, we exclude dense segmentations from our quantitative evaluation.
\section{Experiments}
\label{sec:experiments}
In this section we list the results of a series of experiments performed by running our pipeline on the datasets presented in Section $\ref{sec:datasets}$ on page $\pageref{sec:datasets}$. The results are evaluated according to the measures explained in Section $\ref{sec:methodology}$ on page $\pageref{sec:methodology}$. \\ \\ For performing our experiments we used a MacBook Pro.
The hardware specifications of this machine are listed in Table $\ref{tab:used_hardware_specs}$.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Hardware Specifications}} \\ \hline
\textbf{CPU} & 2.5 GHz Intel Core i7 \\ \hline
\textbf{Threads} & 8 \\ \hline
\textbf{Memory} & 16 GB 1600 MHz DDR3 \\ \hline
\textbf{GPU} & Intel Iris Pro 1536 MB \\ \hline
\end{tabular}
\caption[Hardware Specifications]{A listing of the hardware specifications of the machine used to produce the results.}
\label{tab:used_hardware_specs}
\end{table}
The first series of experiments (Sec. $\ref{sec:parameter_experiments}$) examines the influence of certain pipeline parameters. In particular, we justify the choice of our default parameter assignments. Next, we study the behaviour and quality of segmentations when altering our flow estimation methods (Sec. $\ref{sec:flow_methods}$). The main experiment (Sec. $\ref{sec:overall_performance}$) is the evaluation of some strong pipeline combinations applied to all of our datasets. Finally, we perform a series of experiments to find the best pipeline combination for each stage (Sec. $\ref{sec:pipeline_combination_cmp}$).
\subsection{Parameter Experiments}
\label{sec:parameter_experiments}
In this section we describe a series of experiments examining our default pipeline parameters. In particular, the results presented in this section act as a justification for choosing the described default values.
\subsubsection{Examine Influence of Cluster Merger}
In the following experiment we examine the influence of our cluster merger post-processing step. In particular, we visually study the effect of this merging step on the resulting segmentations. We generated segmentations for every LDOF pipeline combination. The results are shown in Figure $\ref{fig:eval_raw_vs_merged}$.
\begin{figure}[H]
\begin{center}
\subfigure[Raw PD SC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10_segmentations_f_30/ldof_pd_sc} }
\subfigure[Merged PD SC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10/ldof_pd_sc} }
\subfigure[Raw PD MC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10_segmentations_f_30/ldof_pd_mc} }
\subfigure[Merged PD MC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10/ldof_pd_mc} }
~
\subfigure[Raw PED SC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10_segmentations_f_30/ldof_ped_sc} \label{fig:eval_bonn_chairs_raw_segmentations_frame_30_c} }
\subfigure[Merged PED SC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10/ldof_ped_sc} \label{fig:bonn_chairs_c_10_d} }
\subfigure[Raw PED MC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10_segmentations_f_30/ldof_ped_mc} \label{fig:eval_bonn_chairs_raw_segmentations_frame_30_d} }
\subfigure[Merged PED MC]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10/ldof_ped_mc} \label{fig:bonn_chairs_c_10_e} }
~
\subfigure[Raw SD KL]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10_segmentations_f_30/ldof_sd_kl} \label{fig:eval_bonn_chairs_raw_segmentations_frame_30_e} }
\subfigure[Merged SD KL]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10/ldof_sd_kl} \label{fig:bonn_chairs_c_10_f} }
\subfigure[Raw SED KL]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10_segmentations_f_30/ldof_sed_kl} \label{fig:eval_bonn_chairs_raw_segmentations_frame_30_f} }
\subfigure[Merged SED KL]{ \includegraphics[width=0.2\linewidth] {evaluation/bonn_chairs_c_10/ldof_sed_kl} \label{fig:bonn_chairs_c_10_g} }
\end{center}
\caption[Bonn Chairs Segmentations Frame 30]{Raw and merged segmentations produced by our pipeline when running all modes on the Bonn Chairs dataset using LDOF flow fields.}
\label{fig:eval_raw_vs_merged}
\end{figure}
From these results we can see that our pipeline tends to produce oversegmentations. In particular, the static background is clustered into many segments. Also, the seat of the chair tends to get segmented into many parts. We also see that, after applying our merger, these parts form a coherent object. We conclude that our cluster merger does not significantly obfuscate the segmentation results. The corresponding statistical measurements are listed in Table $\ref{tab:eval_stat_raw_merged}$.
\subsubsection{Examine Varying Cluster Count}
\label{sec:varying_cluster_exp}
In this experiment we examine the influence of a varying number of clusters on the segmentation quality. For this purpose we generated segmentations on the Bonn Chairs dataset using the LDOF SED KL pipeline mode. The resulting segmentations are shown in Figure $\ref{fig:bonn_chairs_sed_varyingclusters}$ and the corresponding measurements are listed in Table $\ref{tab:bonn_chairs_ldof_sed_c_6_9_10_eval}$.
\begin{figure}[H]
\begin{center}
\subfigure[2 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_2} }
\subfigure[3 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_3} }
\subfigure[4 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_4} }
~
\subfigure[5 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_5} }
\subfigure[6 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_6} }
\subfigure[7 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_7} }
~
\subfigure[8 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_8} }
\subfigure[9 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_9} }
\subfigure[10 Clusters]{ \includegraphics[width=0.31\linewidth] {evaluation/bonn_chairs_ldof_varying_c_sed/f_30_c_10} }
\end{center}
\caption[Bonn Chairs SED Segmentations for Varying Cluster Count]{A visualization of the resulting segmentations for a varying cluster count when running \textit{LDOF SED KL} on the \textit{Bonn Chairs} dataset.}
\label{fig:bonn_chairs_sed_varyingclusters}
\end{figure}
We observe that using more and more clusters yields better segmentations. This expected behaviour is confirmed by the statistical results shown in Figure $\ref{fig:bonn_chairs_plot_avg_stat}$. To obtain a more solid basis for this reasoning, we additionally evaluated LDOF PD SC and LDOF PED MC for a varying number of clusters and generated their corresponding performance graphs.
\begin{figure}[H]
\begin{center}
\subfigure[Recall / Precision Plot]{ \includegraphics[width=0.47\linewidth] {evaluation/bonn_chairs/avg/avg_rec_prec} }
\subfigure[Cluster Count / F1 Score Plot]{ \includegraphics[width=0.47\linewidth] {evaluation/bonn_chairs/avg/avg_clusters_f1} }
\end{center}
\caption[Bonn Chairs Varying Clusters]{Plots of the average performance of the combinations LDOF PD SC (Tab. $\ref{tab:bonn_chairs_ldof_sed_c_6_9_10_eval_pd_sc}$), LDOF PED MC (Tab. $\ref{tab:bonn_chairs_ldof_sed_c_6_9_10_eval_ped_mc}$) and LDOF SED KL (Tab. $\ref{tab:bonn_chairs_ldof_sed_c_6_9_10_eval}$) for a varying number of clusters. The left plot shows recall versus precision and the right plot shows the F1 score as a function of the number of clusters.}
\label{fig:bonn_chairs_plot_avg_stat}
\end{figure}
\subsubsection{Examine Convergence of MC}
In this experiment we have a closer look at the convergence rate of the Minimum Cut (MC) segmentation method (Sec. $\ref{sec:min_cut_seg}$). For this purpose we run the mode LDOF PED MC on the Two Chairs dataset. Moreover, we set the cluster count to 20. On the one hand, using that many clusters results in oversegmentations, but on the other hand this also ensures a meaningful convergence plot (due to our segment merger). The graph in Figure $\ref{fig:convergence_rate_mc}$ visualizes the convergence rate of the MinCut (MC) segmentation technique. The corresponding measurements are listed in Table $\ref{tab:two_chairs_ped_mc_iterations}$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.47\linewidth] {evaluation/two_chairs/performance_iter/iter_f1}
\end{center}
\caption[Convergence Rate MinCut Segmentation]{Visualizing the convergence rate of Minimum Cut when running LDOF PED MC.
We observe that the more iterations are run, the higher the F1 score gets.}
\label{fig:convergence_rate_mc}
\end{figure}
We observe that the more iterations we run, the higher the F1 score gets. However, we also observe that the F1 score already converges after about 4 iterations. This matches our assumption that no more than 20 iterations have to be run when using the MC segmentation method.
\begin{figure}[H]
\begin{center}
\subfigure[1 Iteration]{ \includegraphics[width=0.22\linewidth] {evaluation/two_chairs/iters/iter_1} }
\subfigure[2 Iterations]{ \includegraphics[width=0.22\linewidth] {evaluation/two_chairs/iters/iter_2} }
\subfigure[3 Iterations]{ \includegraphics[width=0.22\linewidth] {evaluation/two_chairs/iters/iter_3} }
\subfigure[5 Iterations]{ \includegraphics[width=0.22\linewidth] {evaluation/two_chairs/iters/iter_5} }
\end{center}
\caption[Convergence Segmentations Two Chairs]{Visualizing the convergence of the MC segmentation method.}
\label{fig:two:chairs_segmentations_ped_mc_iters_exp}
\end{figure}
\subsubsection{Examine $\lambda$ Default}
In this experiment we examine the influence of the parameter $\lambda$, which is used to scale P-affinities. In particular, we try to justify our default $\lambda$ choices. In the following we use the Bonn datasets and evaluate the quality of LDOF PD SC and LDOF PED SC for different $\lambda$ values. The averaged statistics are listed in Table $\ref{tab:varying_lambda_experiment}$.
\begin{table}[H]
\centering
\setlength\tabcolsep{4pt}
\begin{minipage}{0.48\textwidth}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Varying $\lambda$ on PD} \\ \hline
$\lambda$ & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
5 & 11.68\% & 10.73\% & 11.18\% \\ \hline
1 & 15.68\% & 11.51\% & 13.28\% \\ \hline
0.1 & 34.05\% & 30.08\% & 31.94\% \\ \hline
\textbf{0.01} & \textbf{46.37}\% & \textbf{52.94}\% & \textbf{49.44}\% \\ \hline
0.001 & 43.00\% & 45.67\% & 44.29\% \\ \hline
0.0001 & 25.11\% & 25.78\% & 25.44\% \\ \hline
\end{tabular}
\end{minipage}%
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Varying $\lambda$ on PED} \\ \hline
$\lambda$ & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
100 & 36.60\% & 35.11\% & 35.84\% \\ \hline
\textbf{50} & \textbf{52.44}\% & \textbf{59.28}\% & \textbf{55.65}\% \\ \hline
10 & 45.33\% & 52.22\% & 48.53\% \\ \hline
5 & 53.50\% & 51.02\% & 52.23\% \\ \hline
1 & 47.58\% & 43.50\% & 45.45\% \\ \hline
0.1 & 44.75\% & 41.00\% & 42.79\% \\ \hline
\end{tabular}
\end{minipage}
\caption[Experiment Varying $\lambda$]{The averaged statistics of a varying $\lambda$ on P-affinities applied to the Bonn datasets. The left table shows the results when using PD affinities, the right one when using PED affinities. The best determined choices for $\lambda$ are marked in bold face.}
\label{tab:varying_lambda_experiment}
\end{table}
We observe that $\lambda = 0.01$ works best for PD affinities and $\lambda = 50$ works best for PED affinities. \\ \\ In the following we visualize the influence of $\lambda$ on the segmentation quality. For this purpose we produced PD SC segmentations on the Cars dataset for different $\lambda$ values. The resulting segmentations are shown in Figure $\ref{fig:cars_dataset_lambdas}$.
\begin{figure}[H]
\begin{center}
\subfigure[$\lambda$ = 5]{ \includegraphics[width=0.31\linewidth] {evaluation/cars/lambdas/5} \label{fig:cars_dataset_lambdas_a} }
\subfigure[$\lambda$ = 0.01]{ \includegraphics[width=0.31\linewidth] {evaluation/cars/lambdas/0_01} \label{fig:cars_dataset_lambdas_b} }
\subfigure[$\lambda$ = 0.0001]{ \includegraphics[width=0.31\linewidth] {evaluation/cars/lambdas/0_0001} \label{fig:cars_dataset_lambdas_c} }
~
\subfigure[Undersegmented]{ \includegraphics[width=0.31\linewidth] {evaluation/cars/lambdas/seg_5} \label{fig:cars_dataset_lambdas_d} }
\subfigure[Ideal]{ \includegraphics[width=0.31\linewidth] {evaluation/cars/lambdas/seg_0_01} \label{fig:cars_dataset_lambdas_e} }
\subfigure[Oversegmented]{ \includegraphics[width=0.31\linewidth] {evaluation/cars/lambdas/seg_0_0001} \label{fig:cars_dataset_lambdas_f} }
\end{center}
\caption[Influence varying $\lambda$]{Illustration of the influence of the parameter $\lambda$ used to scale PD-affinities. The first row shows the affinities between a certain trajectory on the car in the back (marked by a red circle) and its neighbors. The second row shows the corresponding segmentations.}
\label{fig:cars_dataset_lambdas}
\end{figure}
As we can see, the default value $\lambda = 0.01$ produced the best segmentation. Moreover, a too large value results in the undersegmentation shown in the second row for $\lambda = 5$, whereas a too small value yields the oversegmentation shown for $\lambda = 0.0001$. These results allow us to conclude that our $\lambda$ defaults are reliable enough to use them during our experiments.
\subsubsection{Examine Eigenvector-/Cluster-Count Defaults}
In Section $\ref{sec:spectral_clustering_parameters}$ we mentioned which default values we use for the number of clusters and eigenvectors when running either the SC or the MC segmentation method. We stated that we use twice the estimated count of the moving objects present in the video as the cluster count and twice the cluster count as the number of eigenvectors. In order to justify the usage of these defaults, we created a series of segmentations for a varying number of clusters and eigenvectors. \\ \\ In this experiment we evaluate the Cars dataset, since it has a static camera and a known number of large moving objects. In particular, there are two moving cars, each forming a moving object, plus the background. Therefore we use an estimate of \textit{3 segments}. Moreover, we use the optimal $\lambda$ value for PD affinities. Finally, we run the SC segmentation method for every cluster/eigenvector count combination between 3 and 6 clusters and 3 and 12 eigenvectors. To ease the readability of the results, we produced the two graphs shown in Figure $\ref{fig:cc_ev_combinations}$. The graph on the left plots the F1 score for a fixed number of clusters and a varying number of eigenvectors. In contrast, the graph on the right side fixes the number of used eigenvectors but keeps the cluster count as a free variable.
\begin{figure}[H]
\begin{center}
\subfigure[Fixed Clusters and Varying Eigenvectors]{ \includegraphics[width=0.444\linewidth] {evaluation/exploring_params/free_ev} }
\subfigure[Fixed Eigenvectors and Varying CC]{ \includegraphics[width=0.505\linewidth] {evaluation/exploring_params/free_cc} }
\end{center}
\caption[Varying number of Clusters / Eigenvectors]{Two graphs that show the F1 score for a varying number of clusters and eigenvectors on the Cars dataset.
The graph on the left shows plots for fixed cluster numbers and the graph on the right plots the F1 score for a fixed eigenvector count and varying cluster counts.}
\label{fig:cc_ev_combinations}
\end{figure}
At first glance, the resulting graphs seem to exhibit a very complicated behaviour. We observe oscillations in the F1 scores when increasing the number of clusters. This behaviour seems to be counterintuitive. Hence, the optimal cluster/eigenvector count choice seems to be more complex than our simple heuristic. However, when considering the fixed-clusters graph, we observe that our assumption to use \textit{6 clusters} ranks amongst the best variants. Moreover, when considering the plots for fixed eigenvectors, we see that using 6 clusters together with 12 eigenvectors also yields top results. \\ \\ However, we also notice that this assumption may not be the best variant, since other cluster-eigenvector combinations produce equally good results according to the F1 score. Despite this last observation, we can conclude that our defaults for the cluster and eigenvector counts allow us to produce good results. Therefore, we use this cluster and eigenvector heuristic throughout this evaluation. \\ \\ Additional \enquote{varying the cluster-eigenvector count} experiments and their measurements can be found in the appendix in Section $\ref{sec:additional_cc_ev_exp}$.
\subsection{On Exploring Flow Methods}
\label{sec:flow_methods}
Evaluating every pipeline combination on every dataset would exceed our capabilities, since there are simply too many cross-combinations. Therefore, we would like to detect and filter out some weak combinations in advance. By having a closer look at the performance of the flow methods, we will be able to drop two of them. \\ \\ In this section we examine the quality of the utilized flow methods. In particular, we show that the flow methods HS and LRGBD do not produce good estimates and thus can be dropped from further consideration during our series of experiments. This reduces the total number of pipeline combinations by a factor of two. \\ \\ In the first sub-experiment we compare the segmentations produced on LDOF and HS flow fields. In the second experiment we examine the quality of segmentations produced by LRGBD flow fields.
\subsubsection{LDOF vs. HS Flow Fields}
Neither of the two flow estimation methods LDOF and HS makes use of depth fields. Therefore, we would like to determine whether one of these two methods is significantly better. \\ \\ In the following experiment we compute P-affinities using LDOF and HS flow fields on the Bonn datasets. In order to examine the quality of the flow fields, we evaluate SC segmentations using these affinities. The average statistics are listed in Table $\ref{tab:hs_vs_ldof}$.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{LDOF vs. HS on Bonn Datasets} \\ \hline
Method & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
LDOF PD SC & 42.98\% & 55.06\% & 48.28\% \\ \hline
HS PD SC & 40.19\% & 52.33\% & 45.47\% \\ \hline
LDOF PED SC & 69.34\% & 54.13\% & 60.80\% \\ \hline
HS PED SC & 55.61\% & 53.15\% & 54.35\% \\ \hline
\end{tabular}
\caption[LDOF vs. HS Flow Fields]{Comparison of Bonn dataset segmentations produced by running PD SC and PED SC on LDOF and HS flow fields.
Methods using LDOF flow fields seem to produce better segmentations than those using HS flow fields.}
\label{tab:hs_vs_ldof}
\end{table}
Apparently, pipeline combinations that use LDOF flow fields produce qualitatively better segmentations than those which use HS flow fields. Therefore, we decide to drop HS flow fields and only use LDOF flow fields instead.
\subsubsection{On LRGBD Flow Fields}
In the following we want to examine the quality of segmentations that use LRGBD, LDOF and SRSF flow fields on the Statue dataset. For this purpose we vary the used flow method and fix the pipeline mode to PED MC. Hence, we run the following three combinations: LDOF PED MC, SRSF PED MC and LRGBD PED MC. \\ \\ The achieved performances are listed in Table $\ref{tab:statue_performance}$ and visual segmentations are depicted in Figure $\ref{fig:alley_segmentations}$.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Performance Statue dataset} \\ \hline
Method & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
LDOF PED MC & 69.93\% & 38.23\% & 49.44\% \\ \hline
SRSF PED MC & 72.47\% & 70.68\% & 71.56\% \\ \hline
LRGBD PED MC & 61.46\% & 8.37\% & 14.73\% \\ \hline
\end{tabular}
\caption[Performance Statue]{Comparing the segmentation quality produced by LDOF, SRSF and LRGBD flow fields.}
\label{tab:statue_performance}
\end{table}
We observe that segmentations based on SRSF flow fields score well, and segmentations using LDOF fields achieve intermediate quality. However, pipeline combinations based on LRGBD flow fields produce visually as well as statistically very poor segmentations. Apparently, this flow method is a bad choice for the task of motion segmentation. Therefore, we drop it from our pipeline.
\begin{figure}[H]
\begin{center}
\subfigure[LDOF PED MC]{ \includegraphics[width=0.31\linewidth] {evaluation/statue/segmentations/f30/ldof_ped_mc} \label{fig:alley_segmentations_a} }
\subfigure[SRSF PED MC]{ \includegraphics[width=0.31\linewidth] {evaluation/statue/segmentations/f30/srsf_ped_mc} \label{fig:alley_segmentations_b} }
\subfigure[LRGBD PED MC]{ \includegraphics[width=0.31\linewidth] {evaluation/statue/segmentations/f30/lrgbd_ped_mc} \label{fig:alley_segmentations_c} }
~
\subfigure[LDOF PED MC]{ \includegraphics[width=0.31\linewidth] {evaluation/statue/segmentations/f60/ldof_ped_mc} \label{fig:alley_segmentations_d} }
\subfigure[SRSF PED MC]{ \includegraphics[width=0.31\linewidth] {evaluation/statue/segmentations/f60/srsf_ped_mc} \label{fig:alley_segmentations_e} }
\subfigure[LRGBD PED MC]{ \includegraphics[width=0.31\linewidth] {evaluation/statue/segmentations/f60/lrgbd_ped_mc} \label{fig:alley_segmentations_f} }
\end{center}
\caption[Statue Segmentations]{Visualization of the segmentations that belong to the results listed in Table $\ref{tab:statue_performance}$.}
\label{fig:alley_segmentations}
\end{figure}
Although the LRGBD flow fields are supposed to be competitive with SRSF flow fields, using them in segmentation tasks yields qualitatively poor segmentations. This is due to the fact that LRGBD flow fields are estimated by segmenting the flow fields with respect to their depth layers instead of according to their motion. \\ \\ In order to verify that \enquote{LRGBD flow fields are indeed not suitable for our motion segmentation pipeline}, we performed a second experiment: this time we evaluated segmentations on the Bonn Watercan dataset. We used all existing flow methods and produced segmentations using PED affinities.
The results are listed in Table~\ref{tab:bonn_wc_flwo_methods}.

\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{4}{|c|}{Flow Comparison on Bonn Watercan Dataset} \\ \hline
\textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
HS PED SC & 71.59\% & 71.96\% & 71.77\% \\ \hline
HS PED MC & 71.84\% & 72.59\% & 72.21\% \\ \hline
LDOF PED SC & 94.30\% & 58.20\% & 71.98\% \\ \hline
LDOF PED MC & 67.50\% & 81.06\% & 73.66\% \\ \hline
SRSF PED SC & 90.00\% & 96.25\% & 93.02\% \\ \hline
SRSF PED MC & 94.23\% & 95.04\% & 94.63\% \\ \hline
LRGBD PED SC & 30.00\% & 16.68\% & 21.43\% \\ \hline
LRGBD PED MC & 31.23\% & 16.32\% & 21.44\% \\ \hline
\end{tabular}
\caption[Flow Method Comparison on Bonn Watercan]{Comparing the quality of our flow methods on the Bonn Watercan dataset.}
\label{tab:bonn_wc_flwo_methods}
\end{table}

Again, we observe that LRGBD flow fields produce poor segmentations. So, what is going on there? To get a better understanding of the underlying problem, we have a look at the actual segmentations produced by the combination LRGBD PED MC. The corresponding segmentation of frame 30 is visualized in Figure~\ref{fig:issues_lrgbd_flow_methods}.

\begin{figure}[H]
\begin{center}
\subfigure[Extracted Layers Frame 30]{
\includegraphics[width=0.31\linewidth]
{evaluation/lrgbd_issues/layers_30}
\label{fig:issues_lrgbd_flows_a}
}
\subfigure[Forward Flow Frame 30]{
\includegraphics[width=0.31\linewidth]
{evaluation/lrgbd_issues/fw_flow_30}
\label{fig:issues_lrgbd_flows_b}
}
\subfigure[Segmentation PED MC Frame 30]{
\includegraphics[width=0.31\linewidth]
{evaluation/lrgbd_issues/seg_30}
\label{fig:issues_lrgbd_flows_c}
}
\end{center}
\caption[Issue with LRGBD Flow Fields]{Comparing an LRGBD-based segmentation against its flow field.}
\label{fig:issues_lrgbd_flow_methods}
\end{figure}

We notice that the LRGBD flow fields are locally homogeneous within the depth layer they belong to. Moreover, LRGBD segmentations separate the objects according to their depth layers rather than by their motion.

\subsection{Overall Performance}
\label{sec:overall_performance}
So far, we have justified our default parameter choices and were able to drop two poorly behaving flow methods (HS and LRGBD). In this experiment, we want to evaluate the segmentation quality of all LDOF and SRSF pipeline combinations applied to five datasets (Cerealbox, Chairs, Watercan, Waving-Arm and Statue). Unfortunately, we cannot use every dataset in this experiment because not every pipeline method can be run on every available dataset. For instance, SRSF flow fields can only be generated for images with a resolution of $640 \times 480$ pixels. Furthermore, we only use datasets that have measured depth fields.\\ \\
In particular we are interested in comparing the modes LDOF PD SC and SRSF PED MC, since the first combination resembles the implementation described in~\cite{OB14b}\footnote{Both implementations use LDOF flow fields, a similar affinity measure and a similar spectral clustering technique.} and the second is supposed to be our best P-affinity variant. Moreover, we used the GraphCut implementation of~\cite{KB15b}\footnote{This method does not use depth data.} and let it compete against our pipeline. The averaged F1 scores of this experiment are visualized in Figure~\ref{fig:overall_performance_bar_chart}. The corresponding measurements are listed in Table~\ref{tab:overall_performance}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.7\linewidth]
{evaluation/overall/all}
\end{center}
\caption[Overall Performance Bar Chart]{Evaluation of LDOF and SRSF flow fields using all pipeline combinations on all compatible datasets.}
\label{fig:overall_performance_bar_chart}
\end{figure}

We observe the following facts:
\begin{itemize}
\item Combinations that use SRSF flow fields achieve better results than those that use LDOF flow fields.
\item Methods that make use of depth cues generally perform better than two-dimensional approaches.
\item Using the Minimum Cut (MC) segmentation technique produces better results than spectral clustering (SC).
\end{itemize}
Overall, the Kernighan-Lin (KL) heuristic yields the best segmentations among all segmentation methods. Brox's GraphCut, in contrast, does not produce very good segmentation results. To be fair, however, we ran it with its standard parameters, whereas our pipeline used optimal parameters (determined for the used datasets). \\ \\
Additionally, we evaluated all LDOF P-affinity pipeline combinations on every dataset. The results are listed in Table~\ref{tab:overall_performance_ldof}.

\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{LDOF on all datasets} \\ \hline
Method & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
LDOF PD SC & 63.52\% & 41.87\% & 50.47\% \\ \hline
LDOF PD MC & 58.69\% & 57.86\% & 58.27\% \\ \hline
LDOF PED SC & 66.94\% & 57.91\% & 62.10\% \\ \hline
LDOF PED MC & 64.11\% & 67.27\% & 65.65\% \\ \hline
\end{tabular}
\caption[Overall Performance LDOF P-Affinities]{Evaluation of all LDOF P-affinity pipeline combinations on every dataset.}
\label{tab:overall_performance_ldof}
\end{table}

From these measurements we observe again that incorporating depth (via running PED) into the pipeline yields significantly better results than not doing so. Moreover, MC achieves higher F1 scores than SC.

\subsection{Pipeline Combination Comparisons}
\label{sec:pipeline_combination_cmp}
Our pipeline allows us to alternate between many different flow estimation, affinity matrix computation and segmentation methods. In this section we want to determine an optimal pipeline combination by comparing the methods of each stage and choosing the best one.

\subsubsection{LDOF vs. SRSF}
In this section we want to illustrate the dominance of SRSF flow fields. For this purpose we evaluated the segmentations of the Bonn datasets. In this experiment we strictly use the MC segmentation method and generate segmentations for all P-affinity measures. The results are listed in Table~\ref{tab:bonn_datasets_ldof_vs_srsf}.

\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{4}{|c|}{Comparison of flow fields on the Bonn datasets using MC} \\ \hline
\textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
LDOF PD & 35.76\% & 46.90\% & 40.58\% \\ \hline
SRSF PD & 48.32\% & 48.99\% & 48.65\% \\ \hline
LDOF PED & 44.22\% & 50.92\% & 47.34\% \\ \hline
SRSF PED & 81.88\% & 59.32\% & 68.80\% \\ \hline
\end{tabular}
\caption[LDOF vs. SRSF: Bonn Datasets]{Comparing segmentations on LDOF and SRSF flow fields using MC.}
\label{tab:bonn_datasets_ldof_vs_srsf}
\end{table}

We observe that segmentations based on SRSF flow fields always obtain higher F1 scores. Therefore, using SRSF flow fields seems to be beneficial.

\subsubsection{2D vs. 3D}
In this section we want to illustrate the dominance of affinities that incorporate depth measurements.
For this purpose we evaluated the segmentations of the Bonn Chairs dataset. In this experiment we strictly use LDOF flow fields and generate segmentations for every possible pipeline mode. The results are listed in Table~\ref{tab:bonn_datasets_2d_vs_3d}.

\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{4}{|c|}{Comparison on Bonn Chairs using LDOF} \\ \hline
\textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
LDOF PD SC & 47.48\% & 54.88\% & 50.91\% \\ \hline
LDOF PED SC & 56.86\% & 55.35\% & 55.35\% \\ \hline
LDOF PD MC & 46.46\% & 63.10\% & 53.51\% \\ \hline
LDOF PED MC & 56.68\% & 64.56\% & 60.36\% \\ \hline
LDOF SD KL & 51.34\% & 63.44\% & 56.75\% \\ \hline
LDOF SED KL & 68.42\% & 69.63\% & 69.01\% \\ \hline
\end{tabular}
\caption[2D vs. 3D: Bonn Datasets]{Comparing affinities computed from 3D trajectories against those computed from 2D trajectories.}
\label{tab:bonn_datasets_2d_vs_3d}
\end{table}

We observe that, regardless of the choice of segmentation method, affinities produced using depths achieve higher F1 scores than those produced without. Therefore, using depths seems to be beneficial.

\subsubsection{SC vs. MC}
In this experiment we compare our two P-affinity segmentation methods, SC and MC. For this purpose we strictly used LDOF flow fields and evaluated the generated segmentations of the following combinations: PD SC, PD MC, PED SC and PED MC. The corresponding results are listed in Table~\ref{tab:one_chair_sc_vs_mc}.

\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{4}{|c|}{Comparison on the One Chair dataset using LDOF} \\ \hline
\textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F1 Score} \\ \hline
LDOF PD SC & 54.23\% & 47.82\% & 50.82\% \\ \hline
LDOF PD MC & 55.64\% & 49.50\% & 52.39\% \\ \hline
LDOF PED SC & 56.14\% & 53.75\% & 54.92\% \\ \hline
LDOF PED MC & 62.23\% & 58.07\% & 60.08\% \\ \hline
\end{tabular}
\caption[SC vs. MC]{Comparing the segmentation methods SC and MC.}
\label{tab:one_chair_sc_vs_mc}
\end{table}

We observe that segmentations produced by MC always obtain higher F1 scores than those produced by SC. It seems beneficial to use the MC segmentation method when using P-affinities.
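For orientation, the SC stage used in these comparisons clusters trajectories from a precomputed affinity matrix, using the default choice of 6 clusters and 12 eigenvectors justified earlier. The pipeline's own SC implementation is written in Matlab (see Section~\ref{sec:runtime_measurements}); the following scikit-learn snippet is only a rough, illustrative equivalent on a toy affinity matrix and is not the code used to produce the reported results.
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

# Toy stand-in for the pipeline's trajectory affinity matrix W
# (symmetric, non-negative).
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 2))
                    for c in range(6)])
W = rbf_kernel(points, gamma=2.0)

labels = SpectralClustering(
    n_clusters=6,        # default cluster count used in this evaluation
    n_components=12,     # number of eigenvectors kept for the embedding
    affinity="precomputed",
    assign_labels="kmeans",
    n_init=200,          # k-means repetitions
    random_state=0,
).fit_predict(W)

print(np.bincount(labels))  # cluster sizes
\end{verbatim}
Here \texttt{affinity="precomputed"} indicates that the matrix $W$ is supplied directly rather than computed from raw features, and \texttt{n\_init} mirrors the 200 k-means repetitions quoted for the Matlab implementation in Section~\ref{sec:runtime_measurements}.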
\section{Runtime Measurements}
\label{sec:runtime_measurements}
In this section we present a series of runtime measurements of our pipeline, obtained on the machine described in Table~\ref{tab:used_hardware_specs}. Note that these measurements should by no means be considered statistically rigorous. The sole purpose of this section is to offer the reader some further insight into the pipeline and its usability in terms of \textit{how handy the whole pipeline is to use with respect to its runtime}. \\ \\
We start by measuring the runtimes of the utilized flow methods. For each method we ran three datasets\footnote{The dataset frames used to perform these measurements all have a resolution of $640 \times 480$ pixels.} and measured the total time required to process their input. Each measurement was divided by the total number of used frames. The final timings are the average of these measurements. Table~\ref{flow_method_runtimes} lists the average time in seconds that our flow methods require to process\footnote{Here, the term process refers to generating the forward- and backward flow fields of a frame.} one dataset frame.

\begin{table}[H]
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Flow Method} & \textbf{Time per Frame} \\ \hline
HS & 18s \\ \hline
LDOF & 24s \\ \hline
SRSF & 72s \\ \hline
LRGBD & 674s \\ \hline
\end{tabular}
\caption[Flow Method Runtimes]{Average time our flow methods require to process one dataset frame.}
\label{flow_method_runtimes}
\end{table}

Next, let us discuss the timings of the affinity matrix generation. Figure~\ref{fig:runtime_tra_track_affinity_gen} shows the timings in seconds of 261 measurements (blue dots) for the task of tracking a varying number of trajectories and generating their corresponding affinity matrix. Moreover, we fit a quadratic polynomial (red curve) to our measurements, since the runtime complexity of this pipeline stage is expected to be in $\mathcal{O}(n^2)$, where $n$ denotes the trajectory count.

\begin{figure}[H]
\begin{center}
\includegraphics[width=0.8\linewidth]
{evaluation/runtimes/affinity}
\end{center}
\caption[Runtime Trajectory Tracking and Generating Affinity Matrix]{Runtime (in seconds) of the trajectory tracking and affinity matrix generation stage plotted against the utilized trajectory count. The measurements are visualized as blue dots. A fitted quadratic curve is shown in red. The runtime of the evaluated stage is expected to exhibit quadratic complexity.}
\label{fig:runtime_tra_track_affinity_gen}
\end{figure}

Unfortunately, we have not performed detailed measurements for the \textit{P-affinity} segmentation methods. In our pipeline, we implemented the Spectral Clustering (SC) and Minimum Cut (MC) segmentation methods in Matlab. Generally, loading an affinity matrix that represents the similarities of 1000 trajectories takes about 5 seconds, one for 3000 trajectories approximately 20 seconds, and one for 6000 trajectories about 120 seconds. For solving the eigenvalue decomposition in our Matlab implementations, we rely on the fast numerical approximation provided by \textit{eigs}. The final k-means run in SC takes about 30 seconds when applied to 6000 trajectories and running 200 repetitions. \\ \\
The outermost loop in the MC implementation runs a k-means step and a graph-cut step. The k-means step takes about the same time as it does in SC. Again, when using about 6000 trajectories and the default neighborhood assignment discussed in Section~\ref{sec:spectral_clustering_parameters}, calling graph-cut in this loop takes about 3 minutes. \\ \\
We also measured the timings of several Kernighan-Lin runs. For a fixed number of neighbors, the overall runtime of the KL algorithm depends on the number of trajectories and the number of iterations. The number of iterations is determined by the number of clusters CC we want to solve for and is defined as
\begin{equation}
\text{Iters} = \sum_{k=1}^{\text{CC}-1} k,
\end{equation}
which corresponds to the number of all possible distinct cluster pairs. The resulting measurement graph is shown in Figure~\ref{fig:runtime_kl_graph_part}.

\begin{figure}[H]
\begin{center}
\includegraphics[width=0.8\linewidth]
{evaluation/runtimes/kl}
\end{center}
\caption[Runtime KL Graph Partitioning]{Visualization of the KL runtimes in seconds plotted against the product of the number of used trajectories and the number of iterations. As we can see, KL takes a lot of time when solving for many clusters and many trajectories at the same time.}
\label{fig:runtime_kl_graph_part}
\end{figure}

As we can see, running KL is very time consuming. For instance, when we use a large number of clusters, say 10, and approximately 5000 trajectories, the algorithm takes about 7000 seconds to finish.
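In closed form, this iteration count is the triangular number
\[
\text{Iters} = \sum_{k=1}^{\text{CC}-1} k = \frac{\text{CC}\,(\text{CC}-1)}{2},
\]
so the default choice of 6 clusters requires 15 iterations, whereas 10 clusters already require 45. Combined with approximately 5000 trajectories, the latter corresponds to an abscissa value of about $45 \cdot 5000 = 2.25 \times 10^{5}$ in Figure~\ref{fig:runtime_kl_graph_part}, which is the region where the measured runtimes reach the quoted $\sim$7000 seconds.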
\section{Discussion}
\label{sec:discussion}
In the previous sections we performed a series of experiments on different datasets to examine the influence of various pipeline components and their corresponding parameters. In this section we list the main findings of those experiments and relate them to our initial thesis goals. \\ \\
We visually and statistically demonstrated that our pipeline is able to produce good segmentation results on complex scenes. In particular, our best pipeline combination could handle scenes consisting of moving cameras and rotating, moving objects. Therefore, we achieved our initial goal of building a motion segmentation pipeline for RGB-D videos using optical flow fields. \\ \\
Our motion segmentation pipeline is an over-parameterized multi-stage implementation, and finding the correct parameter values is usually not obvious. We therefore suggested reliable heuristics to specify some of the parameters, and in our experiments we were able to justify our default parameter choices. \\ \\
We further determined the best pipeline combinations. We experimentally demonstrated that SRSF flow fields outperform every other flow method supported by our pipeline. Moreover, we could show that using depth information drastically increases the quality of the resulting motion segmentations. Among our segmentation methods, KL produces the best results. This method, however, uses S-affinities. In contrast, when using P-affinities, MC seems to be the best choice. When comparing KL against MC we can conclude that KL produces better results but at the same time requires an enormous amount of computation time. In contrast, MC runs in a decent time and still produces good and convincing results. Combining all these findings, we nominate the following two winning combinations: SRSF PED MC and SRSF SED KL. \\ \\
Additionally, we let our pipeline compete against Brox's GraphCut~\cite{KB15b}, which corresponds to the pipeline combination \textit{LDOF SD KL}. Interestingly, every pipeline combination obtains better results than GraphCut, even when not using depth data. However, to be fair, we have to mention that we did not specify any dedicated GraphCut parameters, and hence this implementation was run with its standard setup. In contrast, we used optimal parameters for our implementations during the experiments. \\ \\
Additional experiments and corresponding results can be found in the appendix in Chapter~\ref{chap:additional_exp}.
\section{Exercise 2}
\label{sec:ex2}

We can now take a first look at Metasploit's exploit modules. Let us start from scratch, pretending Exercise 1 was never done and that a new scan is required.

\subsection{Setting up the database}
\label{subsec:ex2:setting-up-db}

In order to look at the services offered by the target machine - which will again be Metasploitable - we are now going to use \texttt{Nmap} instead of the auxiliary modules. To get started with Metasploit's integration with Nmap, we need to verify that the database is alive and working. For the \texttt{ova} installation, this should be almost guaranteed. For the installation from scratch, please double check before proceeding. Figure \ref{fig:ex2:db_status} shows the correct output of \hltexttt{db\_status}.

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.5\textwidth]{../drawable/exercise_2_screenshots/db_status.png}
    \caption{Checking the DB setup is alive and working}
    \label{fig:ex2:db_status}
\end{figure}

The database is divided into \textit{workspaces}, which allow the stored data to be segmented. With the \hltexttt{\mbox{workspace}} command, workspaces can be created, switched, and destroyed. In this lab, we will just use the default workspace. By using commands such as \hltexttt{hosts} and \hltexttt{\mbox{services}}, we realise that having had the database on all the time has already filled it up with information from previous scans (Figure \ref{fig:ex2:services_hosts}). If needed, this data can be wiped with \hltexttt{\mbox{hosts -d}} and \hltexttt{\mbox{services -d}}. Additionally, we can stop the previous connections with \hltexttt{\mbox{sessions -K}}.

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.6\textwidth]{../drawable/exercise_2_screenshots/services_before.png}
    \includegraphics[width=0.6\textwidth]{../drawable/exercise_2_screenshots/hosts_empty.png}
    \caption{\hltexttt{\mbox{services}} and \hltexttt{hosts}}
    \label{fig:ex2:services_hosts}
\end{figure}

\subsection{Scanning again}
\label{subsec:ex2:scanning-again}

We are now ready to scan. We begin by using \hltexttt{\mbox{db\_nmap}}, whose syntax is equivalent to that of \texttt{nmap}. In this case, we're going to inspect the 1000 most popular ports of the Metasploitable machine (the list being provided by the tool) with a \texttt{TCP} scan. Additionally, we supply the \texttt{-O} flag, which instructs \texttt{db\_nmap} to perform OS fingerprinting.

\begin{lstlisting}
db_nmap --top-ports 1000 192.168.5.2-3 -O
\end{lstlisting}

Other types of scan are outside the scope of this laboratory and can be found on the official \texttt{Nmap} website\footnote{https://nmap.org/docs.html}. Figure \ref{fig:ex2:db_nmap} shows the output of the command.

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.8\textwidth]{../drawable/exercise_2_screenshots/db_nmap_v2.png}
    \caption{Scanning the potential victims}
    \label{fig:ex2:db_nmap}
\end{figure}

Now that the database is populated, we can check \hltexttt{\mbox{services}} again. We will notice that the table has been populated with the newly found services from our target. Figure \ref{fig:ex2:services_full_cropped} shows the output of the command.
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.7\textwidth]{../drawable/exercise_2_screenshots/services_full_v2.png}
    \caption{Checking the data that we gathered}
    \label{fig:ex2:services_full_cropped}
\end{figure}

From the previous exercise - and as \hltexttt{\mbox{services}} reminds us - the \texttt{HTTP} server on the Metasploitable machine is old and runs a vulnerable version of \texttt{PHP}. However, this time we're not going to let this slip through unnoticed.

\subsection{Taking the knives out}
\label{subsec:ex2:taking-knives-out}

We're now going to exploit the fact that this \texttt{PHP} version is riddled with vulnerabilities\footnote{PHP doesn't exactly have a great record from a security standpoint: see\\\texttt{https://www.cvedetails.com/product/128/PHP-PHP.html?vendor\_id=74}}. At this point, we can go to the next phase and find a suitable exploit and payload for our vulnerability. A quick trip to a CVE database can yield tremendous results. For this laboratory, we decided to settle on \texttt{CVE-2012-1823}\footnote{\texttt{https://www.cvedetails.com/cve/CVE-2012-1823/}}. The vulnerability description states the following:

\medskip
\begin{center}
\noindent\fbox{%
    \parbox{0.9\textwidth}{%
        \texttt{sapi/cgi/cgi\_main.c} in PHP before 5.3.12 and 5.4.x before 5.4.2, when configured as a CGI script (aka php\-cgi), does not properly handle query strings that lack an = (equals sign) character, which allows remote attackers to execute arbitrary code by placing command-line options in the query string, related to lack of skipping a certain php\_getopt for the 'd' case.
    }%
}
\end{center}
\medskip

{
\noindent
\begin{minipage}{\linewidth}
The following is a code snippet from \texttt{cgi\_main.c}, showing the code path involved in the vulnerability:
\begin{lstlisting}[showspaces=false,breaklines=true]
// [...] here len is the length of php_optarg
memcpy( cgi_sapi_module.ini_entries + ini_entries_len, php_optarg, len );
// [...]
\end{lstlisting}
\end{minipage}
}

{
\noindent
\begin{minipage}{\linewidth}
The following is a snippet from the corresponding Metasploit exploit module:
\begin{lstlisting}
payload_oneline = "<?php " + payload.encoded.gsub(/\s*#.*$/, "").gsub("\n", "")
response = send_request_cgi( {
    'method' => "POST",
    'global' => true,
    // [...]
}, 0.5)
\end{lstlisting}
\end{minipage}
}

Let us load up the exploit as usual.

\begin{lstlisting}
use exploit/multi/http/php_cgi_arg_injection
\end{lstlisting}

Then, we \hltexttt{set} our \texttt{RHOSTS} and \texttt{LHOST} variables. Figure \ref{fig:ex2:use_exploit_cgi_exception+set_exploit_options} shows these two steps. Notice how the payload now defaults to \texttt{php/meterpreter/reverse\_tcp}.
\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.8\textwidth]{../drawable/exercise_2_screenshots/use_exploit_cgi_exception.png}
    \includegraphics[width=0.8\textwidth]{../drawable/exercise_2_screenshots/set_exploit_options.png}
    \caption{Using the default payload and setting options}
    \label{fig:ex2:use_exploit_cgi_exception+set_exploit_options}
\end{figure}

As we said before, the payloads in Metasploit are highly configurable, and each one of them may have advantages or disadvantages. For simplicity, we decided to stick to the default one, which provides a \textit{reverse shell} with Meterpreter. Figure \ref{fig:ex2:shell_opened} shows what opening the shell actually looks like.

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.7\textwidth]{../drawable/exercise_2_screenshots/shell_opened.png}
    \caption{Opening a shell with Meterpreter}
    \label{fig:ex2:shell_opened}
\end{figure}

To end the exercise, we provide a little insight into what the payload actually did. Once the exploit has been run on the target, we theoretically have the capability to \textit{execute arbitrary code by placing command-line options in the query string}, as the CVE definition says. This is the perfect moment for executing the payload's arbitrary code, and this is where Meterpreter comes into play. Meterpreter is a dynamically extensible payload that uses in-memory DLL injection stagers and is extended over the network at runtime. It communicates over the stager socket and provides a comprehensive client-side Ruby API. It features command history, tab completion, channels, and more~\cite{online:meterpreter}. In this case, Meterpreter was installed using a reverse shell: a shell connection that is initiated - unknowingly - by the victim rather than by the attacker. Figure \ref{fig:ex2:schema_reverse_shell} shows a diagram representing the logical idea behind reverse shells.

\begin{figure}[htbp]
    \centering
    \includegraphics[width=0.7\textwidth]{../drawable/decorations/schema_reverse_shell.png}
    \caption{Shell and reverse shell connections}
    \label{fig:ex2:schema_reverse_shell}
\end{figure}

The exercise has finished: in the final one, we're going even deeper into Metasploit, spicing things up with the usage of PDF files.

\clearpage
\documentclass[letterpaper]{ltxdoc}
\usepackage[hmargin={3.5 cm,1.5cm}, top=1.5cm, marginpar=3.5cm ]{geometry}
\usepackage{graphicx}
\usepackage{hyperref}
\hypersetup{colorlinks, linkcolor=blue, urlcolor=blue}
\usepackage{multicol}
\usepackage{lipsum}
\begin{document}
\title{\textsf{Device Notes}}
\author{Polina Lemenkova \\Skunkworks Division}
\maketitle
\begin{abstract}
\lipsum[3]
\end{abstract}
\tableofcontents
\addtocontents{toc}{%
\protect\setlength{\columnsep}{5pc}
\protect\begin{multicols}{2}}
\section{The Device}
\lipsum[30]
\section{How to Install}
\lipsum[33]
\section{How to Use}
\lipsum[37]
\section{Other Applications}
\lipsum[41]
\addtocontents{toc}{\protect\end{multicols}}
\end{document}
\chapter{{\bf Event selection}}
\label{selection_chapter}

This chapter describes the criteria used to select and identify $B$-meson decays needed for two analyses: the measurement of the \bdmumu and \bsmumu \BFs; and the measurement of the \bsmumu \el. In order to measure properties of \bmumu decays from data collected by the LHCb experiment, \bmumu decays must be separated from backgrounds in the data. The main sources of backgrounds are described in Section~\ref{sec:backgroundoutline}.

The development of the selection criteria and analysis strategies relies on information from simulated particle decays, which are documented in Section~\ref{sec:MCsamples}. The selection criteria used to identify \bmumu decays and the decays used as normalisation channels for the \BF measurement are described in Section~\ref{sec:BFsel}. There are four main steps in the selection process. First, requirements are applied to the output of the trigger and then the data is refined by cut-based selection criteria. The last two steps in the selection process use particle identification variables and multivariate classifiers to separate signal and background decays. The selection of decays for the \el measurement differs from that used for the \BF measurement due to the different analysis strategies described in Chapters~\ref{sec:BFanalysis} and~\ref{sec:lifetimemeasurement}. The criteria used to identify decays needed for the \BF measurements are adapted for the \el measurement, and the changes made to each step of the \BF selection process are documented in Section~\ref{sec:ELsel}.

During the development of the selection criteria, \bmumu candidates in data that have an invariant mass of the two muons within a specified window around the \bs or \bd meson masses are not revealed. This is done to avoid introducing biases into the selection procedure based on statistical fluctuations in the data. An analysis performed in this way is known as a `blind analysis'. The mass windows are defined as 5287--5447~\mevcc for the \bs and 5200--5360~\mevcc for the \bd.
\section{Background sources}
\label{sec:backgroundoutline}
The reconstruction of the data collected by the LHCb experiment, described in Section~\ref{SoftwareSimulation}, produces numerous \bmumu candidates from pairs of muons in the detector. Some candidates will have come from real \bmumu decays, but there are other processes occurring during $pp$ collisions that leave a signature in the detector which can be reconstructed incorrectly as a \bmumu decay. The selection aims to separate real \bmumu decays from these backgrounds to produce a set of \bmumu candidates with a high signal purity.

The main background sources are:
\begin{itemize}
\item Elastic collisions of protons that produce a pair of muons via the exchange of a photon, $pp \to p \mu^{+} \mu^{-} p$. The protons travel down the beam pipe and are undetected, leaving the muons to be reconstructed as \bmumu. Typically the muons produced in this way have low transverse momentum;
\item Inelastic proton collisions that create two muons at the primary vertex. The muons form a good vertex and can be combined to form a \bsd candidate that appears to decay instantaneously. This type of background is known as prompt combinatorial background;
\item $B_{s}^{0}\to\mu^{+}\mu^{-}\gamma$ decays where the photon is not reconstructed. The presence of the photon in the decay means that $B_{s}^{0}\to\mu^{+}\mu^{-}\gamma$ decays are not helicity suppressed and could therefore be a sizable background. However, the photon carries a large transverse momentum, resulting in the reconstructed \bsd mass being much lower than the expected \bs mass. The \BF of $B_{s}^{0}\to\mu^{+}\mu^{-}\gamma$ varies with the energy of the photon and is approximately an order of magnitude higher than the \bsmumu \BF~\cite{Bobeth:2013uxa,Melikhov:2004mk,Aditya:2012im};
\item Random combinations of muons produced by separate semi-leptonic decays. The \bmumu candidates formed in this way are called long-lived combinatorial background because the reconstructed \bsd will have a significantly longer apparent lifetime than the \bsd candidate of prompt combinatorial background;
\item Semi-leptonic decays where one of the decay products is mis-identified as a muon and/or is not detected. The resulting mass of the \bsd candidate is lower than expected due to the missing particle information.
The semi-leptonic decays that contribute to \bmumu backgrounds in this way are \bdpimunu, \bsKmunu, \lambdab, \bupimumu, \bdpimumu and \bcjpsimunu where \jpsimumu; and %The mass distribution of these backgrounds are illustrates in Figure~\ref{fig:LHCbCMS} as the semi-leptonic decays. \item \bhh decays, where $ h^{(')} = K, \pi$, when both hadrons are mis-identified as muons. This usually occurs when the hadrons decay whilst travelling through the detector. Similar to mis-identified semi-leptonic decays, the reconstructed \bsd candidate mass is lower than expected. %The mass distribution of these backgrounds are illustrates in Figure~\ref{fig:LHCbCMS} as peaking backgrounds. %\item \bdmumu decays that are identical to \bsmumu decays apart from the difference in the $B$ meson masses. The \bd decay is irrelevant for the measurement of the \bsmumu effective lifetime and is therefore a background for this measurement. \end{itemize} %The selection aims to separate real \bsmumu decays from the background to produce a set of \bsmumu candidates with a high signal purity from which the \bs effective lifetime can be measured. The \BFs of the backgrounds from mis-identified decays are shown in Table~\ref{tab:backgroundBFs}. The separation of \bdmumu and \bsmumu decays from the backgrounds is challenging because these decays are much less abundant than the backgrounds. Therefore reconstructed candidates are predominantly made from background decays. \begin{table}[tbp] \begin{center} \begin{tabular}{lr} \toprule \toprule Decay & Branching fraction \\ \midrule $B_{s}^{0}\to\mu^{+}\mu^{-}\gamma$ & $\sim 10^{-8}$ \\ \bskk & $(2.52 \pm 0.17) \times 10^{-5}$\\%~\cite{Olive:2016xmw}\\ \bskpi & $(5.6 \pm 0.6) \times 10^{-6}$\\%\cite{Olive:2016xmw} \\ \bdkpi & $(1.96 \pm 0.05)\times 10^{-5}$\\%~\cite{Olive:2016xmw}\\ \bdpipi & $(5.12 \pm 0.19) \times 10^{-6}$\\%~\cite{Olive:2016xmw} \\ \bdpimunu& $(1.45 \pm 0.05) \times 10^{-4}$\\%~\cite{Olive:2016xmw} \\ \bsKmunu& $(1.42 \pm 0.35) \times 10^{-4}$\\%~\cite{Olive:2016xmw, Bouchard:2014ypa,PhysRevD.91.074510} \\ % not measured a prediction using form factors from QCD (2 ana refs) and PDG V_ub \lambdab& $(4.1 \pm 1.0) \times 10^{-4}$\\%~\cite{Aaij:2015bfa} \\ \bupimumu& $(1.83 \pm 0.25) \times 10^{-8}$\\%~\cite{Aaij:2015nea} \\ %LHCb measurement \bdpimumu& $(8.6 \pm 3.6) \times 10^{-9}$\\%~\cite{Aaij:2015nea,Wang:2012ab} \\ % not measured uses theory perdiction for ratio with bupimumu \bcjpsimunu & $(9.5 \pm 0.2) \times 10^{-6}$\\%~\cite{Aaij:2012dd,Aaij:2014jxa}\\ % not measured use LHCb ration measurements \bottomrule \bottomrule \end{tabular} \vspace{0.7cm} \caption{Branching fractions for background decays. The estimate of the $B_{s}^{0}\to\mu^{+}\mu^{-}\gamma$ \BF comes from reference~\cite{Melikhov:2004mk}. The measured values of the \bhh, \bdpimunu \lambdab and \bupimumu \BFs are taken from references~\cite{Olive:2016xmw,Aaij:2015bfa, Aaij:2015nea}. The theoretical prediction for \bsKmunu \BF combines information from references~\cite{Bouchard:2014ypa,PhysRevD.91.074510}, the \bcjpsimunu \BF is estimated from references~\cite{Aaij:2012dd,Aaij:2014jxa} and the \bdpimumu \BF is evaluated from references~\cite{Aaij:2015nea,Wang:2012ab}. 
}
\label{tab:backgroundBFs}
\end{center}
\vspace{-1.0cm}
\end{table}

\section{Simulated particle decays}
\label{sec:MCsamples}

Simulated particle decays, as described in Section~\ref{SoftwareSimulation}, are used to develop the selection and analysis of \bmumu decays. Large samples of simulated decays are needed to separate signal from background decays and to evaluate the efficiency of the selection criteria for identifying different particle decays. The simulated decays used for the studies performed for this dissertation are listed in Table~\ref{tab:MC_decays}, alongside the data-taking conditions and simulation versions used to generate the decays.
\begin{table}[tbp] \begin{center} \begin{tabular}{p{0.19 \textwidth}p{0.32 \textwidth}p{0.14 \textwidth}p{0.12 \textwidth}p{0.10 \textwidth}} \toprule \toprule Decay & Generator level cuts & Data taking & Simulation & Events \\ & & conditions & version & ($\times 10^6$) \\\midrule \multicolumn{5}{c}{{\it Cut-based selection studies}} \\ \midrule \bsmumu& &2012& sim06b & 2.0 \\ \bdmumu& &2012& sim06b & 2.0 \\ \bdkpi& &2012& sim06b & 1.0 \\ \bujpsik& &2012& sim06b & 1.0\\ \midrule \multicolumn{5}{c}{{\it Multivariate classifier training}} \\ \midrule \bbbarmumux &$p>$~3~\gevc & 2012 & sim06b & 8.0\\ & 4.7~$< m_{\mu^{+} \mu^{-}} <$~6.0~\gevcc & & & \\ & DOCA~$<$~0.4mm & & & \\ & 1~$<$~PtProd~$<$~16~GeV/$c^{4}$ & & & \\ % & 2012 & sim06b & 8 \\ \bbbarmumux & $p>$~3~\gevc &2012 & sim06b& 6.6 \\ & 4.7~$< m_{\mu^{+} \mu^{-}} <$~6.0~\gevcc & & & \\ & DOCA~$<$~0.4mm & & & \\ & PtProd~$>$~16~GeV/$c^{4}$ & & & \\ % & 2012 & sim06b & 7 \\ \bsmumu & & 2012 & sim06b & 2.0 \\ \midrule \multicolumn{5}{c}{{\it Analysis method development}} \\ \midrule \bsmumu& &2011 & sim08a &0.6 \\ & & 2012 & sim08i & 2.1 \\ & & 2015& sim09a & 2.1 \\ & & 2016& sim09a & 1.1 \\ %Is this correct? I thought in the ntuples we have a lot more 2015 than 2016 MC? \bdkpi& &2011& sim08b & 0.8 \\ %11102003 & & 2012& sim08g & 8.6 \\ & & 2015& sim09a & 4.6 \\ & & 2016& sim09a & 8.1 \\ \bskk & & 2012& sim08g & 7.1 \\ %13102002, 2016 is sim09a 4.1 M per pol, 2011 is sim-8b and 0.8 M per pol & & 2015& sim09a & 4.0 \\ \bottomrule \bottomrule \end{tabular} \vspace{0.7cm} \caption{Simulated samples used for developing the selection and analysis of \bmumu decays listed according to the study the decays are used in. Cuts are applied to \bbbarmumux to the magnitude of the muon momenta ($p$), invariant mass of the two muons ($m_{\mu^+ \mu-}$), the distance of closest approach of the tracks of the muons to each other (DOCA) and the product of the transverse momenta of the muons (PtProd).} \label{tab:MC_decays} \end{center} \vspace{-1.0cm} \end{table}%%\begin{tabular}{p{6cm}p{2.5cm}p{2cm}p{3cm}} %\begin{tabular}{p{0.45 \textwidth}p{0.15 \textwidth}p{0.15 \textwidth}p{0.15 \textwidth}} %\hline %Decay & Data taking conditions & Simulation version & Generated events \\ \hline %\multicolumn{4}{c}{{\it Stripping selection studies selection}} \\ \hline %\bsmumu& 2012& sim06b & 2$\times 10^6$ \\ %\bdmumu& 2012& sim06b & 2$\times 10^6$ \\ %\bdkpi& 2012& sim06b & 1 $\times 10^6$ \\ %\bujpsik& 2012& sim06b & 1 $\times 10^6$ \\ \hline %\multicolumn{4}{c}{{\it Multivariate classifier training}} \\ \hline %\bbbarmumux, {\footnotesize $p>$~3~\gevc, 4.7~$< M_{\mu^{+} \mu^{-}} <$~6.0~\gevcc, DOCA~$<$~0.4mm, 1~$<$~PtProd~$<$~16~\gevc} % & 2012 & sim06b & 8 $\times 10^6$ \\ %\bbbarmumux, {\footnotesize $p>$~3~\gevc, 4.7~$< M_{\mu^{+} \mu^{-}} <$~6.0~\gevcc, DOCA~$<$~0.4mm, PtProd~$>$~16~\gevc} % & 2012 & sim06b & 7 $\times 10^6$ \\ %\bsmumu & 2012 & sim06b & 2 $\times 10^6$ \\ \hline %\multicolumn{4}{c}{{\it Analysis method development}} \\ \hline %\bsmumu& 2011 & sim08a &6 $\times 10^5$ \\ %& 2012 & sim08i & 2 $\times 10^6$ \\ %& 2015& sim09a & 2 $\times 10^6$ \\ %& 2016& sim09a & 2 $\times 10^6$ \\ %Is this correct? I thought in the ntuples we have a lot more 2015 than 2016 MC? 
There exist multiple versions of the simulation because it is updated as understanding of the detector improves and to incorporate differences in data-taking conditions, such as new trigger lines or changes in the centre-of-mass energy. Similar versions are chosen for the decay samples used within each study listed in Table~\ref{tab:MC_decays}, so that differences between different decays are not masked by variations in the simulation.

Simulated \bbbarmumux decays are used to understand the long-lived combinatorial background. However, producing a large enough sample of these decays to be useful is computationally expensive and produces large output files. Therefore cuts are applied as the decays are generated to reduce the size of the samples and to speed up the simulation process. The cuts, listed in Table~\ref{tab:MC_decays}, are applied to the magnitude of the muon momenta, the reconstructed mass of the muon pair, the product of the transverse momenta of the muons and the distance of closest approach of the tracks of the two muons. In addition, these samples are `stripping filtered', which means that only candidates that pass the \bmumu stripping selection criteria are saved, further reducing the size of the output files. The cuts applied in the stripping selection are given in Table~\ref{tab:PreviousStrippingA}.
Overall, simulated decays accurately model what occurs in data. However, the distributions of particle identification variables and properties of the underlying $pp$ collision, such as the number of tracks in an event, are not well modelled in the simulation. The mis-modelling of particle identification variables is corrected for using the PIDCalib package~\cite{Anderlini:2202412}, and simulated decays are re-weighted using information from data to accurately model the underlying event, as described in Section~\ref{sec:signalDTpdf}.
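The exact re-weighting procedure is described in Section~\ref{sec:signalDTpdf}. Purely as an illustration of the general technique, a histogram-ratio weight in a variable such as the number of tracks in the event could be derived as in the following generic sketch, which is not the analysis code and makes no assumption about the binning actually used:
\begin{verbatim}
import numpy as np

def reweight(n_tracks_mc, n_tracks_data, bins=25):
    """Per-candidate weights from the data/simulation ratio in bins of nTracks."""
    edges = np.histogram_bin_edges(
        np.concatenate([n_tracks_mc, n_tracks_data]), bins=bins)
    h_mc, _ = np.histogram(n_tracks_mc, bins=edges, density=True)
    h_data, _ = np.histogram(n_tracks_data, bins=edges, density=True)
    # Ratio of normalised distributions; empty simulation bins get weight 1.
    ratio = np.divide(h_data, h_mc, out=np.ones_like(h_data), where=h_mc > 0)
    idx = np.clip(np.digitize(n_tracks_mc, edges) - 1, 0, len(ratio) - 1)
    return ratio[idx]
\end{verbatim}
Applying such weights to simulated candidates makes the simulated distribution of the chosen variable match the one observed in data.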
%\begin{table}[tp] %\begin{center} %\begin{tabular}{lll} %\toprule \toprule % Particle & \bmumu & \bhh \\ %\midrule %\bsd & |$M_{B^{0}_{(s)}}$ - $M_{B^{0}_{(s)}}^{\mathrm{PDG}}$| $<$ 1200 \mevcc & |$M_{B}$ - $M_{B}^{\mathrm{PDG}}$| $<$ 500 \mevcc \\ % & DIRA > 0 & DIRA > 0 \\ % & \chiFD $>$225 & \chiFD $>$225 \\ % & \chiIP $<$ 25 & \chiIP $<$ 25 \\ % & \chivtx < 9 & \chivtx < 9 \\ % & DOCA $<$ 0.3 mm & DOCA $<$ 0.3 mm \\ % & & $\tau$ $<$ 13.248 \ps \\ % & & $p_{T}$ $>$ 500 \mevc \\ %\\ %$\mu$ or $h$ &$\chi^{2}_{\mathrm{trk}} < 3$ & $\chi^{2}_{\mathrm{trk}} < 3$ \\ % & isMuon = True & Ghost probability $<$ 0.3 \\ % & Minimum \chiIP $>$ 25 & Minimum \chiIP $>$ 25 \\ % & $p_{T}$ $>$ 0.25 \gevc & 0.25 \gevc $<$ $p_{T}$ $<$ 40 \gevc \\ %% & & Ghost probability $<$ 0.3 \\ %%% %\bottomrule \bottomrule %\end{tabular} %\vspace{0.7cm} %\caption{Selection requirements applied during the stripping selection for Run~1 data used in the \bmumu \BF analysis~\cite{Aaij:2013aka, CMS:2014xfa} to select \bmumu and \bhh decays. $M^{\mathrm{PDG}}$\ % corresponds to the Particle Data Group~\cite{Olive:2016xmw} mass of each particle.} %\label{tab:PreviousStrippingA} %\end{center} %\vspace{-1.0cm} %\end{table} \section[Event selection for the \bmumu branching fraction measurements]{Event selection for the \boldmath{\bmumu} branching fraction measurements} \label{sec:BFsel} As well as identifying \bmumu decays in data, the \BF measurements, described in Chapter~\ref{sec:BFanalysis}, require \bujpsik and \bhh decays as normalisation modes to determine the branching fractions from the observed number of \bmumu decays in data. Furthermore \bsjpsiphi decays are used to verify steps of the measurement process. This section describes the selection criteria used to identify \bmumu, \bhh, \bujpsik and \bsjpsiphi decays in data. The trigger requirements used to identify these decays are given in Section~\ref{sec:BFtrigger}. Section~\ref{sec:cutbasedsel} describes a cut based selection, tailored for each decay mode, that is used to refine the candidates that pass the trigger requirements. Included in this section is an investigation into the selection efficiency of cuts used in this step of the selection process. Up until the cut-based selection the process for selection of \bmumu, \bhh, \bujpsik and \bsjpsiphi decays is similar but the selection of \bmumu decays diverges from the other decays with the requirements placed on particle identification variables described in Section~\ref{sec:BFpid}. The last step in the selection process uses two multivariate classifiers that are described in Section~\ref{sec:MVC} to separate signal and background decays. One classifier is applied to all decays needed for the \BF analysis whereas the other classifier is used only to separate \bmumu decays from backgrounds. Finally, the selection criteria used for the \BF measurements are summarised in Section~\ref{sec:BFsummary}. %The cuts based selection is taylored for each decay mode. %The selection of \bmumu decays occurs in several steps. The first step is choosing the trigger requirements, which is followed by a cut-based selection to remove some background events. Particle identification variables are then used to reduce backgrounds from mis-identified semi-leptonic and \bhh decays. Finally multivariate classifiers are used to reduce the backgrounds to a low enough level so that the \bmumu \BFs can be measured.% from the data. 
\subsection{Trigger requirements}
\label{sec:BFtrigger}
The trigger, described in Section~\ref{Trigger}, selects events that could contain interesting particle decays.
Candidates consistent with different particle decay hypotheses are reconstructed from events that were accepted by the trigger. For each candidate it is useful to know whether it was a component of that candidate that caused the event to be selected by a trigger line, or whether it was another particle in the event. Each trigger line produces decisions that identify this. The possible trigger decisions are:
\begin{itemize}
\item TOS, triggered on signal - a candidate is identified as TOS if information from the candidate alone was enough to cause a trigger line to save the event;
\item TIS, triggered independent of signal - a candidate is identified as TIS if part of the event independent of the candidate was sufficient to cause a trigger line to save the event; and
\item DEC - a candidate is identified as DEC if anything in the event caused a trigger line to save the event. This includes TIS and TOS decisions, and also the case where a combination of information from the candidate and something else in the event caused a trigger line to save the event.
\end{itemize}
Since \bmumu decays are very rare, the trigger requirements are chosen to keep a high efficiency for selecting \bmumu decays at this step of the selection. Individual trigger lines are not used for the selection; instead, global trigger lines that combine the information from many separate lines are used, and candidates are required to be identified as DEC at each level of the trigger to ensure a high efficiency is achieved. The combined trigger lines used at each level of the trigger are the L0Global, Hlt1Phys and Hlt2Phys lines. The L0Global trigger combines all trigger lines present in the L0 trigger: it selects an event provided at least one L0 trigger line selects it and rejects an event if no L0 trigger line selects it. The Hlt1Phys and Hlt2Phys triggers are very similar to the L0Global trigger, except that their decisions are based only on trigger lines related to physics processes; HLT trigger lines used for calibration are excluded. The trigger requirements used to identify \bmumu decays are also used to select \bujpsik and \bsjpsiphi decays, whereas slightly different trigger requirements are used for \bhh decays: \bhh candidates are required to be TIS by the L0Global and Hlt1Phys trigger lines and TOS at the HLT2 level by specific trigger lines designed to select \bhh decays.
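As a hedged illustration of how these requirements translate into a selection (the boolean column names below are hypothetical, not the actual ntuple branch names, and the decision rates are arbitrary), candidates could be filtered as follows:
\begin{verbatim}
import numpy as np

# Hypothetical per-candidate trigger decisions (booleans), e.g. read from
# an ntuple; the column names are illustrative only.
rng = np.random.default_rng(4)
n = 1_000
cand = {
    "L0Global_Dec": rng.random(n) < 0.9,
    "Hlt1Phys_Dec": rng.random(n) < 0.8,
    "Hlt2Phys_Dec": rng.random(n) < 0.7,
    "L0Global_TIS": rng.random(n) < 0.5,
    "Hlt1Phys_TIS": rng.random(n) < 0.5,
    "Hlt2B2HH_TOS": rng.random(n) < 0.4,
}

# B -> mu mu (and the J/psi modes): DEC required at every trigger level.
pass_signal = (cand["L0Global_Dec"] & cand["Hlt1Phys_Dec"]
               & cand["Hlt2Phys_Dec"])

# B -> h h: TIS at L0 and HLT1, TOS on the dedicated HLT2 line.
pass_bhh = (cand["L0Global_TIS"] & cand["Hlt1Phys_TIS"]
            & cand["Hlt2B2HH_TOS"])

print(pass_signal.sum(), "signal-mode candidates;",
      pass_bhh.sum(), "B->hh candidates")
\end{verbatim}
The DEC decision is, by construction, the loosest of the three, which is why it is chosen for the signal and \jpsi modes.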
The TIS decision is used for \bhh decays to reduce the difference in selection efficiencies between the dominant lines that trigger \bhh and \bmumu decays. However, the efficiency of TIS decisions is quite low at the HLT2 level, therefore TOS decisions are used there so that a large enough sample of decays is retained. In summary, the requirements imposed on the trigger to select \bsmumu, \bhh, \bujpsik and \bsjpsiphi decays are shown in Table~\ref{tab:triggers}.
\begin{table}[tb]
\begin{center}
\begin{tabular}{lc}
\toprule \toprule
Trigger line & Trigger decision \\
\midrule
\multicolumn{2}{c}{{\it \bmumu, \bujpsik, \bsjpsiphi}} \\
\midrule
L0Global & DEC \\
Hlt1Phys & DEC \\
Hlt2Phys & DEC \\
\midrule
\multicolumn{2}{c}{{\it \bhh}} \\
\midrule
L0Global & TIS \\
Hlt1Phys & TIS \\
Hlt2B2HHDecision & TOS \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Trigger decisions used to select \bsmumu, \bhh, \bujpsik and \bsjpsiphi decays.}
\label{tab:triggers}
\end{center}
\vspace{-1.0cm}
\end{table}
\subsection{Cut-based selection}
\label{sec:cutbasedsel}
The \bmumu candidates that pass the required trigger decisions are refined by a cut-based selection.
The selection criteria for \bhh, \bujpsik and \bsjpsiphi decays are kept as similar as possible to those used to identify \bmumu decays in order to reduce systematic uncertainties from selection efficiencies in the normalisation procedure described in Section~\ref{sec:Normalisation}. The cut-based selection is composed of two parts: the stripping selection and the offline selection. The stripping selection, described in Section~\ref{SoftwareSimulation}, is applied to all events that pass the trigger. It consists of individual stripping lines that select reconstructed candidates for specific decays. The stripping selection used to select \bmumu, \bhh, \bujpsik and \bsjpsiphi decays in the \BF measurements published in references~\cite{Aaij:2013aka,CMS:2014xfa} is described in Section~\ref{strippingold}. These selection requirements were designed at the start of Run~1 by studying the efficiencies of different selection cuts with simulated events~\cite{Diego}. However, since then improvements have been made to the simulation of particle decays at LHCb; it is therefore prudent to check the accuracy of the selection efficiencies with updated simulated events and to investigate where improvements can be made to the efficiency of the \bmumu stripping selection. An investigation into the choice of cuts used in the stripping selection is described in Section~\ref{strippingstudies}. The offline selection cuts are applied to the output of the stripping selection. Overall, the stripping selection imposes loose requirements on \bmumu candidates so that as much information as possible remains available to develop the analysis and understand background events after the stripping selection; the offline selection therefore further refines the data, removing background candidates. The offline selection cuts are presented in Section~\ref{finalloosesel}, where the differences in the selection criteria applied to Run~1 and Run~2 data are also detailed.
\subsubsection{Development of the stripping selection}
\label{strippingold}
There are four separate stripping lines that are designed to select \bmumu, \bujpsik, \bsjpsiphi and \bhh candidates, respectively. Although the selection of all decays is kept as similar as possible to the signal selection, the selection of \bujpsik and \bsjpsiphi decays diverges from the \bmumu selection due to the additional particles in the final state of the decay. The stripping selection cuts applied for the Run~1 branching fraction analysis~\cite{CMS:2014xfa, Aaij:2013aka} to select \bmumu, \bujpsik, \bsjpsiphi and \bhh candidates are listed in Tables~\ref{tab:PreviousStrippingA} and~\ref{tab:PreviousStrippingB}.
\begin{table}[tp]
\begin{center}
\begin{tabular}{lll}
\toprule \toprule
Particle & \bmumu & \bhh \\
\midrule
\bsd & |$M_{B^{0}_{(s)}}$ - $M_{B^{0}_{(s)}}^{\mathrm{PDG}}$| $<$ 1200 \mevcc & |$M_{B}$ - $M_{B}^{\mathrm{PDG}}$| $<$ 500 \mevcc \\
 & DIRA > 0 & DIRA > 0 \\
 & \chiFD $>$ 225 & \chiFD $>$ 225 \\
 & \chiIP $<$ 25 & \chiIP $<$ 25 \\
 & \chivtx < 9 & \chivtx < 9 \\
 & DOCA $<$ 0.3 mm & DOCA $<$ 0.3 mm \\
 & & $\tau$ $<$ 13.248 \ps \\
 & & $p_{T}$ $>$ 500 \mevc \\
\\
$\mu$ or $h$ & $\chi^{2}_{\mathrm{trk}} < 3$ & $\chi^{2}_{\mathrm{trk}} < 3$ \\
 & isMuon = True & Ghost probability $<$ 0.3 \\
 & Minimum \chiIP $>$ 25 & Minimum \chiIP $>$ 25 \\
 & $p_{T}$ $>$ 0.25 \gevc & 0.25 \gevc $<$ $p_{T}$ $<$ 40 \gevc \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Selection requirements applied during the stripping selection for Run~1 data used in the \bmumu \BF analysis~\cite{Aaij:2013aka, CMS:2014xfa} to select \bmumu and \bhh decays. $M^{\mathrm{PDG}}$ corresponds to the Particle Data Group~\cite{Olive:2016xmw} mass of each particle.}
\label{tab:PreviousStrippingA}
\end{center}
\vspace{-1.0cm}
\end{table}
\begin{table}[tp]
\begin{center}
\begin{tabular}{llll}
\toprule \toprule
Particle & $B^{+} \to J/\psi(\mu^{+}\mu^{-})K^{+}$ & Particle & $B^{0}_{s} \to J/\psi(\mu^{+}\mu^{-}) \phi(K^{+}K^{-})$ \\
\midrule
$B^{+}$ & |$M_{B^{+}}$ - $M_{B^{+}}^{\mathrm{PDG}}$| $<$ 500 \mevcc & \bs & |$M_{B^{0}_{s}}$ - $M_{B^{0}_{s}}^{\mathrm{PDG}}$| $<$ 500 \mevcc \\
 & $\chi^{2}_{VTX} <$ 45 & & $\chi^{2}_{VTX} <$ 75 \\
 & \chiIP $<$ 25 & & \chiIP $<$ 25 \\
\\
\jpsi & |$M_{J/\psi}$ - $M^{\mathrm{PDG}}_{J/\psi}$| $<$ 100 \mevcc & \jpsi & |$M_{J/\psi}$ - $M^{\mathrm{PDG}}_{J/\psi}$| $<$ 100 \mevcc \\
 & DIRA > 0 & & DIRA > 0 \\
 & \chiFD $>$ 225 & & \chiFD $>$ 225 \\
 & \chivtx < 9 & & \chivtx < 9 \\
 & DOCA $<$ 0.3 mm & & DOCA $<$ 0.3 mm \\
\\
$\mu^{\pm}$ & \chitrk < 3 & $\mu^{\pm}$ & \chitrk < 3 \\
 & isMuon = True & & isMuon = True \\
 & Minimum \chiIP $>$ 25 & & Minimum \chiIP $>$ 25 \\
 & $p_{T}$ $>$ 0.25 \gevc & & $p_{T}$ $>$ 0.25 \gevc \\
\\
$K^{+}$ & \chitrk < 3 & $\phi$ & |$M_{\phi}$ - $M^{\mathrm{PDG}}_{\phi}$| $<$ 20 \mevcc \\
 & $p_{T}$ $>$ 0.25 \gevc & & Minimum \chiIP $>$ 4 \\
 & Minimum \chiIP $>$ 25 & & \\
 & & $K^{\pm}$ & \chitrk < 3 \\
 & & & $p_{T}$ $>$ 0.25 \gevc \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Selection requirements applied during the stripping selection for Run~1 data used in the \bmumu branching fraction analysis~\cite{Aaij:2013aka,CMS:2014xfa} to select \bujpsik and \bsjpsiphi decays. $M^{\mathrm{PDG}}$ corresponds to the Particle Data Group~\cite{Olive:2016xmw} mass of each particle.}
\label{tab:PreviousStrippingB}
\end{center}
\vspace{-1.0cm}
\end{table}
The variables used in the stripping selection are:
\begin{itemize}
\item the reconstructed mass ($M$) - the masses and momenta of the decay products of the $B$ meson (or \jpsi) are combined to provide its reconstructed mass. Cuts on the mass remove candidates with a reconstructed mass far from the expected value, which are consistent with background. Loose mass requirements are made in the \bmumu selection to allow for the study of semi-leptonic backgrounds in data, which have a mass lower than the \bsd mass when mis-identified as a \bmumu decay;
\item the ``direction cosine'' (DIRA) - the cosine of the angle between the momentum vector of the particle and the vector connecting the production and decay vertices\footnote{The production vertex of the $B$, or primary vertex, is identified by extrapolating the $B$ meson momentum vector towards the beam axis. The vertex closest to the intersection of the $B$ momentum and the beam axis is assigned as the primary vertex.} of the particle. For correctly reconstructed candidates the direction cosine should be very close to one; requiring candidates to have a positive value ensures that candidates travelling in the wrong direction are removed;
\item the flight distance \chisqd (\chiFD) - this is computed by performing the fit for the production vertex of a particle with and without the tracks originating from the decay vertex of the particle.
For a $B$ meson the \chiFD is likely to be large because $B$ mesons have relatively long lifetimes, so the tracks from the decay vertex do not point back towards the production vertex;
\item track fit \chisqd per degree of freedom (\chitrk) - provides a measure of the quality of a fitted track; placing an upper limit on this variable removes poor-quality tracks and backgrounds composed of poorly reconstructed decays;
\item vertex fit \chisqd per degree of freedom (\chivtx) - provides a measure of how well tracks can be combined to form a vertex; placing an upper limit on this variable removes poorly constrained vertices and backgrounds composed of poorly reconstructed decays;
\item distance of closest approach (DOCA) - the distance of closest approach of the tracks of the two daughter particles that make up a parent particle. It is computed from straight tracks in the VELO. For the decay products of a particle, for example the muons from \bmumu, this distance would ideally be zero because the muons originate from the same vertex;
\item decay time ($\tau$) - the time a particle lives as it travels from its production vertex to its decay vertex. Applying an upper decay time cut removes unphysical background decays;
\item isMuon - a particle identification variable, defined in Section~\ref{PID}, that returns True for muons and False for other particles;
\item transverse momentum ($p_{T}$) - the component of a particle's momentum perpendicular to the beam axis. Decay products of $B$ mesons are expected to have relatively high \pt due to the heavy masses of $B$ mesons; an upper limit removes unphysical backgrounds;
\item momentum ($p$) - an upper limit is placed on the momentum of a particle to remove unphysical backgrounds;
\item ghost probability - defined in Section~\ref{sec:Track_recon}, this provides the probability that a track is composed of random hits in the detector; tracks from the passage of real particles have a low ghost probability;
\item impact parameter \chisqd (\chiIP) - the change in the fit \chisqd of a primary vertex (PV) caused by removing one track from the fit. In a \bmumu decay, the \bsd is produced at the PV and therefore should have a small \chiIP value, whereas the muons are displaced from the PV and have a large \chiIP because of the relatively long lifetime of the \bsd;
\item minimum impact parameter \chisqd (minimum \chiIP) - the smallest \chiIP of the muons with respect to all PVs in the event; this requirement removes prompt muons created at any PV in the event and therefore reduces the prompt combinatorial background.
\end{itemize}
The stripping selection imposes a greater number of cuts to select \bhh decays compared to \bsmumu because \bhh decays are much more abundant; extra cuts are therefore needed to reduce the number of events passing the stripping to an acceptable level. The cuts applied only to \bhh decays in the stripping are applied to \bmumu candidates in the offline selection.
\subsubsection{Investigation of the stripping selection}
\label{strippingstudies}
An investigation into the efficiency of the selection cuts used in the stripping lines to select \bmumu, \bhh and \bujpsik decays is presented in this section. The efficiency of the \bsjpsiphi stripping line is not investigated because it is not used as a normalisation channel for the \BF measurements.
First, the efficiencies of the selection cuts used in each stripping line to select its signal decay are evaluated using simulated decays, in order to identify which cuts can be changed to improve the selection efficiency of \bmumu decays. Then, the efficiencies of the cuts used in the \bmumu stripping line are compared to the efficiencies of the \bhh and \bujpsik stripping lines for different cut values. This is done because the selection efficiencies of \bmumu and the normalisation channels must be similar to keep the systematic uncertainties of the normalisation procedure under control, and any change made to the \bmumu stripping line must be propagated through to the other stripping lines. These studies show that improvements to the selection efficiency of the \bmumu stripping line can be made by changing the cuts on the \bsd \chiFD and the muon minimum \chiIP of \bmumu candidates; these variables are chosen because the selection requirements imposed on them have the lowest efficiencies in the stripping selection, as shown in Table~\ref{tab:Run1strippingEff}. The efficiencies of a set of different cut values applied to these variables are therefore investigated. The impact of changing the requirements placed on these variables on the measurement of the \bmumu \BFs has not been evaluated. One of the main purposes of the stripping selection is to reduce the size of the data collected by LHCb to a manageable level that can be analysed; therefore the change in the amount of data retained by the stripping lines is also evaluated, and new cut values are chosen. Finally, the cuts used to select candidates in the stripping for the \BF measurements described in Chapter~\ref{sec:BFanalysis} are summarised at the end of this section.
\subsubsection*{Stripping line efficiency}
The efficiencies of the selection cuts in the \bmumu, \bhh and \bujpsik stripping lines at selecting \bmumu, \bhh and \bujpsik decays, respectively, are evaluated using simulated decays. The efficiencies are evaluated as the fraction of reconstructed decays within the detector angular acceptance that pass a stripping selection cut. Several selection requirements are applied to the simulated decays before the efficiencies are evaluated: the $p_T$ requirement on the daughter particles in the decays; the \chitrk requirement; and the requirement that all muons have isMuon = True. These cuts are applied when the \textsc{Root} files are created, as described in Section~\ref{SoftwareSimulation}, and cannot be changed.
The signal efficiencies are evaluated for each cut that is shared between all three stripping lines, as well as for the total efficiency of each line, to understand whether the lines can be made more efficient; the results are shown in Table~\ref{tab:Run1strippingEff}. The efficiencies of the \bhh and \bujpsik lines are studied as well as that of \bmumu because these channels are used for the normalisation of the branching fractions. No trigger requirements have been applied, so that only the effect of the stripping selection on the efficiencies is assessed: during the simulation of particle decays the trigger is run in {\it pass through} mode, so that all reconstructed decays are saved, not just those that have passed a trigger line.
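The uncertainties quoted in Table~\ref{tab:Run1strippingEff} are statistical. As a minimal sketch of how such per-cut efficiencies and their binomial uncertainties could be computed (the exponential toy distributions below are arbitrary stand-ins, not the simulated samples used in this dissertation), one could write:
\begin{verbatim}
import numpy as np

def cut_efficiency(passed, total):
    """Efficiency of a selection cut and its binomial uncertainty."""
    eff = passed / total
    err = np.sqrt(eff * (1.0 - eff) / total)
    return eff, err

# Toy stand-ins for the B chi2_FD and minimum muon chi2_IP of simulated,
# reconstructed B -> mu mu decays (shapes chosen arbitrarily).
rng = np.random.default_rng(1)
n = 200_000
fd_chi2 = rng.exponential(900.0, n)
min_ip_chi2 = rng.exponential(90.0, n)

cuts = [("chi2_FD > 225", fd_chi2 > 225),
        ("min chi2_IP > 25", min_ip_chi2 > 25),
        ("both cuts", (fd_chi2 > 225) & (min_ip_chi2 > 25))]
for name, mask in cuts:
    eff, err = cut_efficiency(mask.sum(), n)
    print(f"{name:16s}: ({100 * eff:.2f} +/- {100 * err:.2f}) %")
\end{verbatim}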
\afterpage{
\begin{landscape}
\begin{table}[tbp]
\begin{center}
\begin{tabular}{lrrrr}
\toprule \toprule
 & \multicolumn{4}{c}{Efficiency} \\
\cmidrule{2-5}
Requirement & \bsmumu & \bdmumu & \bhh & \bujpsik \\
\midrule
$|M_{B} - M^{\mathrm{PDG}}_{B}|$ & (100.00 $\pm$ 0.00)$\%$ & (100.00 $\pm$ 0.00)$\%$ & (98.25 $\pm$ 0.02)$\%$ & (99.73 $\pm$ 0.02)$\%$ \\
\bsd or \jpsi DIRA & (99.41 $\pm$ 0.01)$\%$ & (99.47 $\pm$ 0.01)$\%$ & (99.47 $\pm$ 0.01)$\%$ & (95.83 $\pm$ 0.08)$\%$ \\
\bsd or \jpsi \chiFD & (83.74 $\pm$ 0.06)$\%$ & (83.96 $\pm$ 0.06)$\%$ & (83.83 $\pm$ 0.06)$\%$ & (82.90 $\pm$ 0.15)$\%$ \\
\bsd or \jpsi \chiIP & (96.78 $\pm$ 0.03)$\%$ & (96.93 $\pm$ 0.03)$\%$ & (97.44 $\pm$ 0.03)$\%$ & (97.52 $\pm$ 0.06)$\%$ \\
\bsd or \jpsi \chivtx & (97.21 $\pm$ 0.03)$\%$ & (97.18 $\pm$ 0.03)$\%$ & (97.68 $\pm$ 0.02)$\%$ & (96.78 $\pm$ 0.07)$\%$ \\
\bsd or \jpsi DOCA & (99.82 $\pm$ 0.01)$\%$ & (99.80 $\pm$ 0.01)$\%$ & (99.83 $\pm$ 0.01)$\%$ & (99.58 $\pm$ 0.03)$\%$ \\
$\mu$, $h$, $K^{+}$ minimum \chiIP & (80.16 $\pm$ 0.06)$\%$ & (80.62 $\pm$ 0.06)$\%$ & (79.66 $\pm$ 0.07)$\%$ & (86.98 $\pm$ 0.14)$\%$ \\
\midrule
Total after above cuts & (71.29 $\pm$ 0.07)$\%$ & (71.82 $\pm$ 0.07)$\%$ & (70.97 $\pm$ 0.07)$\%$ & (71.30 $\pm$ 0.18)$\%$ \\
\midrule
Total after all cuts & (71.29 $\pm$ 0.07)$\%$ & (71.82 $\pm$ 0.07)$\%$ & (70.70 $\pm$ 0.07)$\%$ & (62.25 $\pm$ 0.20)$\%$ \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Signal efficiencies of the \bmumu, \bhh and \bujpsik stripping lines evaluated using simulated \bmumu, \bdkpi and \bujpsik decays, respectively. The stripping selection cuts are listed in Tables~\ref{tab:PreviousStrippingA} and~\ref{tab:PreviousStrippingB}; efficiencies are given for the cuts that are shared between all stripping lines and for the total efficiency of each line.}
\label{tab:Run1strippingEff}
\end{center}
\end{table}
\end{landscape}
}
The efficiencies for most of the stripping cuts are $\sim 97 \%$ or higher. However, the efficiencies of the cuts on the \chiFD of the \bsd or \jpsi and on the daughter \chiIP of the muon or hadron pair are lower, at $83 \%$ and $80 \%$, respectively. Therefore improvements to the stripping selection efficiencies could be achieved by altering these two selection requirements.
\subsubsection*{Comparison of different stripping lines}
The selection efficiencies of the stripping cuts are very similar across the different decays, fitting the requirement that the selection of the signal and normalisation decays used in the branching fraction measurement be as similar as possible.
The similarity of the selection efficiencies for the signal and normalisation decays is further illustrated in Figures~\ref{fig:BJpsiK} and~\ref{fig:BKPi}, which show the ratio of the selection efficiencies of \bsmumu decays to those of \bujpsik and \bdkpi decays for a range of selection cuts. With the exception of the cuts on the daughter-particle \chiIP for \bsmumu and \bujpsik, the ratio of efficiencies is well within $3\%$ of unity for the range of cut values shown. The ratio of the \bsmumu and \bujpsik efficiencies for the daughter-particle \chiIP, Figure~\ref{fig:JpsidaugtIP}, deviates markedly from unity, showing that the \chiIP distributions of the muons and the kaon are very different, as seen previously in reference~\cite{Diego}. If the \chiFD, the \bs or \jpsi \chiIP and the \chivtx selection cuts are applied to the simulated events before the daughter \chiIP requirement, the ratio of the \bsmumu and \bujpsik efficiencies is much closer to~1. The stability of the ratios of selection efficiencies across a large range of cut values shows that changing a cut value in the \bmumu selection will have a similar impact on the efficiencies of the normalisation decays.
\begin{figure}[p]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/B2JpsiK_IP.pdf}
\caption{}
\label{fig:JpsiIP}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/B2JpsiK_VTX.pdf}
\caption{}
\label{fig:Jpsovertex}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/B2JpsiK_FD.pdf}
\caption{}
\label{fig:JpsiFD}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/B2JpsiK_daugt.pdf}
\caption{}
\label{fig:JpsidaugtIP}
\end{subfigure}
\caption{Ratio of $B^{0}_{(s)}\to\mu^{+} \mu^{-}$ to $B^{+}\to J/\psi K^{+}$ stripping efficiencies when each cut is applied independently of all other cuts. The ratios are shown for cuts on the $B$ meson \chiIP (a), the $B$ meson and $J/\psi$ \chivtx (b) and \chiFD (c), and the muon and kaon \chiIP (d). The square root of each \chisqd is used to condense the $x$-axis of the plots, following the previous work in reference~\cite{Diego}.}
\label{fig:BJpsiK}
\end{figure}
\begin{figure}[p]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/Bd2KPi_IP.pdf}
\caption{}
\label{fig:KPiIP}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/Bd2KPi_VTX.pdf}
\caption{}
\label{fig:KPivertex}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/Bd2KPi_FD.pdf}
\caption{}
\label{fig:KPiFD}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/Bd2KPi_daugt_IP.pdf}
\caption{}
\label{fig:KPidaugtIP}
\end{subfigure}
\caption{Ratio of \bsmumu to \bdkpi stripping efficiencies when each cut is applied independently of all other cuts. The ratios are shown for cuts on the $B$ meson \chiIP (a), \chivtx (b) and \chiFD (c), and the muon and kaon \chiIP (d). The square root of each \chisqd is used to condense the $x$-axis of the plots, following the previous work in reference~\cite{Diego}.}
\label{fig:BKPi}
\end{figure}
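The ratios shown in Figures~\ref{fig:BJpsiK} and~\ref{fig:BKPi} are simply the signal-to-normalisation efficiency ratios evaluated for a scan of cut values. A hedged sketch of such a scan (with toy \chiFD distributions standing in for the simulated decays, so the printed numbers are illustrative only) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
# Toy stand-ins for the B chi2_FD in simulated signal and normalisation decays.
fd_signal = rng.exponential(900.0, 100_000)   # e.g. Bs -> mu mu
fd_norm   = rng.exponential(880.0, 100_000)   # e.g. B+ -> J/psi K+

# Scan cut values in sqrt(chi2_FD), as in the figures, and form the ratio.
for sqrt_cut in (5, 10, 15, 20):
    cut = sqrt_cut ** 2
    eff_signal = np.mean(fd_signal > cut)
    eff_norm = np.mean(fd_norm > cut)
    print(f"sqrt(chi2_FD) > {sqrt_cut:2d}: "
          f"efficiency ratio = {eff_signal / eff_norm:.3f}")
\end{verbatim}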
\subsubsection*{Efficiencies of different cut values}
The sets of events removed by the different cuts in the stripping selection are not independent, therefore the effect of changing one cut on the total efficiency of a stripping selection must be considered. Figure~\ref{fig:efficiencyplots} shows the total efficiency of the \bsmumu stripping line on simulated \bsmumu decays for a range of \chiFD and daughter \chiIP cut values. As expected, the lower the cut values, the more efficient the stripping line becomes. It is important that any increase in the \bsmumu selection efficiency from the stripping is not removed when the trigger requirements are applied. Figure~\ref{fig:triggereffplots} shows that the trigger efficiencies are relatively flat across a large range of \chiFD and daughter \chiIP cut values; therefore the efficiency gained by a change in the stripping selection is not lost when the trigger requirements are imposed. The selection efficiency for \bdmumu is very similar to that for \bsmumu, as seen in Table~\ref{tab:Run1strippingEff}, therefore only \bsmumu decays have been studied for different stripping selection cut values.
\clearpage
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{./Figs/Selection/sel_eff_chart.pdf}
\caption{Efficiency of the \bmumu stripping line to select \bsmumu simulated decays for a range of cuts on the \bs \chiFD and the minimum muon \chiIP.}
\label{fig:efficiencyplots}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{./Figs/Selection/Trigger.pdf}
\caption{Trigger efficiencies of \bsmumu simulated decays across a range of \bs \chiFD and minimum muon \chiIP cut values, for the trigger requirements listed in Table~\ref{tab:triggers} for \bmumu decays.}
\label{fig:triggereffplots}
\end{figure}
\clearpage
\subsubsection*{Data retention}
One of the main purposes of the stripping selection is to reduce the size of the data set, therefore the cuts cannot be made arbitrarily loose and the amount of data passing the selection must be considered. Any change applied to the \bmumu stripping line must be propagated through to the stripping lines for \bhh, \bujpsik and \bsjpsiphi decays, therefore the retention of all stripping lines must be evaluated. The efficiencies of the \bsjpsiphi stripping line have not been presented because this decay is not used as a normalisation mode for the \BF measurements but only to cross-check the results; the efficiency of this stripping line relative to the \bmumu stripping line is therefore less important. Table~\ref{tab:Retention} shows the total selection efficiency of the \bmumu stripping line alongside the amount of data retained, for the set of cuts on the \chiFD and daughter \chiIP, for the \bmumu, \bhh, \bujpsik and \bsjpsiphi stripping lines.
The pairs of cut values shown in Table~\ref{tab:Retention} are chosen to keep both cuts as tight as possible for a given \bsmumu efficiency.
\afterpage{
\begin{landscape}
\begin{table}[tbp]
\begin{center}
\begin{tabular}{ccccccc}
\toprule \toprule
\multicolumn{2}{c}{Stripping cut} & Stripping line efficiency & \multicolumn{4}{c}{Stripping line retention} \\
\midrule
\bsd $\sqrt{\chi^{2}_{\mathrm{FD}}}$ & $\mu^{\pm}$ $\sqrt{\chi^{2}_{\mathrm{IP}}}$ & \bsmumu & \bmumu & \bdkpi & \bujpsik & \bsjpsiphi \\
\midrule
15 & 5.00 & (71.29 $\pm$ 0.07)$\%$ & 1.0 & 1.0 & 1.0 & 1.0 \\
14 & 4.25 & (74.91 $\pm$ 0.07)$\%$ & 1.5 & 1.3 & 1.1 & 1.3 \\
13 & 4.00 & (76.84 $\pm$ 0.07)$\%$ & 1.8 & 1.5 & 1.2 & 1.4 \\
12 & 3.50 & (79.76 $\pm$ 0.07)$\%$ & 2.6 & 1.8 & 1.3 & 1.7 \\
11 & 3.00 & (82.72 $\pm$ 0.06)$\%$ & 3.7 & 2.4 & 1.6 & 1.9 \\
10 & 2.75 & (84.86 $\pm$ 0.06)$\%$ & 4.7 & 3.0 & 1.7 & 2.1 \\
9 & 2.50 & (86.96 $\pm$ 0.06)$\%$ & 6.8 & 3.9 & 2.0 & 2.2 \\
\bottomrule \bottomrule
\end{tabular}
\end{center}
\vspace{0.7cm}
\caption{Efficiency of the \bmumu stripping line to select \bsmumu decays and the change in the data retention of the \bmumu, \bhh, \bujpsik and \bsjpsiphi stripping lines for a range of \chiFD and daughter \chiIP cut values. The amount of data passing each selection is normalised to that of the original stripping selection cuts of $\sqrt{\chi^{2}_{\mathrm{FD}}} > 15$ and $\sqrt{\chi^{2}_{\mathrm{IP}}} > 5$. The uncertainty on the normalised retention is less than 1$\%$.}
\label{tab:Retention}
\end{table}
\end{landscape}
}
The data retention is computed by applying the stripping selection to a sub-set of 2012 data to find the number of events that pass the stripping lines for each pair of \chiFD and daughter \chiIP cuts. No trigger requirements are imposed because the stripping selection is run on the full output of the trigger. The number of events for each set of cuts is normalised to the number of events passing the original Run~1 stripping line requirements, to show the fractional increase caused by loosening the cut values. An increase of 15$\%$ in the stripping selection efficiency can be gained by using the loosest cuts in Table~\ref{tab:Retention}. However, the loosest cuts increase the amount of data passing the \bmumu stripping selection by a factor of 7 and the \bhh stripping selection by a factor of 4. Table~\ref{tab:NumEvents} shows the number of Run~1 candidates passing the original stripping selection listed in Tables~\ref{tab:PreviousStrippingA} and~\ref{tab:PreviousStrippingB}. The \bhh stripping line lets through the most candidates whereas the \bmumu stripping line saves far fewer candidates; a change in the retention of the \bhh line is therefore more significant than one in the \bmumu line. The final set of cuts used in the stripping selection must be a compromise between the selection efficiency and the amount of data that passes the selection. The studies detailed here show that using selection cuts of \bs \chiFD $>$ 121 ($\sqrt{\chi^{2}_{\mathrm{FD}}} > 11$) and minimum muon \chiIP $>$ 9 ($\sqrt{\chi^{2}_{\mathrm{IP}}} > 3$) in the stripping lines would increase the \bmumu selection efficiency from 71$\%$ to 82$\%$, while the total amount of data retained would be roughly doubled. The increase in the data retained by the \bhh, \bujpsik and \bsjpsiphi lines for equivalent cut values is smaller, and their efficiencies are similar to the \bmumu selection efficiencies.
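The choice of working point can be illustrated with the numbers from Table~\ref{tab:Retention}: the sketch below (the retention cap is a hypothetical constraint used only for illustration; the thesis balances the retention of all four lines, not just \bmumu) scans the table rows and keeps the most efficient pair of cuts whose \bmumu retention stays below the cap.
\begin{verbatim}
# Rows of the retention table: (sqrt(chi2_FD) cut, sqrt(chi2_IP) cut,
# Bs -> mu mu efficiency in %, relative Bmumu stripping-line retention).
rows = [
    (15, 5.00, 71.29, 1.0),
    (14, 4.25, 74.91, 1.5),
    (13, 4.00, 76.84, 1.8),
    (12, 3.50, 79.76, 2.6),
    (11, 3.00, 82.72, 3.7),
    (10, 2.75, 84.86, 4.7),
    ( 9, 2.50, 86.96, 6.8),
]

MAX_RETENTION = 4.0  # hypothetical cap on the allowed growth of stored data

allowed = [row for row in rows if row[3] <= MAX_RETENTION]
fd, ip, eff, ret = max(allowed, key=lambda row: row[2])
print(f"chosen cuts: sqrt(chi2_FD) > {fd}, sqrt(chi2_IP) > {ip} "
      f"(efficiency {eff}%, retention x{ret})")
\end{verbatim}
With this illustrative cap the scan lands on the $\sqrt{\chi^{2}_{\mathrm{FD}}} > 11$, $\sqrt{\chi^{2}_{\mathrm{IP}}} > 3$ row.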
Therefore, the cuts of \bs \chiFD~$>$~121 and minimum muon \chiIP~$>$~9 offer a good compromise between signal efficiency and the amount of data retained. The stripping lines have been updated to include the new, looser cut values, which will be used in future studies of \bmumu decays.
\begin{table}[tbp]
\begin{center}
\begin{tabular}{lrr}
\toprule \toprule
Stripping line & Selected events & Retention / $\%$ \\
\midrule
\bmumu & 898880 & 0.0022 \\
\bhh & 14502295 & 0.0831 \\
\bujpsik & 3344568 & 0.0087 \\
\bsjpsiphi & 456787 & 0.0011 \\
\midrule
Total & 18745743 & - \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Number of events passing the stripping lines used for the \bmumu \BF measurements in references~\cite{Aaij:2013aka,CMS:2014xfa}, which are listed in Tables~\ref{tab:PreviousStrippingA} and~\ref{tab:PreviousStrippingB}, and the percentage of the total LHCb data set that they correspond to. The total does not account for correlations between the lines; the requirements \chiFD $> 225$ and daughter \chiIP $> 25$ are used.}
\label{tab:NumEvents}
\end{center}
\vspace{-1.0cm}
\end{table}
\subsubsection*{Final stripping selection}
Although the new, looser cut values would improve the efficiency of identifying \bmumu decays at this step in the selection process, they are not used in the measurements presented in this dissertation. The multivariate classifier used to separate signal and combinatorial background decays, described in Section~\ref{sec:globalBDT}, is trained on simulated \bbbarmumux decays. As discussed in Section~\ref{sec:MCsamples}, cuts are applied to \bbbarmumux decays when the decays are simulated, and only decays that pass the original \chiFD and daughter \chiIP requirements are available in the simulated sample. To ensure the best performance of the classifier on data, the same cuts are applied to data as are applied to the samples used to train the classifier. Therefore the original cuts on \chiFD and daughter \chiIP listed in Table~\ref{tab:PreviousStrippingA} are used to select \bmumu candidates.
\subsubsection{Additional offline cuts}
\label{finalloosesel}
Additional selection requirements are applied after the stripping to remove specific backgrounds.
A lower bound is placed on the $B$ meson transverse momentum to remove pairs of muons originating from $pp \to p\mu^{+}\mu^{-} p$ interactions, and a \jpsi veto is used to remove backgrounds from \bcjpsimunu decays. Semi-leptonic \bcjpsimunu decays, where \jpsimumu, are a background to \bmumu decays when a muon from the \jpsi forms a good vertex with the muon from the $B_{c}^{+}$ decay; due to the high mass of the $B_{c}^{+}$, this can place mis-reconstructed candidates within the \bs mass window. The veto removes events where one muon from the \bmumu candidate, combined with any other oppositely charged muon in the event, satisfies $|m_{\mu\mu} - m_{J/\psi}| < 30$~\mevcc. The expected number of \bcjpsimunu events after the full selection can be found in Section~X. The offline selection of \bmumu decays includes the momentum, ghost track probability and decay time cuts made in the \bhh stripping line, which were absent from the \bmumu stripping line. In addition, a narrower mass range of 4900--6000~\mevcc is imposed to remove $B_{s}^{0} \to \mu^{+} \mu^{-} \gamma$ backgrounds; the stripping selection for \bmumu decays is kept loose to allow for the study of background decays in data. The selection applied to Run~1 and Run~2 data is the same for all variables except the track ghost probability and \chitrk: slightly looser cuts of track ghost probability~$<$~0.4 and \chitrk~$<$~4 are used in Run~2, to take advantage of changes in the reconstruction that were introduced for Run~2. Table~\ref{tab:BFfullselection}, at the end of this section, summarises all selection cuts used to identify \bmumu decays.
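As an illustration of the \jpsi veto described above, the following hedged sketch implements the same logic on hypothetical four-momenta (it is not the analysis implementation; only the known \jpsi mass and the 30~\mevcc window come from the text):
\begin{verbatim}
import math

JPSI_MASS = 3096.9  # MeV/c^2

def fails_jpsi_veto(candidate_muons, other_muons, window=30.0):
    """True if either signal muon, paired with any oppositely charged muon
    elsewhere in the event, gives a dimuon mass within `window` of m(J/psi).
    Muons are (four_momentum, charge) pairs; four-momenta are (E, px, py, pz)
    in MeV."""
    for p1, q1 in candidate_muons:
        for p2, q2 in other_muons:
            if q1 * q2 >= 0:      # keep only oppositely charged pairs
                continue
            e, px, py, pz = (p1[i] + p2[i] for i in range(4))
            mass = math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))
            if abs(mass - JPSI_MASS) < window:
                return True
    return False

# Hypothetical event: the two signal muons plus one additional muon.
signal_muons = [((35_000.0,  1_200.0, -800.0, 34_960.0), +1),
                ((28_000.0,   -900.0,  600.0, 27_970.0), -1)]
other_muons  = [((15_000.0,    400.0,  300.0, 14_990.0), -1)]
print("vetoed" if fails_jpsi_veto(signal_muons, other_muons) else "kept")
\end{verbatim}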
\subsection{Particle identification}
\label{sec:BFpid}
In the selection of \bmumu decays, particle identification variables are particularly useful to reduce the backgrounds coming from mis-identified semi-leptonic and \bhh decays, and they also help to reduce the number of combinatorial background decays. On top of the isMuon requirement used in the stripping selection, ProbNN variables, defined in Section~\ref{PID_variables}, are used. A combination of these variables,
\begin{equation}
\text{PID}_{\mu} = \text{ProbNN}\mu \times(1 - \text{ProbNN}K) \times(1 - \text{ProbNN}p),
\end{equation}
is used to refine the selection of \bmumu candidates. The ProbNN$K$ variable is effective at removing mis-identified \bhh backgrounds and the ProbNN$p$ variable is effective at removing \lambdab backgrounds. Different tunings of the algorithms used in the ProbNN variables are used to select candidates in Run~1 and 2015 data compared to 2016 data. The tunings have different efficiencies for selecting particles, therefore the cut values placed on PID$_{\mu}$ differ for each tuning. The cuts applied to data are $\text{PID}_{\mu} > 0.4$ for Run~1 and 2015 data and $\text{PID}_{\mu} > 0.8$ for 2016 data. The cut value on PID$_{\mu}$ for Run~1 and 2015 is optimised using pseudoexperiments to sufficiently reduce the background decays and give the highest sensitivity to \bdmumu decays; accurate particle identification is most important for \bdmumu decays because the backgrounds from \bhh and \lambdab decays pollute the \bd mass window. The cut value for 2016 was chosen to have the same or better background rejection as the Run~1 and 2015 cut; however, the 2016 tuning performs better, so the final cut choice has a higher efficiency for selecting \bmumu decays.
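As a small illustration of the PID$_{\mu}$ requirement (the helper functions and the example ProbNN values below are hypothetical; only the combination formula and the year-dependent thresholds come from the text):
\begin{verbatim}
def pid_mu(probnn_mu, probnn_k, probnn_p):
    """Combined muon-identification variable defined above."""
    return probnn_mu * (1.0 - probnn_k) * (1.0 - probnn_p)

def passes_pid(probnn_mu, probnn_k, probnn_p, year):
    """Apply the year-dependent PID_mu threshold quoted in the text."""
    threshold = 0.8 if year == 2016 else 0.4  # Run 1 and 2015: 0.4; 2016: 0.8
    return pid_mu(probnn_mu, probnn_k, probnn_p) > threshold

# Example: a muon-like track with small kaon and proton probabilities.
print(passes_pid(0.95, 0.05, 0.02, year=2012))  # True  (PID_mu ~ 0.88 > 0.4)
print(passes_pid(0.95, 0.40, 0.02, year=2016))  # False (PID_mu ~ 0.56 < 0.8)
\end{verbatim}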
\subsection{Multivariate Classifiers}
\label{sec:MVC}
The selection described so far removes a large number of background candidates. However, because \bmumu decays occur very rarely, the data is still dominated by long-lived combinatorial background. To improve the separation of signal and background decays, multivariate classifiers are used. A multivariate classifier is an algorithm that learns the differences between signal and background decays. The classifier is given two input samples, one containing only signal decays and the other containing only background decays, together with a set of input variables that have different distributions for signal and background decays. The classifier uses the distributions of the input variables, along with its knowledge of which decays are signal and which are background, to learn the difference between the two types of decay. The algorithm is then applied to a data set containing an unknown mixture of signal and background decays in order to separate them. For each decay the algorithm produces a number, typically between $-1$ and +1, where high values indicate signal-like decays and low values indicate background-like decays. Two multivariate classifiers are used to identify \bmumu decays. Both classifiers are of a type called a Boosted Decision Tree (BDT), described in Section~\ref{sec:GeneralBDT}. The first classifier, described in Section~\ref{BDTS}, is called the BDTS. It is used to remove candidates that are very unlikely to be signal by placing a cut on the BDTS output.
The second classifier, described in Section~\ref{sec:globalBDT}, is called the global BDT. The output of the global BDT is used to classify candidates into bins containing increasing proportions of signal candidates. The \BFs are measured from the invariant mass distribution of the two muons in bins of BDT output, as described in Chapter~\ref{sec:BFanalysis}. The BDTS is necessary to reduce the background to a more manageable level for the global BDT.

\subsubsection{Boosted Decision Trees}
\label{sec:GeneralBDT}

A BDT is made up of the combined outputs of separate decision trees. A decision tree begins with a data sample in which each decay is known to be either signal or background, together with a set of variables describing the decays. The decision tree applies a cut on the variable that is most effective at separating the signal and background in the sample and creates two sub-samples. Another cut is then applied to each of the sub-samples to further separate signal from background. This process is repeated until either a certain number of cuts, defined as the depth of the tree, has been applied or the number of candidates in each sub-sample has reached a minimum value. Each sub-sample produced at the end of the tree is called a leaf. The tree uses the knowledge of whether decays are signal or background to assign a value of +1 or $-1$ to every decay. A decay is given the value +1 if it is in a leaf where the majority of decays are signal and the value $-1$ if it is in a leaf that has a majority of background decays. The final decisions made by the tree are not perfect; some signal (background) decays will be mis-classified and given the value $-1$ (+1).

Often a single decision tree is not particularly good at classifying decays; there is no way to correct mis-classified decays in the leaves, and it is particularly sensitive to statistical fluctuations in the training samples. A BDT combines the output of numerous decision trees to improve the classification of decays and reduce the dependence of the final decisions on statistical fluctuations. A BDT starts with a decision tree and assigns weights to decays in the signal and background samples depending on whether the output of the first decision tree classified them correctly. The weighted sample is then used as the input for the training of the next decision tree. The weights are designed so that the next tree is more likely to correctly classify previously mis-classified decays. This process is repeated until a certain number of trees have been trained. The re-weighting process is known as ``boosting'' and the weights applied to the samples are taken into account when combining the output value of each decision tree into the overall output of the BDT.
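The boosting loop described above can be summarised in a short sketch. The following is an illustrative, from-scratch implementation using scikit-learn decision trees and the adaptive-boosting weight update quoted below; it is not the TMVA configuration used in the analysis, and the feature matrix \texttt{x} and label array \texttt{y} (with values $\pm 1$) are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_boosted_trees(x, y, n_trees=50, max_depth=3, beta=1.0):
    """Illustrative adaptive boosting; y contains +1 (signal) and -1 (background)."""
    weights = np.full(len(y), 1.0 / len(y))
    trees, tree_weights = [], []
    for _ in range(n_trees):
        tree = DecisionTreeClassifier(max_depth=max_depth)
        tree.fit(x, y, sample_weight=weights)
        misclassified = tree.predict(x) != y
        f = np.sum(weights[misclassified]) / np.sum(weights)
        if f <= 0.0 or f >= 0.5:       # stop if perfect or no better than guessing
            break
        w = ((1.0 - f) / f) ** beta    # weight update described in the text
        weights[misclassified] *= w    # boost the mis-classified decays
        weights /= np.sum(weights)
        trees.append(tree)
        tree_weights.append(np.log(w))  # better trees count more in the combination
    return trees, tree_weights

def bdt_output(trees, tree_weights, x):
    """Weighted combination of the tree decisions, scaled to lie in [-1, +1]."""
    votes = sum(a * t.predict(x) for t, a in zip(trees, tree_weights))
    return votes / np.sum(np.abs(tree_weights))
\end{verbatim}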
The output of a BDT will be a number between $-1$ and +1, where high numbers indicate signal and low numbers indicate background. The TMVA package~\cite{Hocker:2007ht} is used to develop and train the BDTs. The package provides several different methods of boosting. The adaptive boosting method was found to produce the BDT that is most effective at separating \bmumu decays from combinatorial background. This method of boosting assigns a weight, $w$, to decays incorrectly classified by one tree before they are used as input to the next decision tree. The weights assigned are given by
\begin{equation}
w = \frac{1 - f}{f}\text{, where } f = \frac{\text{misclassified events}}{\text{total events}}.
\end{equation}
Therefore, incorrectly classified candidates are given a higher weight than correctly classified candidates. The `speed' at which the boosting occurs is controlled by the parameter $\beta$, where $w \rightarrow w^{\beta}$. The parameter $\beta$ is specified in the training of the decision tree, and a large number of boosting steps can improve the performance of the BDT.

The ability of a BDT to correctly identify signal and background candidates depends on three main factors:
\begin{itemize}
\item the size of the training samples - a large training sample helps to prevent the BDT from being sensitive to statistical fluctuations and contains more information that the classifier can use to learn the difference between signal and background;
\item the input variables - different distributions of the input variables for signal and background candidates enable the classifier to separate the two types of candidates easily. The overall performance is insensitive to poorly discriminating variables that are included; and
\item the parameters that dictate the BDT training - the training of a BDT is specified by several parameters: the number of trees (NTrees), the tree depth (MaxDepth), the minimum number of events a leaf can contain (nEventsMin or MinNodeSize\footnote{nEventsMin is the minimum number of decays in a leaf and MinNodeSize is the number of decays in a leaf given as a percentage of the training sample size. The parameter specified in the training depends on the version of the TMVA package used.}), the `speed' at which the boosting occurs ($\beta$) and the number of cut values that a tree tries for a variable before making a decision (nCuts).
\end{itemize}
These three factors affect the performance of the BDT, although the importance of each varies. Together they are used to prevent the BDT from becoming overly sensitive to statistical fluctuations in the training sample. Such sensitivity is called overtraining; an overtrained BDT is extremely accurate at classifying the candidates in the training sample but performs poorly at classifying candidates in a statistically independent sample. Although this is less common in BDTs than in single decision trees, it can be avoided by having a sufficiently large training sample or by limiting the depth of the trees or the number of trees in the BDT.

\subsubsection{The BDTS}
\label{BDTS}
The BDTS is a Boosted Decision Tree (see Section~\ref{sec:GeneralBDT} for a detailed description) that is trained on \bsmumu and \bbbarmumux simulated decays that have passed the \bmumu selection requirements in Table~\ref{} and the additional particle identification cuts listed in Table X. The BDTS uses input variables similar to those in the stripping selection to classify events:
\begin{itemize}
\item \chiIP of the \bsd;
\item \chivtx of the \bsd;
\item direction cosine of the \bsd;
\item distance of closest approach of the tracks of the muons;
\item minimum \chiIP of the muons with respect to all primary vertices in the event; and
\item impact parameter (IP) of the \bsd, defined as the distance of closest approach of the $B$ to the primary vertex.
\end{itemize}
The signal and background samples used to train the BDTS are simulated \bsmumu decays and background candidates in a sample of Run~1 data from the mass ranges 4800 - 5000 \mevcc and 5500 - 6000 \mevcc. The selection cuts listed in Table~\ref{tab:BDTSpresel} are applied to the training samples and the training parameters used are listed in Table~\ref{tab:BDTStrainingparams}. The output of the BDTS is flattened to give a response between 0 and 1, so that signal is uniformly distributed across the range and background is peaked at zero, as illustrated in Figure~\ref{fig:FlatteningBDTS}. The BDTS is applied to all candidates passing the \bmumu, \bhh and \bujpsik stripping lines, and candidates are required to have a BDTS value above 0.05. When the BDTS is applied to \bujpsik decays, the distance of closest approach of the muons refers to the muons in the \jpsi and the \chivtx is that of the \jpsi. The performance of the BDTS at removing backgrounds is illustrated in Figure~\ref{fig:BDTSpreformance}.

\begin{table}[tbp]
\begin{center}
\begin{tabular}{ll}
\toprule \toprule
\multicolumn{2}{c}{Selection applied to BDTS training samples.} \\ \midrule
\bs & $\mu^{\pm}$\\ \midrule
\chiFD $>$ 225 & $p_{T}$ $>$ 500 \mevc \\
\chiIP $<$ 25 & \chitrk $<$ 3 \\
\chivtx $<$ 9 & Minimum \chiIP $>$ 25 \\
DOCA $<$ 0.3 mm & 0.25 \gevc $<$ $p_{T}$ $<$ 40 \gevc \\
$\tau$ $<$ 13.248 \ps & $p$ $<$ 500 \gevc \\
$p_{T}$ $>$ 500 \mevc & \\
DIRA $>$ 0 & \\ \midrule
Trigger line & Decision \\ \midrule
L0Global&DEC\\
Hlt1Phys&DEC \\
Hlt2Phys&DEC \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Selection cuts applied to select the signal and background samples used to train the BDTS. The isMuon requirement is not applied to the muons so that the BDTS can be used on \bhh decays.}
\label{tab:BDTSpresel}
\end{center}
\vspace{-1.0cm}
\end{table}

\begin{table}[tbp]
\begin{center}
\begin{tabular}{lr}
\toprule \toprule
Parameter & Value \\ \midrule
nTrees & 250 \\
nEventsMin & 400 \\
MaxDepth & 3 \\
$\beta$ & 1.0 \\
nCuts & 20 \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Training parameters used to specify the training of the BDTS.}
\label{tab:BDTStrainingparams}
\end{center}
\vspace{-1.0cm}
\end{table}

\begin{figure}[tbp]
\centering
\includegraphics[width=0.49\textwidth]{./Figs/Selection/BDTS_signal_Feb6.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/BDTS_background_Feb6.pdf}
\caption{Normalised BDTS response for a) simulated \bsmumu decays and b) \bmumu candidates in data with a mass above 5447 \mevcc, consisting of background decays.}
\label{fig:FlatteningBDTS}
\end{figure}

\begin{figure}[tbp]
\centering
\includegraphics[width=0.49\textwidth]{./Figs/Selection/BDTS_impact_2012.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/BDTS_impact_2016.pdf}
\caption{Invariant mass spectrum for \bhh decays in a) 2012 and b) 2016 data passing the selection requirements in Table~\ref{tab:BDTSpresel}, before and after the BDTS cut is applied.}
\label{fig:BDTSpreformance}
\end{figure}

\subsubsection{Global BDT}
\label{sec:globalBDT}

The global BDT is the final step in identifying \bmumu decays and it is very effective at separating them from long-lived combinatorial background decays. The discriminating power achieved by the global BDT is mostly dependent on isolation criteria. Isolation criteria provide a measure of how far away each muon from a \bmumu candidate is from other tracks in the event. The tracks of the muons from a real \bmumu decay will, in general, be far from other tracks in the event because the \bmumu decay tree contains no other tracks apart from the muons. However, long-lived combinatorial background arises from semi-leptonic decays, where the muon tracks are likely to be close to other tracks that originate from the same decay tree. Isolation criteria are very useful in the selection of very rare decays like \bsmumu because they enable background to be removed whilst keeping a high efficiency for signal decays.

Two isolation criteria are used in the global BDT: one compares long tracks in the event to the muons in \bmumu candidates and the other compares VELO tracks in the event to the muons. The definitions of the track types are given in Section~\ref{sec:Track_recon}. The isolation variables are built from the output of BDTs. For each type of track a BDT is trained on simulated \bsmumu and \bbbarmumux decays using a set of input variables that describe track and vertex properties and the separation between muons in a \bsmumu candidate and other tracks in the event. The BDT for the long track isolation criteria compares the $\mu^{+}$ from a \bsmumu candidate with all other long tracks in the event, excluding the track of the $\mu^{-}$, and gives an output for each possible $\mu^{+}$ and track pairing. The process is repeated for the $\mu^{-}$. The BDT is designed to produce high output values for muons from \bbbarmumux decays and low values for muons from \bsmumu decays. The long track isolation criteria of a \bsmumu candidate is then the sum of the highest BDT output values produced for the $\mu^{+}$ and the $\mu^{-}$.
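The construction just described can be written schematically as follows; \texttt{pairing\_bdt\_score} is a hypothetical stand-in for the trained per-pairing isolation BDT and the track objects are placeholders, so this is a sketch of the logic rather than the implementation used in the analysis.
\begin{verbatim}
def isolation_value(muon, other_tracks, pairing_bdt_score):
    """Highest isolation-BDT output over all (muon, track) pairings."""
    scores = [pairing_bdt_score(muon, track) for track in other_tracks]
    return max(scores) if scores else 0.0

def long_track_isolation(mu_plus, mu_minus, long_tracks, pairing_bdt_score):
    """Sum of the highest per-muon BDT outputs, as described in the text.

    The two muon tracks themselves are excluded from the pairings.
    """
    others = [t for t in long_tracks if t is not mu_plus and t is not mu_minus]
    return (isolation_value(mu_plus, others, pairing_bdt_score)
            + isolation_value(mu_minus, others, pairing_bdt_score))
\end{verbatim}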
The same setup is used for the VELO track isolation criteria, except that the muons are compared to VELO tracks rather than long tracks. The separation power of these isolation criteria is shown in Figure~\ref{fig:BDTvars}. Full details of the isolation variables can be found in reference~\cite{Archilli:1970886}.

The isolation criteria are used along with five other variables in the global BDT. The full list of input variables is:
\begin{itemize}
\item long track isolation criteria;
\item VELO track isolation criteria;
\item $\sqrt{\Delta \phi^{2} + \Delta \eta^{2}}$, where $\Delta \phi$ is the difference in azimuthal angles of the muons and $\Delta \eta$ the difference in the pseudo-rapidity of the muons;
\item the smallest \chiIP of the muons with respect to the primary vertex of the \bmumu candidate;
\item \chivtx of the \bsd;
\item \chiIP of the \bsd with respect to the primary vertex; and
\item the angle, $\theta$, between the momentum vector of the \bsd and the vector connecting the production and decay vertices of the \bsd.
\end{itemize}
A comparison of the signal and background distributions of the input variables in the training samples is shown in Figure~\ref{fig:BDTvars}. These variables were chosen by training a BDT beginning with the most discriminating variable, the long track isolation criteria, and adding variables to determine which improved the performance of the classifier. Only variables that significantly improved the performance were included in the global BDT. The training parameters used in the BDT are listed in Table~\ref{tab:BDTtrainingparams}. These parameters were chosen by scanning across a range of values and choosing those that gave the best performance. The performance of each BDT with different input variables and training parameters was evaluated by comparing the number of background decays from \bmumu candidates in data with masses above 5447~\mevcc remaining after different cuts on the BDT output values.
The cut values compared have the same efficiency for selecting simulated \bsmumu decays for each BDT, and the best performing BDT is the one that leaves the fewest background decays for a given signal efficiency.

\begin{figure}[tbp]
\centering
\includegraphics[width=0.49\textwidth]{./Figs/Selection/iso_vel_Mar.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/long_track_Mar.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/Arcos_Mar.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/B_IPS_Mar.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/Vertex_Mar.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/srqt_Mar.pdf}
\includegraphics[width=0.49\textwidth]{./Figs/Selection/Min_IP_Mar.pdf}
\caption{Distributions of the input variables of the global BDT from simulated \bsmumu and \bbbarmumux decays used to train the global BDT, passing the cuts in Table~\ref{tab:BDTpresel}. The input variables are the VELO track isolation criteria (a), long track isolation criteria (b), $\theta$ (c), \bs \chiIP (d), \bs \chivtx (e), $\sqrt{\Delta \phi^{2} + \Delta \eta^{2}}$ (f) and minimum muon \chiIP (g).}
\label{fig:BDTvars}
\end{figure}

Simulated \bsmumu and \bbbarmumux decays are used to provide large signal and background training samples for the global BDT. The production of such a large simulated \bbbarmumux sample requires a lot of disk space, therefore several measures were taken to reduce the size needed to save the simulated \bbbarmumux decays. The cuts listed in Table~\ref{tab:MC_decays} were applied to the simulated decays as they were generated, to reduce the number of events saved on disk. In addition, the stripping selection cuts in Table~\ref{tab:PreviousStrippingA} were applied and candidates that did not pass the stripping selection were not saved. Unfortunately, the \bbbarmumux sample therefore does not include candidates that are selected by the looser stripping selection described in Section~\ref{sec:cutbasedsel}. In order to obtain the best performance of the BDT on data, the same cuts that are applied to the training samples are applied to the data. Therefore the original cuts on \chiFD and daughter \chiIP listed in Table~\ref{tab:PreviousStrippingA} must be used to select \bsmumu candidates. The complete list of selection requirements applied to the training samples used to develop the global BDT is given in Table~\ref{tab:BDTpresel}; the same selection is applied to \bsmumu and \bbbarmumux decays.
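To illustrate the comparison procedure described above, the following sketch finds, for each classifier, the cut that retains a chosen fraction of simulated signal and then counts how many high-mass sideband candidates survive; the score arrays are hypothetical placeholders for the BDT outputs on simulated signal and on data.
\begin{verbatim}
import numpy as np

def background_surviving(signal_scores, background_scores, signal_efficiency=0.5):
    """Cut at the value giving the requested signal efficiency and count
    the background candidates remaining above that cut."""
    cut = np.quantile(signal_scores, 1.0 - signal_efficiency)
    return cut, int(np.sum(background_scores > cut))

# Hypothetical BDT outputs for simulated signal and high-mass sideband data.
rng = np.random.default_rng(0)
sig_a, bkg_a = rng.normal(0.5, 0.30, 10000), rng.normal(-0.5, 0.30, 10000)
sig_b, bkg_b = rng.normal(0.4, 0.35, 10000), rng.normal(-0.5, 0.35, 10000)

for name, sig, bkg in [("BDT A", sig_a, bkg_a), ("BDT B", sig_b, bkg_b)]:
    cut, n_bkg = background_surviving(sig, bkg, signal_efficiency=0.5)
    print(name, cut, n_bkg)
\end{verbatim}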
\begin{table}[tbp]
\begin{center}
\begin{tabular}{lr}
\toprule \toprule
Parameter & Value \\ \midrule
nTrees & 1000 \\
MinNodeSize & 1$\%$ \\
MaxDepth & 3 \\
$\beta$ & 0.75 \\
nCuts & 30 \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Training parameters used to specify the training of the global BDT.}
\label{tab:BDTtrainingparams}
\end{center}
\vspace{-1.0cm}
\end{table}

\begin{table}[tbp]
\begin{center}
\begin{tabular}{ll}
\toprule \toprule
\multicolumn{2}{c}{Selection applied to global BDT training samples.} \\ \midrule
\bs & $\mu^{\pm}$\\ \midrule
\chiFD $>$ 225 & $p_{T}$ $>$ 500 \mevc \\
\chiIP $<$ 25 & \chitrk $<$ 3 \\
\chivtx $<$ 9 & Minimum \chiIP $>$ 25 \\
DOCA $<$ 0.3 mm & 0.25 \gevc $<$ $p_{T}$ $<$ 40 \gevc \\
$\tau$ $<$ 13.248 \ps & $p$ $<$ 500 \gevc \\
$p_{T}$ $>$ 500 \mevc & isMuon = True\\
DIRA $>$ 0 & BDTS $>$ 0.05 \\
4900 $<$ $m_{\mu\mu}$ $<$ 6000 \mevcc & \\ \midrule
Trigger line & Decision\\ \midrule
L0Global&DEC\\
Hlt1Phys&DEC \\
Hlt2Phys&DEC \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Selection cuts applied to select candidates for the signal and background samples used to train the global BDT. $m_{\mu\mu}$ is the invariant mass of the two muons in the \bmumu candidate.}
\label{tab:BDTpresel}
\end{center}
\vspace{-1.0cm}
\end{table}

\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/BDTflat_signal.pdf}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/BDTflat_bkgnd.pdf}
\end{subfigure}
\caption{Normalised output distributions for the global BDT for a) \bsmumu simulated decays and b) \bbbarmumux decays in simulation and data.}
\label{fig:FlatteningBDT}
\end{figure}

The global BDT is applied to data from all years in the same way as the BDTS. The final output of the global BDT is flattened to have a response between 0 and 1 that is uniform for signal, while the background peaks at zero. The global BDT output for signal and background is shown in Figure~\ref{fig:FlatteningBDT} for each year of data taking. The flattening is useful for the \BF measurements because a simultaneous fit is applied to the invariant mass of the two muons in the \bmumu candidate in bins of BDT output. Flattening the BDT output enables bins containing equal proportions of signal decays to be created. The signal efficiency versus the background rejection of the global BDT is shown in Figure~\ref{fig:BDTperformance} for all years of data taking; the performance is similar across all years, but Run~2 data has a slightly better background rejection for a given signal efficiency than Run~1. A comparison of the input variables used in the global BDT for each year of data taking is given in Appendix~\ref{sec:appendix1}.
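The flattening described above amounts to mapping each raw BDT output through the empirical cumulative distribution of the signal response. A minimal sketch, assuming arrays of raw outputs for simulated signal and for data, is given below; it is illustrative only.
\begin{verbatim}
import numpy as np

def flatten_bdt(raw_output, signal_reference):
    """Map raw BDT outputs to [0, 1] so that the signal distribution is uniform.

    Each value is replaced by the fraction of reference signal decays with a
    lower raw output (the empirical signal CDF).
    """
    reference = np.sort(signal_reference)
    return np.searchsorted(reference, raw_output, side="right") / len(reference)

# Hypothetical raw outputs: signal peaks at high values, background at low values.
rng = np.random.default_rng(1)
signal_raw = rng.normal(0.6, 0.2, 100000)
background_raw = rng.normal(-0.6, 0.2, 100000)

signal_flat = flatten_bdt(signal_raw, signal_raw)          # approximately uniform
background_flat = flatten_bdt(background_raw, signal_raw)  # piles up near zero
\end{verbatim}
With the output flattened in this way, bins of equal width in the flattened variable contain approximately equal proportions of signal decays, which is what makes the binned simultaneous fit convenient.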
\begin{figure}[tbp]
\centering
\includegraphics[width=0.7\textwidth]{./Figs/Selection/ROC_zoom.pdf}
\caption{Global BDT performance for 2011, 2012, 2015 and 2016 data taking conditions. The signal efficiency is calculated from \bsmumu simulated decays and the background rejection from data passing the \bsmumu selection with $m_{\mu^{+}\mu^{-}} > 5447$ \mevcc. The performance is very similar for the different data taking years, therefore only the most sensitive region is shown. The full range of BDT output values is from 0 to 1.}
\label{fig:BDTperformance}
\end{figure}

\subsection{Summary}
\label{sec:BFsummary}

The complete set of selection criteria used to identify \bmumu decays in Run~1 and Run~2 data for the \BF measurements is listed in Table~\ref{tab:BFfullselection}. The selection requirements do not remove all background decays from the data set but reduce them to a level at which the \BFs can be measured. Figure~\ref{fig:BFdata} shows a scatter plot of the mass and global BDT values for all candidates that pass the selection criteria in Run~1 and Run~2 data. The criteria to select \bhh, \bujpsik and \bsjpsiphi decays are composed of the trigger requirements listed in Table~\ref{tab:triggers}, the stripping selection in Tables~\ref{tab:PreviousStrippingA} and~\ref{tab:PreviousStrippingB}, and the cut on the BDTS output.
\begin{table}[tbp]
\begin{center}
\begin{tabular}{ll}
\toprule \toprule
Particle & \bmumu \\ \midrule
\bsd & 4900 \mevcc $<$ $m_{\mu\mu}$ $<$ 6000 \mevcc \\
 & DIRA $>$ 0 \\
 & \chiFD $>$ 225 \\
 & \chiIP $<$ 25 \\
 & \chivtx $<$ 9 \\
 & DOCA $<$ 0.3 mm \\
 & $\tau$ $<$ 13.248 \ps \\
 & $p_{T}$ $>$ 500 \mevc \\
 & BDTS $>$ 0.05 \\ \hline
 & $|m_{\mu\mu} - m_{J/\psi}| < 30$~\mevcc \\
\\
$\mu$ & \chitrk $<$ 3 (4) \\
 & Minimum \chiIP $>$ 25 \\
 & 0.25 \gevc $<$ $p_{T}$ $<$ 40 \gevc \\
 & $p$ $<$ 500 \gevc \\
 & ghost probability $<$ 0.3 (0.4) \\
 & isMuon = True \\
 & PID$^{\mathrm{Run~1} + 2015}_{\mu} > 0.4$ or PID$^{2016}_{\mu} > 0.8$ \\
\\
Trigger requirements & L0Global = DEC\\
 & Hlt1Phys = DEC\\
 & Hlt2Phys = DEC \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Selection requirements applied to select \bmumu decays for the \BF measurements; where the selection differs between Run~1 and Run~2, the Run~2 values are shown in parentheses.}
\label{tab:BFfullselection}
\vspace{-1.0cm}
\end{center}
\end{table}

\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{./Figs/Selection/hidef_Fig3.png}
\caption{Mass and global BDT values for candidates in Run~1 and Run~2 data that pass the \bmumu selection criteria. The green dashed lines show the combined \bs and \bd mass window as described at the start of this chapter.}
\label{fig:BFdata}
\end{figure}
\clearpage
\section[Selection for the \bsmumu effective lifetime measurement]{Selection for the \boldmath{\bsmumu} effective lifetime measurement}
\label{sec:ELsel}

The selection criteria used to identify particle decays for the \bsmumu \el measurement are based on the selection used to identify candidates for the \BF measurements. As well as \bsmumu decays, \bdkpi, \bskk and \bsjpsiphi decays are used to develop and validate the effective lifetime analysis strategy. There are some differences in the selection of \bsmumu and \bhh decays for the \el measurement compared to the \BF measurement, to account for the different measurement strategies and because only the \bs decay mode is required for the effective lifetime measurement. The selection of \bsjpsiphi decays is kept the same as that used for the \BF measurement.

The selection criteria used for \bsmumu and \bhh decays use the cut-based selection in Section~\ref{sec:cutbasedsel} and the BDTS requirement in Section~\ref{BDTS}. Changes are made to the trigger requirements, the mass range of candidates and the particle identification requirements. The differences in these selection requirements compared to the selection for the \BF measurements, and the motivation for these changes, are described in Sections~\ref{sec:ELtrigger},~\ref{sec:ELmass} and~\ref{sec:ELpid}, respectively. Similar to the \BF measurement, the selection for the \el measurement uses a multivariate classifier as the final step in the selection process to separate signal from combinatorial background. A study into the best classifier for the \el measurement is described in Section~\ref{sec:ELmva}. The selection criteria used to identify decays in data for the \bsmumu \el measurement are summarised in Section~\ref{sec:ELsummary}.

One important consideration for the selection of candidates for the \bsmumu \el measurement is the efficiency of the selection as a function of the \bs decay time. This efficiency is not uniform across different decay times because the selection relies on parameters such as the \chiIP and \chiFD that are correlated with the decay time of \bsmumu candidates. Since the \el is measured from the decay time distribution of the candidates, the selection efficiency as a function of decay time must be accurately evaluated in order to perform the measurement.
The efficiency is evaluated using simulated \bsmumu decays in Chapter~\ref{sec:lifetimemeasurement}, and the procedure to evaluate the efficiency and the analysis strategy are validated using \bdkpi and \bskk decays. The impact of selection requirements on the decay time efficiency of \bsmumu and \bhh decays is important in the choice of the trigger requirements and the multivariate classifier, as described in the following sections.

\subsection{Trigger requirements}
\label{sec:ELtrigger}

The same global triggers used to select decays for the branching fraction measurements are used to select \bsmumu and \bhh candidates for the effective lifetime measurement, but different trigger decisions are used. Candidates from \bsmumu decays are required to be identified as TOS or TIS at each level of the trigger. The change in trigger decisions is motivated by the use of simulated decays in evaluating the selection efficiency as a function of decay time. The efficiency of the selection criteria varies with the decay time of each candidate, and therefore the selection efficiency as a function of decay time must be well understood in order to measure the \el. Simulated decays are used in the determination of this efficiency, as described in Section~\ref{sec:DTpdfs}. The trigger efficiencies for candidates that are triggered as DEC, but not as TIS or TOS, are not well modelled in simulated decays because they depend on the underlying $pp$ event. Therefore only candidates triggered by TOS or TIS decisions are used, so that the selection as a function of decay time can be accurately modelled.
Candidates that are triggered by DEC decisions but are not included in the TIS or TOS decisions are not well modelled in simulation; this arises because the underlying event in simulated decays does not accurately describe data. Therefore these candidates are not used in data, in order to reduce the systematic uncertainties from the evaluation of the selection efficiency as a function of decay time. Candidates triggered by DEC decisions, but not TIS or TOS, do not pose the same problem for the \BF analysis because the selection and trigger efficiencies are evaluated using different methods, as discussed in Section~\ref{sec:Normalisation}.

Candidates from \bhh decays are required to be identified as TIS at each level of the trigger. In general, trigger lines designed to select particle decays containing muons have a uniform efficiency for candidates with different decay times. However, this is not the case for trigger lines designed to select \bhh decays. These lines rely on information about candidate IP and \chiIP to make decisions at the HLT level. For \bhh decays to be useful as a validation channel, the efficiency of the trigger requirements as a function of the decay time should be similar to that of the \bsmumu triggers. This is achieved by requiring decays to be TIS at each level of the trigger. In summary, the requirements imposed on the trigger to select \bsmumu and \bhh decays are shown in Table~\ref{tab:ELtriggers}.

\begin{table}[tbp]
\begin{center}
\begin{tabular}{lc}
\toprule \toprule
Trigger line & Trigger decision \\ \midrule
\multicolumn{2}{c}{{\it \bsmumu}} \\ \midrule
L0Global & TIS or TOS \\
Hlt1Phys & TIS or TOS \\
Hlt2Phys & TIS or TOS \\ \midrule
\multicolumn{2}{c}{{\it \bhh}} \\ \midrule
L0Global & TIS\\
Hlt1Phys & TIS \\
Hlt2Phys & TIS \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Trigger requirements used to select \bsmumu and \bhh decays for the \el measurement.}
\label{tab:ELtriggers}
\end{center}
\vspace{-1.0cm}
\end{table}

\subsection{Mass range}
\label{sec:ELmass}

The mass of \bsmumu candidates is restricted to the range 5320 - 6000 \mevcc; the motivation for the narrower mass window compared to the selection of candidates for the \BFm comes from the optimisation of the measurement strategy detailed in Section~\ref{sec:toys}. The lower mass bound now lies on the low edge of the \bs mass window; therefore \bdmumu candidates and backgrounds from mis-identified decays are almost completely removed. The dominant background left in the data set is combinatorial background.
Similarly, \bdkpi and \bskk decays have a reduced mass range compared to the selection of \bhh decays for the branching fraction measurements; \bhh decays must be in the mass range 5100 - 5500 \mevcc in order to remove contributions from exclusive backgrounds.

\subsection{Particle identification}
\label{sec:ELpid}

The particle identification requirements used for selecting candidates for the \BFm were optimised to give the greatest sensitivity to \bdmumu decays. Backgrounds from \bhh and \lambdab decays pollute the \bd mass window and must be reduced as much as possible to enable good sensitivity to \bdmumu decays. The requirement placed on the linear combination of ProbNN variables in Section~\ref{sec:BFpid} is a compromise between background rejection and signal efficiency. However, for the \elm, the \bd mode is not relevant and the mass region of selected candidates removes the majority of \bdmumu decays as well as backgrounds from \bhh and \lambdab decays. Therefore, looser particle identification requirements can be used, leading to a higher signal efficiency. The same linear combination of ProbNN variables, PID$_{\mu}$, is used because there is still a small contribution from mis-identified \bhh and \lambdab decays within the mass range. The particle identification requirements also help to reduce the number of combinatorial background decays.

Different ProbNN tunes, and consequently cut values, are used for Run 1 and 2015 data compared to 2016 data. The cuts are chosen to give similar efficiencies for each data set at selecting signal and removing background, and are listed in Table~\ref{tab:PID}. The cut values have not been optimised because there are too few candidates in data after the selection, and simulated decays are not used because particle identification variables are not well modelled in simulation. However, the chosen particle identification requirements are tight enough to make the expected number of mis-identified decays in the data set after the full selection negligible, as shown in Section~\ref{sec:BKGcontaim}.

The separation of \bhh decays into \bskk and \bdkpi decays is done using the DLL$_{K\pi}$ variable, defined in Section~\ref{PID_variables}. The DLL variables are useful to separate \bhh decays where $h$ is either a pion or a kaon because the variables compare different particle hypotheses with the pion hypothesis. The selection requirements used are given in Table~\ref{tab:PID} and are the same for each year of data taking.
\begin{table}[tbp]
\begin{center}
\begin{tabular}{llll}
\toprule \toprule
Decay & Year & Particle & PID requirements \\ \midrule
\bsmumu & 2011, 2012, 2015 & $\mu^{\pm}$ & PID$_{\mu} > 0.2$ \\
\bsmumu & 2016 & $\mu^{\pm}$ & PID$_{\mu} > 0.4$ \\ \midrule
\bdkpi & 2011, 2012, 2015, 2016 & $K^{+}$ & DLL$_{K\pi}$ $>$ 10 \\
 & & $\pi^{-}$ & DLL$_{K\pi}$ $<$ $-10$ \\ \midrule
\bskk & 2011, 2012, 2015, 2016 & $K^{\pm}$ & DLL$_{K\pi}$ $>$ 10 \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Particle identification requirements used to select \bsmumu, \bdkpi and \bskk decays for the \bsmumu \el measurement.}
\label{tab:PID}
\end{center}
\vspace{-1.0cm}
\end{table}

\subsection{Multivariate classifier}
\label{sec:ELmva}

Two multivariate classifiers are used in the selection for the \BFm to separate signal and combinatorial background decays. The BDTS is used first to remove candidates that are very unlikely to be signal and to reduce the size of the data set. The global BDT is then used to classify candidates into separate bins and a simultaneous fit is applied across the BDT bins to measure the \BFs. A different, simpler strategy is used to identify candidates for the \bsmumu effective lifetime measurement. Combinatorial background is reduced by placing a cut on the output of a multivariate classifier, and only candidates passing the selection cut are used to measure the \el. The measurement strategy is given in more detail in Chapter~\ref{sec:lifetimemeasurement}. As a consequence of the different selection methods, two classifiers may not be necessary for the measurement of the \el.

Alternative classifiers were developed for the \el measurement in parallel to the development of the global BDT, with a particular focus on how the cuts placed on the output of the classifiers affect the selection efficiency as a function of decay time. The development of classifiers for the \el measurement is described in Section~\ref{sec:dev_BDTs} and a study into whether using data or simulated decays as the background training sample produces a more effective classifier is presented in Section~\ref{sec:pref}. The impact of cuts placed on the output of the classifiers on the selection efficiency as a function of decay time is investigated in Section~\ref{sec:seleff}. A comparison between the performances of the classifiers developed for the \el measurement and the global BDT developed for the \BF measurement is made in Section~\ref{sec:BDTcomp}, and the classifier with the best performance at separating signal and combinatorial background decays is chosen. Finally, the optimal cut value placed on the chosen classifier is determined in Section~\ref{sec:globalBDToptimisation}.

\subsubsection{Development of \el multivariate classifiers}
\label{sec:dev_BDTs}

Several types of multivariate classifiers were investigated for the effective lifetime selection, and BDTs gave the best performance at separating signal from background. A range of boosting methods for the decision trees were studied and the adaptive boosting method once again yielded the best results.
However, a boosting method of particular interest for the effective lifetime measurement was the uBoost technique~\cite{Stevens:2013dya}. The uBoost method produces a classifier output that has a uniform efficiency as a function of a specified variable. The most effective input variables for achieving good signal and background separation with a BDT are also correlated with the decay time; these include the \bs IP, \chiIP, \chiFD and the isolation criteria. The final measurement of the \el relies on the efficiency being well understood. If the output of a BDT is correlated with the \bs decay time, the efficiency as a function of decay time may not have a smooth or easily understandable distribution. The uBoost method could provide a way to make modelling the efficiency as a function of decay time easier, by requiring the algorithm output to have a uniform efficiency across the decay time distribution.

The input variables used in the adaptive boosting and uBoost BDTs were chosen separately, starting from a large set of variables including kinematic and geometric variables and isolation criteria. Initially the BDTs were trained using all input variables within the set, and variables that had no impact on the BDT performance were removed until removing any of the remaining variables had a negative impact on the BDT performance. The performance of each BDT was evaluated from the integrated Receiver Operating Characteristic (ROC) curve, which plots the signal efficiency versus the background rejection. The final variable sets were different for the two boosting methods; the adaptive boosting BDT uses 11 input variables and the uBoost BDT uses 21 input variables. The full list of input variables used and the definitions of those variables are given in Appendix~\ref{sec:appendix2}.

Simulated \bsmumu decays were used as the signal training sample and simulated \bbbarmumux decays were used as the background training sample to determine the input variables used in the two types of BDTs. The selection requirements listed in Table~\ref{tab:BDTpresel}, except the BDTS requirement, were applied to the training samples of simulated decays. The simulated decays were split into two samples for both signal and background, so that the BDTs could be trained on one sample and tested on the other. The training parameters of the adaptive BDT have been optimised by iterating over a large range of different values, whereas the training parameters of the uBoost BDT were not optimised because changing the parameters has a small impact on the overall BDT performance~\cite{Stevens:2013dya}. The values used are given in Appendix~\ref{sec:appendix2}.
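The variable-selection procedure described above can be illustrated with a short sketch: each variable is tentatively removed and dropped permanently if the integrated ROC curve (the ROC AUC) does not decrease. The classifier, feature matrix \texttt{x} and label array \texttt{y} are placeholders, and scikit-learn is used purely for illustration rather than the TMVA and uBoost implementations used in the analysis.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def auc_with(columns, x, y):
    """Train on a subset of input variables and return the ROC AUC on a test split."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        x[:, columns], y, test_size=0.5, random_state=0)
    clf = AdaBoostClassifier(n_estimators=100).fit(x_tr, y_tr)
    return roc_auc_score(y_te, clf.decision_function(x_te))

def backward_elimination(x, y):
    """Drop variables whose removal does not degrade the integrated ROC curve."""
    kept = list(range(x.shape[1]))
    best = auc_with(kept, x, y)
    improved = True
    while improved and len(kept) > 1:
        improved = False
        for var in list(kept):
            trial = [v for v in kept if v != var]
            score = auc_with(trial, x, y)
            if score >= best:   # this variable has no impact on the performance
                kept, best, improved = trial, score, True
                break
    return kept, best
\end{verbatim}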
The number of events in each training sample is given in Table~\ref{tab:trainingstats}.

\begin{table}[tbp]
\begin{center}
\begin{tabular}{lr}
\toprule \toprule
Sample & Number of decays \\ \midrule
Simulated \bsmumu & 668292 \\
Simulated \bbbarmumux & 586586 \\
Data & 189077\\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Number of candidates present in each training sample after the selection cuts have been applied. Simulated decays and decays in data were identified as candidates that pass the selection requirements listed in Table~\ref{tab:BDTpresel}, except that the BDTS cut was not applied; the decays in data must be in the mass range 5447 - 6000 \mevcc.}
\label{tab:trainingstats}
\end{center}
\vspace{-1.0cm}
\end{table}

\subsubsection{Investigation of background training samples}
\label{sec:pref}

The performance of the BDTs with different boosting methods trained on two different background training samples is compared. One sample consists of simulated \bbbarmumux decays and the other of combinatorial background decays in Run 1 data. At the time of the BDT development, only Run~1 data was available. The selection requirements listed in Table~\ref{tab:BDTpresel}, except the BDTS requirement, are applied to the training samples of simulated decays. Combinatorial background decays in data are identified as \bsmumu candidates that pass the selection requirements listed in Table~\ref{tab:BDTpresel}, except the BDTS requirement, and have an invariant mass of the two muons in the range 5447 - 6500 \mevcc, outside the \bs mass window. The number of events in each training sample is given in Table~\ref{tab:trainingstats}.

The different background samples are investigated to determine which would produce the BDT with the best performance. The final BDT is used to separate signal from combinatorial background in data; therefore using data as the background training sample could lead to better performance of the BDT. However, there are fewer events in the data background sample than in the sample from simulation, which could limit the performance. Also, a BDT trained on a combination of data and simulated decays could be sensitive to differences between data and simulation, which could degrade its performance.

The performance of the BDTs using either simulated decays or data as the background sample was evaluated using \bhh decays in Run 1 data. No particle identification variables are used as input variables of the BDTs, due to the mis-modelling of particle identification variables in simulated decays, therefore the performance of the BDTs on \bhh decays should be very similar to that on \bsmumu decays. \bhh decays in data were identified by the same selection requirements applied to the BDT training samples of simulated decays, except that the isMuon requirement was not applied. Also, no particle identification requirements were used to separate different \bhh decays.

The performance of the BDTs is evaluated by determining the signal significance for a range of cuts placed on the output of each BDT. The signal significance is given by
\begin{equation}
\mathcal{S} = \frac{S}{\sqrt{S+B}}
\label{eq:SigSigf}
\end{equation}
where $S$ ($B$) is the number of signal (background) decays. An unbinned \ml fit is performed on the \bhh mass distribution, where all \bhh decays are reconstructed as \bsmumu, to find the signal and background yields for each cut value.
In the mass fit, the \bhh mass distribution is modelled with a Gaussian function and the combinatorial background decays with an exponential function. An example of the mass fit is given in Figure~\ref{fig:massEG}. The numbers of signal and background decays used to calculate the signal significance are found as the signal and background yields within 3$\sigma$ of the centre of the \bhh mass peak, where $\sigma$ is the width of the Gaussian function. It is assumed that the larger number of \bhh decays present in data compared to \bmumu decays has a negligible effect on which BDT is the most effective at separating signal from background. Therefore the optimal BDT for separating \bhh decays from combinatorial background is the same as the optimal BDT to separate \bmumu decays from combinatorial background.

The signal significances as a function of the cut value placed on the BDT output, for the BDTs trained on the different background samples, are shown in Figure~\ref{fig:SSelBDTs}. The outputs of the BDTs are not flattened; the adaptive boosting BDT gives output values between $-1$ and +1 and the uBoost BDT gives output values between 0 and +1. The signal significance of the BDTs trained on simulated decays is higher than that of the BDTs trained on data. Therefore, from now on only BDTs trained using simulated decays as the background training sample will be considered.

\begin{figure}[tbp]
\centering
\includegraphics[width=0.85\textwidth]{./Figs/Selection/mass_example.pdf}
\caption{Example of the mass fit to \bhh Run 1 data used to find the signal significance for the adaptive boosting BDT with a cut value of 0.0 on the BDT output. The BDT produces a response between $-1$ and +1.}
\label{fig:massEG}
\end{figure}

\begin{figure}[tbp]
\centering
\includegraphics[width=0.49 \textwidth]{./Figs/Selection/BDT_data_MC_comp.pdf}
\includegraphics[width=0.49 \textwidth]{./Figs/Selection/uBoost_data_MC_comp.pdf}
\caption{Signal significance from \bhh decays in Run 1 data of the a) adaptive boosting and b) uBoost BDTs trained using simulated decays and data as the background training samples.}
\label{fig:SSelBDTs}
\end{figure}

Both the adaptive and uBoost BDTs shown in Figure~\ref{fig:SSelBDTs} were trained without applying the BDTS cut to the training samples. However, the signal significance on \bhh decays has also been evaluated with the BDTS cut applied both after the BDT training and before the BDT training. The improvement in the overall performances of the BDTs is small, but applying the BDTS cut to \bhh decays after the BDT training produces the highest signal significances.

\subsubsection{Selection efficiency with decay time}
\label{sec:seleff}
%, however the performance of the uBoost boosting method is worse than the adaptive BDT method.
The selection efficiency as a function of decay time has been evaluated in simulated \bsmumu decays after all selection requirements and a range of different cut values on the outputs of the adaptive and uBoost BDTs trained on simulated decays. The cut values are chosen to have the same selection efficiencies for each algorithm. The efficiencies are shown in Figure~\ref{fig:accptsELBDTs}. The shape of the selection efficiency as a function of decay time for the uBoost BDT is the same for each cut placed on the BDT response, whereas the shape of the decay time efficiency of the adaptive boosting BDT changes as the cut placed on the BDT output changes. For the adaptive boosting BDT, tighter cuts on the output remove a greater proportion of decays with short decay times than with longer decay times. Both algorithms have a smooth efficiency as a function of decay time; therefore, with either algorithm the efficiency as a function of decay time can be well modelled.

Ideally, to reduce systematic uncertainties on the measurement of the effective lifetime described in Chapter~\ref{sec:systematics}, the selection would not bias the decay time distribution. However, the variables that are most effective at separating \bsmumu decays from backgrounds, such as the isolation criteria, the \bs \chiIP and \bs \chiFD, are highly correlated with the \bs decay time. Removing these variables from the selection procedure would make it less effective at removing background decays, and the sensitivity to \bsmumu decays would be reduced. Since \bsmumu decays occur very rarely and very few are expected in the current dataset, such an unbiased selection would not be appropriate at this time, because it would make identifying any \bsmumu decays in the dataset much more challenging.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.49 \textwidth]{./Figs/Selection/BDT_acceptances.pdf}
\includegraphics[width=0.49 \textwidth]{./Figs/Selection/uBoost_accpt.pdf}
\caption{Selection efficiency as a function of decay time of simulated 2012 \bsmumu decays for the a) adaptive and b) uBoost BDTs. The selection requirements applied to the training sample are applied to the simulated decays, and cuts are placed on the BDT output so that the efficiency of the cut on decays passing the other selection requirements is 100, 75, 50 and 25$\%$.}
\label{fig:accptsELBDTs}
\end{figure}

\subsubsection{Classifier performance comparison}
\label{sec:BDTcomp}

The final classifier used to select \bsmumu candidates is the BDT that has the greatest separation power between signal and combinatorial background decays, and consequently removes the most combinatorial background decays for a given signal efficiency, provided the selection efficiency as a function of decay time can be accurately modelled. The performances of the BDTs developed for the effective lifetime measurement are compared to that of the global BDT used to classify candidates for the branching fraction measurements. Two different approaches are used to evaluate the performances: the signal significance of \bhh decays as a function of BDT cut values; and the rejection of \bsmumu backgrounds in data. In order to enable easy comparison of the different BDTs, the outputs of the BDTs developed for the effective lifetime measurement have been flattened in the same way as the global BDT, described in Section~\ref{sec:MVC}.

The signal significance for each BDT is evaluated on \bhh decays in Run~1 data, and the maximum signal significance is found in the same way as described earlier for comparing BDTs trained with data or simulated decays as the background sample. The BDTS cut is applied in the selection process because the global BDT was designed to be used with the BDTS, and the performance of the BDTs developed for the effective lifetime is best when the BDTS requirement is used. The results are shown in Figure~\ref{fig:SSall}; the global BDT produces the highest signal significance, but is closely followed by the adaptive boosting BDT developed for the effective lifetime measurement.

\begin{figure}[tbp]
\centering
%\includegraphics[width=0.49 \textwidth]{./Figs/Selection/BDT_comp_full_range.pdf}
\includegraphics[width=0.9 \textwidth]{./Figs/Selection/BDT_comp_zoom.pdf}
\caption{Signal significance from \bhh decays in Run 1 data of the adaptive and uBoost BDTs trained using simulated decays as the background sample, and the signal significance of the global BDT developed for the branching fraction measurement. The selection requirements listed in Table~\ref{tab:BDTpresel} are used, apart from the isMuon requirement. }
\label{fig:SSall}
\end{figure}

The purpose of the BDT is to remove combinatorial background decays passing the \bsmumu selection; therefore an additional comparison of the different algorithms is made. The numbers of combinatorial background decays present in Run 1 data that pass the \el selection criteria but lie in the mass range 5447 - 6550 \mevcc are found for a range of cuts on the output of the BDTs.
%No particle identification requirements are applied.
The same cut values are applied to each BDT and, since all the BDTs are flattened to have a uniform distribution of signal decays between 0 and 1, the cut values will have very similar signal efficiencies for each BDT. The results are given in Table~\ref{tab:bkgdsC}. The global BDT is the most effective at removing background decays for a given signal efficiency. The same comparisons were made with the BDT used in the previous analysis~\cite{Aaij:2013aka, CMS:2014xfa}, and all of the BDTs described in this dissertation have a better performance at removing background decays.

Although the global BDT, combined with the BDTS, performs best at separating signal from background decays, the efficiency as a function of decay time must also be evaluated for this algorithm, to ensure that it does not exhibit any behaviour which would make modelling the decay time efficiency challenging.

\clearpage
\begin{table}[tbp]
\begin{center}
\begin{tabular}{lrrrrrrrrrr}
\toprule \toprule
BDT & \multicolumn{9}{c}{Number of events above BDT output value} \\
\cmidrule{2-10}
 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \midrule
Global BDT & 2261 & 597 & 229 & 89 & 34 & 13 & 4 & 1 & 0 \\
Adaptive BDT & 4623 & 1395 & 513 & 215 & 77 & 32 & 15 & 4 & 2 \\
uBoost BDT & 7904 & 3344 & 1535 & 630 & 268 & 92 & 27 & 7 & 0 \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Number of candidates in Run~1 data passing the effective lifetime selection and the BDTS cut in the mass range 5447 - 6000~\mevcc.
The output of each BDT is flattened to have a uniform response between 0 and 1, therefore the cuts applied to each BDT will have approximately the same efficiency.}
\label{tab:bkgdsC}
\end{center}
\vspace{-0.5cm}
\end{table}

\begin{figure}[tbp]
\centering
\includegraphics[width=0.65 \textwidth]{./Figs/Selection/BDT1_acceptances.pdf}
\caption{Selection efficiency as a function of decay time of simulated 2012 \bsmumu decays for the global BDT. The selection requirements applied to the training sample are applied to the simulated decays, and cuts are placed on the BDT output so that the efficiency of the cut on already selected events is 100, 75, 50 and 25$\%$. }
\label{fig:accptsBFBDTs}
\end{figure}

\noindent The decay time efficiency is shown in Figure~\ref{fig:accptsBFBDTs} for several cut values on the BDT output and is a smooth distribution as a function of decay time. For the data set used to measure the \bsmumu effective lifetime, the expected number of \bsmumu decays is very low; therefore the benefits of using the uBoost method are outweighed by its poorer performance. Since the global BDT developed for the \BFm has the best performance in both tests and a smooth decay time efficiency, it is the best BDT to use to select events for the effective lifetime measurement.

\subsubsection{Optimisation of BDT cut choice}
\label{sec:globalBDToptimisation}

A cut is placed on the output of the global BDT to select \bsmumu decays. The cut value is optimised to give the smallest expected uncertainty on the measurement of the \bsmumu effective lifetime, \tmumu. The optimisation is done using pseudoexperiments that generate the expected numbers of \bsmumu and combinatorial background decays in Run~1 and Run~2 data for different cuts on the global BDT output. The fit procedure used to extract \tmumu from the data is described in Chapter~\ref{sec:lifetimemeasurement}. The pseudoexperiments used to optimise the global BDT cut value are performed following these steps:
\begin{itemize}
\item the mass and decay time distributions for the expected numbers of \bsmumu and combinatorial background events are generated using the expected mass and decay time probability density functions (\pdfs);
\item an unbinned maximum likelihood fit is performed to the invariant mass distribution of the two muons, where the \bsmumu and combinatorial background yields are free to float in the fit along with the slope, $\lambda$, of the combinatorial background \pdf; and
\item the mass fit is used to compute sWeights using the sPlot method \cite{Pivk:2004ty} and a maximum likelihood fit is performed to the sWeighted decay time distribution to extract \tmumu.
\end{itemize}
%Full details of the toy experiment set up and the probability density functions used are given in Appendix~\ref{sec:appendix3}.
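A minimal, self-contained illustration of one such pseudoexperiment is sketched below in Python. It is not the analysis code: only the yields follow Table~\ref{tab:expectednumbers}, while the peak position, mass resolution, background slope and decay time constants are placeholder values, the \pdfs{} are simplified, and the decay time acceptance is ignored.
\begin{verbatim}
# One toy pseudoexperiment (illustration only; all constants are placeholders
# except the yields, which follow the expected-candidate table for BDT > 0.55).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
N_SIG, N_BKG = 30.5, 40.6         # expected yields
M_LO, M_HI   = 5320.0, 6000.0     # mass window [MeV/c^2]
M_BS, SIG_M  = 5367.0, 25.0       # assumed peak position and resolution
LAM          = -1e-3              # assumed combinatorial slope
TAU_S, TAU_B = 1.6, 0.9           # assumed decay-time scales [ps]

def f_bkg(m, lam):                # exponential pdf normalised on the window
    return lam * np.exp(lam * m) / (np.exp(lam * M_HI) - np.exp(lam * M_LO))

# step 1: generate mass and decay-time distributions
n_s, n_b = rng.poisson(N_SIG), rng.poisson(N_BKG)
u = rng.uniform(size=n_b)
a, b = np.exp(LAM * M_LO), np.exp(LAM * M_HI)
mass = np.concatenate([rng.normal(M_BS, SIG_M, n_s),
                       np.log(a + u * (b - a)) / LAM])
time = np.concatenate([rng.exponential(TAU_S, n_s),
                       rng.exponential(TAU_B, n_b)])

# step 2: extended unbinned maximum-likelihood fit to the mass distribution
def nll(p):
    ns, nb, lam = p
    dens = ns * norm.pdf(mass, M_BS, SIG_M) + nb * f_bkg(mass, lam)
    return (ns + nb) - np.sum(np.log(dens))

ns_f, nb_f, lam_f = minimize(nll, [N_SIG, N_BKG, LAM],
                             bounds=[(1, 300), (1, 300), (-5e-3, -1e-5)]).x

# step 3: sWeights from the mass fit, then a simple sWeighted lifetime estimate
fs, fb = norm.pdf(mass, M_BS, SIG_M), f_bkg(mass, lam_f)
den = ns_f * fs + nb_f * fb
Vinv = np.array([[np.sum(fs * fs / den**2), np.sum(fs * fb / den**2)],
                 [np.sum(fs * fb / den**2), np.sum(fb * fb / den**2)]])
V = np.linalg.inv(Vinv)
w = (V[0, 0] * fs + V[0, 1] * fb) / den
tau_hat = np.sum(w * time) / np.sum(w)   # ML estimate for a pure exponential
print(ns_f, nb_f, tau_hat)
\end{verbatim}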
The number of expected \bsmumu and combinatorial background decays for different BDT cut values is derived from the expected number of candidates that pass the \el selection cuts and a cut on the global BDT of BDT $>$ 0.55\footnote{Initially, the observed yields from the published Run~1 \BFm were used to determine the expected number of decays present in 4.4~\fb of Run~1 and Run~2 data, and a global BDT cut of 0.55 was found to be optimal. The expected number of decays was then re-evaluated using the more sophisticated techniques described in Chapter~\ref{sec:BFanalysis}, using a global BDT cut of 0.55, and the pseudoexperiments were repeated to check that the optimal BDT cut was unchanged.}. These predictions are given in Table~\ref{tab:expectednumbers} and assume the SM branching fraction for \bsmumu decays. The methods used to evaluate the expected number of each decay are detailed in Chapter~\ref{sec:BFanalysis}.

The expected number of \bsmumu decays after different BDT cut values is straightforward to compute from the information in Table~\ref{tab:expectednumbers}. This is because the flattening procedure applied to the global BDT output means that the \bsmumu decays are evenly distributed across the BDT range. The number of combinatorial background decays expected after each BDT cut is determined from the number of decays in Table~\ref{tab:expectednumbers} and from information from simulated \bbbarmumux decays that have had all \el selection requirements applied up until the cut on the global BDT. A ratio is evaluated from \bbbarmumux decays, given by
\begin{equation}
R(Y) = \frac{\epsilon(BDT > Y)}{\epsilon(BDT > 0.55)},
\end{equation}
where $\epsilon(BDT > Y)$ is the fraction of simulated \bbbarmumux decays that have a global BDT value greater than $Y$ and $\epsilon(BDT > 0.55)$ is the fraction that have a global BDT value greater than 0.55. The expected number of combinatorial background decays after a BDT cut of $Y$ is then evaluated by multiplying $R(Y)$ by the number of decays in Table~\ref{tab:expectednumbers}. The ratios for the different cut values are shown in Table~\ref{tab:EfficiencyRatioCombBG}, along with the expected number of \bsmumu and combinatorial background decays for each BDT cut.
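As a purely illustrative example, if 10\% of the simulated \bbbarmumux decays passing the rest of the selection had a global BDT value above some cut $Y$, while 5\% were above 0.55, then $R(Y) = 0.10/0.05 = 2$, and the expected combinatorial background yield at that cut would be twice the value given in Table~\ref{tab:expectednumbers}.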
Simulated decays had to be used to compute the efficiencies rather than data, because too few candidates are left in data after tight BDT cuts to enable meaningful studies.

The mass distribution of the combinatorial background is described by an exponential function. It was observed from the simulated \bbbarmumux decays that the slope of the mass distribution changes with the BDT cut value, as illustrated in Figure~\ref{fig:BDTmasses}. The change in the slope value is accounted for in the mass distribution used in the pseudoexperiments by changing the slope parameter ($\lambda$) for each BDT cut. Table~\ref{tab:EfficiencyRatioCombBG} shows the slope of the mass distribution for different BDT cut values evaluated from simulated \bbbarmumux decays.

The results from 10,000 pseudoexperiments for BDT cut values every 0.05 in the range 0.40 - 0.65 are shown in Table~\ref{tab:selOptimisation}, with the median uncertainty of the fits for \tmumu and the signal significance for each BDT cut. The median uncertainties are used rather than the mean because the distribution of uncertainties is asymmetric. The highest signal significance and lowest expected uncertainties occur for a BDT cut of 0.55. Therefore this cut value is used to select \bsmumu decays, and the same cut is applied to the global BDT to select \bhh decays.

\begin{table}[btp]
\begin{center}
\begin{tabular}{lr}
\toprule \toprule
Decay & Expected number of candidates \\ \midrule
\bsmumu & 30.5 \\
Combinatorial background & 40.6\\
\midrule
Total & 71.1\\
\bottomrule \bottomrule
\end{tabular}
\vspace{1.0cm}
\caption{Expected number of \bsmumu and combinatorial background candidates after the \bsmumu selection requirements and with a global BDT value greater than 0.55, in the mass range 5320 $< m_{\mu^{+}\mu^{-}} <$ 6000 \mevcc. }
\label{tab:expectednumbers}
\end{center}
\vspace{-1.0cm}
\end{table}

\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/BDT0p4.pdf}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{./Figs/Selection/BDT0p55.pdf}
\end{subfigure}
\caption{Mass distribution of simulated \bbbarmumux decays that have passed the \bsmumu \el selection requirements, after global BDT cuts of a) 0.40 and b) 0.55.}
\label{fig:BDTmasses}
\end{figure}

%\clearpage
\begin{table}[tbp]
\begin{center}
\begin{tabular}{lcccc}
\toprule \toprule
BDT cut & $\mathcal{N}$(\bsmumu) & $R(Y)$ & $\mathcal{N}$(Comb.)
& $\lambda$ /$c^{2}$MeV$^{-1}$ \\ \midrule
0.40 & 40.5 & 8.69 & 269.1 & $-$0.00114 $\pm$ 0.00028 \\
0.45 & 37.2 & 3.91 & 116.2 & $-$0.00129 $\pm$ 0.00041 \\
0.50 & 33.8 & 1.91 & 56.3 & $-$0.00132 $\pm$ 0.00060 \\
0.55 & 30.5 & 1.00 & 40.6 & $-$0.00004 $\pm$ 0.00089 \\
0.60 & 27.1 & 0.55 & 22.5 & $-$0.00000 $\pm$ 0.00114 \\
0.65 & 23.8 & 0.32 & 12.4 & $-$0.00024 $\pm$ 0.00122 \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{Inputs to the pseudoexperiments used to determine the optimum global BDT cut value for the effective lifetime analysis. For each BDT cut value, the expected number of \bsmumu decays ($\mathcal{N}$(\bsmumu)), the ratio ($R(Y)$) used to find the number of combinatorial background decays, the expected number of combinatorial background decays ($\mathcal{N}$(Comb.)) and the slope of the combinatorial background mass distribution ($\lambda$) are given. }
\label{tab:EfficiencyRatioCombBG}
\end{center}
\vspace{-1.0cm}
\end{table}

\begin{table}[tbp]
\begin{center}
\begin{tabular}{lrr}
\toprule \toprule
Global BDT cut & $\frac{S}{\sqrt{S+B}}$ & $\sigma \left(\tau_{\mu\mu} \right)$ / \ps \\ \midrule
0.40 & 3.87 & 0.345 \\
0.45 & 4.51 & 0.309 \\
0.50 & 4.85 & 0.291 \\
0.55 & 4.94 & 0.285 \\
0.60 & 4.86 & 0.297 \\
0.65 & 4.65 & 0.309 \\
\bottomrule \bottomrule
\end{tabular}
\vspace{0.7cm}
\caption{The signal significance for each cut value on the global BDT and the median of the expected uncertainty on \tmumu from 10,000 pseudoexperiments generated with the expected numbers of events.}
\label{tab:selOptimisation}
\end{center}
\vspace{-1.0cm}
\end{table}

\clearpage
\subsection{Summary}
\label{sec:ELsummary}

The complete set of selection criteria used to identify \bsmumu decays in Run~1 and Run~2 data for the \elm is listed in Table~\ref{tab:fullpreselectionEL}.
The selection requirements do not remove all background decays from the data set but reduce them to a level at which the effective lifetime can be measured. The selection criteria for \bhh decays used to verify the measurement strategy are very similar to the selection used to identify \bmumu decays; the differences are in the mass range used and the trigger and particle identification requirements as discussed in Sections~\ref{sec:ELmass},~\ref{sec:ELtrigger} and~\ref{sec:ELpid}. The mass and decay time distributions for \bsmumu candidates passing the selection criteria in 4.4~\fb of Run~1 and Run~2 data are shown in Figure~\ref{fig:mass_DT}. \begin{figure}[h] \centering \includegraphics[width=0.65 \textwidth]{./Figs/Selection/mass_candidates.pdf} \includegraphics[width=0.65 \textwidth]{./Figs/Selection/lifetime_candidates.pdf} \caption{Dimuon invariant mass (a) and decay time (b) distributions for \bsmumu candidates in 4.4~\fb of Run~1 and Run~2 data passing the selection requirements in Table~\ref{tab:fullpreselectionEL}. } \label{fig:mass_DT} \end{figure} %\begin{landscape} %\vspace*{\fill} \begin{table}[tbp] \begin{center} \begin{tabular}{ll} \toprule \toprule Particle & \bsmumu \\% & \bhh \\ \midrule \bs & 5320 \mevcc $<$ $m_{\mu\mu}$ $<$ 6000 \mevcc \\% & 5000 \mevcc $<$ M $<$ 5800 \mevcc \\ & DIRA $>$ 0 \\% & DIRA $>$ 0 \\ & \chiFD $>$ 225 \\% & \chiFD $>$ 225 \\ & \chiIP $<$ 25 \\% & \chiIP $<$ 25 \\ & \chivtx $<$ 9 \\% & Vertex $\chi^{2}$/ndof $<$ 9 \\ & DOCA $<$ 0.3 mm \\% & DOCA $<$ 0.3 mm \\ & $\tau$ $<$ 13.248 \ps \\% & $\tau$ $<$ 13.248 \ps \\ & $p_{T}$ $>$ 500 \mevc \\% & $p_{T}$ $>$ 500 \mevc \\%% & BDTS > 0.05 \\% & BDTS > 0.05 \\ % & PID$^{\mathrm{Run 1} + 2015}_{\mu}$ > 0.2 or PID$^{2016}_{\mu}$ > 0.4 \\% & - \\ & $|m_{\mu\mu} - m_{J/\psi}| < 30$~\mevcc \\% &$|$m_{\mu\mu} - m_{\jpsi}$| $<$ 30$~\mevcc \\ % & PID$_{\mu}$ > 0.2 (0.4) \\% & - \\ & Global BDT > 0.55 \\ \\ $\mu$ &\chitrk $<$ 3 (4) \\% & Track $\chi^{2}$/ndof $<$ 3 (4) \\ & Minimum \chiIP $>$ 25 \\% & Minimum \chiIP $>$ 25 \\ & 0.25 \gevc $<$ $p_{T}$ $<$ 40 \gevc \\% & 0.25 \gevc $<$ $p_{T}$ $<$ 40 \gevc \\ & $p$ $<$ 500 \gevc \\% & $p$ $<$ 500 \gevc \\ % & ghost probability $<$ 0.3 (0.4) \\% & ghost probability $<$ 0.3 (0.4) \\ % & $|m_{\mu\mu} - m_{J/\psi}| < 30$~\mevcc \\% &$|$m_{\mu\mu} - m_{\jpsi}$| $<$ 30$~\mevcc \\ & isMuon = True \\% & - \\ % & PID$_{\mu}$ > 0.2 (0.4) \\% & - \\ % % & BDTS > 0.05 \\% & BDTS > 0.05 \\ & ghost probability $<$ 0.3 (0.4) \\% & ghost probability $<$ 0.3 (0.4) \\ & PID$^{\mathrm{Run 1} + 2015}_{\mu}$ > 0.2 or PID$^{2016}_{\mu}$ > 0.4 \\% & - \ % & Global BDT > 0.55 \\ \\ Trigger requirements & L0Global = TIS or TOS \\ & Hlt1Phys = TIS or TOS \\ & Hlt2Phys = TIS or TOS \\ \bottomrule \bottomrule \end{tabular} \vspace{0.7cm} \caption{Selection cuts applied to select \bsmumu for the \el measurement, where selection is different between Run~1 and Run~2 the Run~2 values are shown in parenthesis.} \label{tab:fullpreselectionEL} \end{center} \end{table} %\vspace*{\fill} %\end{landscape}
{ "alphanum_fraction": 0.7011021354, "avg_line_length": 99.7359173127, "ext": "tex", "hexsha": "80b7678563edc2d3f17fe562742ed71a33f6c3a9", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-02-19T16:03:23.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-19T16:03:23.000Z", "max_forks_repo_head_hexsha": "f930fcb2d9682beae829f11fe7c7fce4caeaee33", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "haevans/Thesis", "max_forks_repo_path": "Selection/selection.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f930fcb2d9682beae829f11fe7c7fce4caeaee33", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "haevans/Thesis", "max_issues_repo_path": "Selection/selection.tex", "max_line_length": 1241, "max_stars_count": null, "max_stars_repo_head_hexsha": "f930fcb2d9682beae829f11fe7c7fce4caeaee33", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "haevans/Thesis", "max_stars_repo_path": "Selection/selection.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 49642, "size": 192989 }
%%% Template originally created by Karol Kozioł ([email protected]) and modified for ShareLaTeX use
\documentclass[a4paper,11pt]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{xcolor}
\usepackage{sansmath}
\renewcommand\familydefault{\sfdefault}
\usepackage{tgheros}
\usepackage{amsmath,amssymb,amsthm,textcomp}
\usepackage{enumerate}
\usepackage{multicol}
\usepackage{tikz}
\usetikzlibrary{shapes, positioning}
\usepackage{geometry}
\geometry{total={210mm,297mm},
left=25mm,right=25mm,%
bindingoffset=0mm, top=20mm,bottom=20mm}
\linespread{1.3}
\newcommand{\linia}{\rule{\linewidth}{0.5pt}}
% custom theorems if needed
\newtheoremstyle{mytheor}
    {1ex}{1ex}{\normalfont}{0pt}{\scshape}{.}{1ex}
    {{\thmname{#1 }}{\thmnumber{#2}}{\thmnote{ (#3)}}}
\theoremstyle{mytheor}
\newtheorem{defi}{Definition}
% my own titles
\makeatletter
\renewcommand{\maketitle}{
\begin{center}
\vspace{2ex}
{\huge \textsc{\@title}}
\vspace{1ex}
\\
\linia\\
\@author \hfill \@date
\vspace{4ex}
\end{center}
}
\makeatother
%%%
% custom footers and headers
\usepackage{fancyhdr}
\pagestyle{fancy}
\lhead{}
\chead{}
\rhead{}
\lfoot{Automatic~Verification Assignment \#1}
\cfoot{}
\rfoot{Page \thepage}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
%
% all section titles centered and bolded
\usepackage{sectsty}
\allsectionsfont{\bfseries\large}
%
% add section label
\renewcommand\thesection{Problem~\arabic{section}:}
%
%%%----------%%%----------%%%----------%%%----------%%%

\begin{document}

\title{Homework Assignment~\#1}
\author{R02943142 Hsieh, Chiao}
%\date{01/01/2014}
\maketitle

\section{CTL Model Checking}

Consider model checking the CTL property $\textbf{AG}((at\ l_0) \to \textbf{AF}(at\ CR_0))$ (using the CTL model checking procedures in Chapter 4.1 of [CGP]) against the following Kripke structure, which represents a two-process mutual exclusion algorithm using an atomic read/write variable.
\medskip \\
Answer.
\smallskip \\
First, we normalize the property.
\smallskip \\
$
\textbf{AG}(l_0 \to \textbf{AF}(CR_0)) \\
= \neg \textbf{EF} \neg (\neg l_0 \lor \neg \textbf{EG}(\neg CR_0)) \\
= \neg \textbf{E} [ true\ \textbf{U}\ \neg (\neg l_0 \lor \neg \textbf{EG}(\neg CR_0)) ]
$
\smallskip \\
Then the following sub-formulae are labeled one by one on each state according to the model checking algorithm.\\
$
\{ \textbf{EG}(\neg CR_0), (\neg l_0 \lor \neg \textbf{EG}(\neg CR_0)), \textbf{E} [ true\ \textbf{U}\ \neg (\neg l_0 \lor \neg \textbf{EG}(\neg CR_0)) ] \}
$
\smallskip \\
To simplify the labels on the graph, let \\
$
p0 = \textbf{EG}(\neg CR_0), \\
p1 = (\neg l_0 \lor \neg p0), \\
p2 = \textbf{E} [ true\ \textbf{U}\ \neg p1 ] \\
\Rightarrow \textbf{AG}(l_0 \to \textbf{AF}(CR_0)) = \neg p2
$

To label $p0$, the new machine $M'$ is derived by simply deleting the two states labeled with $CR_0$ together with their incoming and outgoing edges. The corresponding strongly connected components~(SCCs) are then computed (see the graph below). Each of them is a single node with a self-loop. All states in $M'$ can reach an SCC, so all states in $M'$ are labeled with $p0$.
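As a cross-check of the hand computation, the two labelling procedures can be sketched in a few lines of Python (purely illustrative; it uses the equivalent fixed-point characterisations of \textbf{EG} and \textbf{EU} rather than the explicit SCC construction of [CGP], and the three-state structure below is a toy example, not the mutual exclusion model).
\begin{verbatim}
# Explicit-state labelling for EG f and E[f1 U f2] (illustration only).
def check_EU(succ, f1, f2):
    # least fixed point: start from f2, add f1-states with a successor inside
    labelled, changed = set(f2), True
    while changed:
        changed = False
        for s in succ:
            if s not in labelled and s in f1 and succ[s] & labelled:
                labelled.add(s)
                changed = True
    return labelled

def check_EG(succ, f):
    # greatest fixed point: keep f-states that have a successor staying inside
    labelled, changed = set(f), True
    while changed:
        changed = False
        for s in list(labelled):
            if not (succ[s] & labelled):
                labelled.discard(s)
                changed = True
    return labelled

# toy 3-state Kripke structure (hypothetical), CR_0 holds only in state 2
succ = {0: {1}, 1: {1, 2}, 2: {2}}
cr0 = {2}
print(check_EG(succ, set(succ) - cr0))   # EG(not CR_0)   -> {0, 1}
print(check_EU(succ, set(succ), cr0))    # E[true U CR_0] -> {0, 1, 2}
\end{verbatim}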
\begin{center} \tikzstyle{every node}=[ellipse, inner sep=1pt, draw, font=\tiny, align=center, minimum width=40pt] \begin{tikzpicture}[->,>=latex, auto, scale=1.4] \node (P0_0) at (1, 4.2) {T=0 \\ $\bot, \bot$}; \node (P0_1) at (1, 3) {T=0 \\ $l_0, l_1$}; \node (P0_2) at (0, 2) {T=0 \\ $l_0, NC_1$}; \node [draw=none, text=red, right = -0.4 of P0_2] {SCC1}; \node (P0_3) at (2, 2) {T=0 \\ $NC_0, l_1$}; \node (P0_4) at (1, 1) {T=0 \\ $NC_0, NC_1$}; \node [draw=none, text=red, right = -0.4 of P0_4] {SCC2}; \node (P1_0) at (7, 4.2) {T=1 \\ $\bot, \bot$}; \node (P1_1) at (7, 3) {T=1 \\ $l_0, l_1$}; \node (P1_2) at (6, 2) {T=1 \\ $l_0, NC_1$}; \node (P1_3) at (8, 2) {T=1 \\ $NC_0, l_1$}; \node [draw=none, text=red, left = -0.4 of P1_3] {SCC3}; \node (P1_4) at (7, 1) {T=1 \\ $NC_0, NC_1$}; \node [draw=none, text=red, left = -0.4 of P1_4] {SCC4}; \node (P1_5) at (5, 1) {T=1 \\ $l_0, CR_1$}; \node (P1_6) at (6, 0) {T=1 \\ $NC_0, CR_1$};\ \node [draw=none, text=red, below = 0 of P1_6] {SCC5}; \path (P0_0) edge (P0_1) (P0_1) edge (P0_2) edge (P0_3) (P0_2) edge [loop left] (P0_2) edge (P0_4) (P0_3) edge (P0_4) (P0_4) edge [loop left] (P0_4) (P1_0) edge (P1_1) (P1_1) edge (P1_2) edge (P1_3) (P1_2) edge (P1_4) edge (P1_5) (P1_3) edge [loop right] (P1_3) edge (P1_4) (P1_4) edge [loop right] (P1_4) edge (P1_6) (P1_5) edge (P1_6) edge [bend right] (P0_1) (P1_6) edge [loop right] (P1_6) ; \draw (P1_6) to [bend left] (4, 1) to [bend right] (P0_3); \end{tikzpicture} \end{center} \begin{center} \tikzstyle{every node}=[ellipse, inner sep=1pt, draw, font=\tiny, align=center, minimum width=40pt] \tikzstyle{ap}=[draw=none, text=blue] \begin{tikzpicture}[->,>=latex, auto, scale=1.4] \node (P0_0) at (1, 4.2) {T=0 \\ $\bot, \bot$}; \node[ap, above = 0 of P0_0] {$\{p0, p1, p2\}$}; \node (P0_1) at (1, 3) {T=0 \\ $l_0, l_1$}; \node[ap, above left = 0 of P0_1] {$\{p0, \neg p1, p2\}$}; \node (P0_2) at (0, 2) {T=0 \\ $l_0, NC_1$}; \node[ap, above = 0 of P0_2] {$\{p0, \neg p1, p2\}$}; \node (P0_3) at (2, 2) {T=0 \\ $NC_0, l_1$}; \node[ap, above = 0 of P0_3] {$\{p0, p1, p2\}$}; \node (P0_4) at (1, 1) {T=0 \\ $NC_0, NC_1$}; \node[ap, below = 0 of P0_4] {$\{p0, p1, p2\}$}; \node (P0_5) at (3, 1) {T=0 \\ $CR_0, l_1$}; \node[ap, below = 0 of P0_5] {$\{\neg p0, p1, p2\}$}; \node (P0_6) at (2, 0) {T=0 \\ $CR_0, NC_1$}; \node[ap, below = 0 of P0_6] {$\{\neg p0, p1, p2\}$}; \node (P1_0) at (7, 4.2) {T=1 \\ $\bot, \bot$}; \node[ap, above = 0 of P1_0] {$\{p0, p1, p2\}$}; \node (P1_1) at (7, 3) {T=1 \\ $l_0, l_1$}; \node[ap, above left = 0 of P1_1] {$\{p0, \neg p1, p2\}$}; \node (P1_2) at (6, 2) {T=1 \\ $l_0, NC_1$}; \node[ap, above = 0 of P1_2] {$\{p0, \neg p1, p2\}$}; \node (P1_3) at (8, 2) {T=1 \\ $NC_0, l_1$}; \node[ap, above = 0 of P1_3] {$\{p0, p1, p2\}$}; \node (P1_4) at (7, 1) {T=1 \\ $NC_0, NC_1$}; \node[ap, below = 0 of P1_4] {$\{p0, p1, p2\}$}; \node (P1_5) at (5, 1) {T=1 \\ $l_0, CR_1$}; \node[ap, below = 0 of P1_5] {$\{p0, p1, p2\}$}; \node (P1_6) at (6, 0) {T=1 \\ $NC_0, CR_1$}; \node[ap, below = 0 of P1_6] {$\{p0, p1, p2\}$}; \path (P0_0) edge (P0_1) (P0_1) edge (P0_2) edge (P0_3) (P0_2) edge [loop left] (P0_2) edge (P0_4) (P0_3) edge (P0_4) edge (P0_5) (P0_4) edge [loop left] (P0_4) edge (P0_6) (P0_5) edge (P0_6) edge [bend left] (P1_1) (P0_6) edge [loop left] (P0_6) (P1_0) edge (P1_1) (P1_1) edge (P1_2) edge (P1_3) (P1_2) edge (P1_4) edge (P1_5) (P1_3) edge [loop right] (P1_2) edge (P1_4) (P1_4) edge [loop right] (P1_4) edge (P1_6) (P1_5) edge (P1_6) edge [bend right] (P0_1) (P1_6) edge [loop right] (P1_6) ; \draw 
(P0_6) to [bend right] (4, 1) to [bend left] (P1_2);
\draw
(P1_6) to [bend left] (4, 1) to [bend right] (P0_3);
\end{tikzpicture}
\end{center}

For $p1$, we simply label those states that are missing either $l_0$ or $p0$; then, for $p2$, we search backwards to find all states that can reach a state labeled with $\neg p1$, i.e., a state not labeled with $p1$. The final labeled graph is shown above. Because both initial states are labeled with $p2$, it is clear that $\textbf{AG}(l_0 \to \textbf{AF}(CR_0))$ does not hold.

In order to prevent the starvation problem, fair computations should frequently visit the two states labeled with $CR_0$. Hence, the fairness constraint $F$ must contain at least one set $P$ containing these two states. Under this constraint, there is no fair non-trivial SCC when the \textit{CheckEG} procedure is computed for $p0$. Therefore, all states are labeled with $\neg p0$ and hence with $p1$. Since all states are labeled with $p1$, no state can ever reach a state labeled with $\neg p1$; thus all states are labeled with $\neg p2$, and this implies that $\textbf{AG}(l_0 \to \textbf{AF}(CR_0))$ always holds.

\section{LTL Model Checking}

Consider another two-process mutual exclusion algorithm via the arbitration of a binary semaphore. The Kripke structure representing this system is as follows.
\begin{center}
\begin{tikzpicture}[
    ->,>=latex, auto, scale=2,
    every node/.style={
        rectangle, inner sep=1pt, draw, font=\scriptsize, align=center,
        minimum width=50pt, minimum height=35pt
    },
    every label/.style={
        draw=none, label distance=-20pt
    }
]
\node [draw=none, minimum width=0pt, minimum height=0pt] (init) at (2, 3.7) {};
\node [label=above right:0] (s0) at (2, 3) {sem=1 \\ $s_0=\textbf{i}, s_1=\textbf{i}$};
\node [label=above right:1] (s1) at (1, 2) {sem=1 \\ $s_0=\textbf{e}, s_1=\textbf{i}$};
\node [label=above right:2] (s2) at (3, 2) {sem=1 \\ $s_0=\textbf{i}, s_1=\textbf{e}$};
\node [label=above right:3] (s3) at (0, 1) {sem=0 \\ $s_0=\textbf{c}, s_1=\textbf{i}$};
\node [label=above right:4] (s4) at (2, 1) {sem=1 \\ $s_0=\textbf{e}, s_1=\textbf{e}$};
\node [label=above right:5] (s5) at (4, 1) {sem=0 \\ $s_0=\textbf{i}, s_1=\textbf{c}$};
\node [label=above right:6] (s6) at (1, 0) {sem=0 \\ $s_0=\textbf{c}, s_1=\textbf{e}$};
\node [label=above right:7] (s7) at (3, 0) {sem=0 \\ $s_0=\textbf{e}, s_1=\textbf{c}$};
\path
(init) edge (s0)
(s0) edge (s1)
     edge (s2)
(s1) edge (s3)
     edge (s4)
(s2) edge (s4)
     edge (s5)
(s3) edge (s6)
     edge [bend left =45] (s0)
(s4) edge (s6)
     edge (s7)
(s5) edge (s7)
     edge [bend right=45] (s0)
(s6) edge [loop left ] (s6)
     edge [bend left =45] (s2)
(s7) edge [loop right] (s7)
     edge [bend right=45] (s1)
;
\end{tikzpicture}
\end{center}
Check if the system at state 1 satisfies the LTL formula $\textbf{A}((s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c}))$ (using the LTL model checking procedures in Chapter 4.2 of [CGP]). Please illustrate the model checking steps by giving the closure of the formula, relevant parts of the product graph (composed from the Kripke structure and the implicit tableau), etc.
\medskip \\
Answer.

To falsify the property $\textbf{A}[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]$, we try to prove $\textbf{E}\neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]$. Following the LTL model checking procedure, we first compute the closure of $\neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]$ and list these formulae.
\smallskip \\ $ CL(\neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]) = \{ \\ \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \\ (s_0 = \textbf{e}), \neg (s_0 = \textbf{e}), (s_0 = \textbf{c}), \neg (s_0 = \textbf{c}), \\ \textbf{X}[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X}[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \textbf{X} \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X} \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})] \\ \} $ Next, we compute the set of atoms for constructing the behavior graph. Observe that the property only considers the value of $s_0$; hence, those states labeled with same value of $s_0$ should derive very similar maximal consistent set of formulae, $K$, and the only difference comes from the atomic proposition on each state. Below is the computed $K$ for different value of $s_0$. For each state $j$, we can derive $K_j$ by disjunction of $L(j)$ and $K$ with the same value of $s_0$ in $L(j)$. E.g. we can derive $K_0$ for state 0 by $K_0 = L(0) \cup K_{s_0=\textbf{i}}$. \medskip \noindent State 0, 2, 5: $s_0 = \textbf{i}$\\ $ K_{s_0=\textbf{i}}' = \{ \neg(s_0 = \textbf{e}), \neg(s_0 = \textbf{c}), \neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X} [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \textbf{X} \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})] \} $ \\ $ K_{s_0=\textbf{i}}'' = \{ \neg(s_0 = \textbf{e}), \neg(s_0 = \textbf{c}), \neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \textbf{X}[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X} \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})] \} $ \medskip \noindent State 1, 4, 7: $s_0 = \textbf{e}$\\ $ K_{s_0=\textbf{e}}' = \{ s_0 = \textbf{e}, \neg(s_0 = \textbf{c}), \neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X} [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \textbf{X} \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})] \} $ \\ $ K_{s_0=\textbf{e}}'' = \{ s_0 = \textbf{e}, \neg(s_0 = \textbf{c}), [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \textbf{X}[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X} \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})] \} $ \medskip \noindent State 3, 6: $s_0 = \textbf{c}$\\ $ K_{s_0=\textbf{c}}' = \{ \neg(s_0 = \textbf{e}), s_0 = \textbf{c}, [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X} [(s_0 = \textbf{c}) \textbf{U} (s_0 = \textbf{c})], \textbf{X} \neg [(s_0 = \textbf{c}) \textbf{U} (s_0 = \textbf{c})] \} $ \\ $K_{s_0=\textbf{c}}'' = \{ \neg(s_0 = \textbf{e}), s_0 = \textbf{c}, [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \textbf{X}[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})], \neg \textbf{X} \neg [(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})] \} $ \medskip \noindent Notice that only following transitions are allowed according to the rule of \textbf{X} operator on $K$. 
\medskip \noindent $ K_{s_0=\textbf{i}}' \rightarrow \{K_{s_0=\textbf{i}}', K_{s_0=\textbf{i}}'', K_{s_0=\textbf{e}}'\} \\ K_{s_0=\textbf{i}}'' \rightarrow \{K_{s_0=\textbf{e}}'', K_{s_0=\textbf{c}}', K_{s_0=\textbf{c}}''\} \\ K_{s_0=\textbf{e}}' \rightarrow \{K_{s_0=\textbf{i}}', K_{s_0=\textbf{i}}'', K_{s_0=\textbf{e}}'\} \\ K_{s_0=\textbf{e}}'' \rightarrow \{K_{s_0=\textbf{e}}'', K_{s_0=\textbf{c}}', K_{s_0=\textbf{c}}''\} \\ K_{s_0=\textbf{c}}' \rightarrow \{K_{s_0=\textbf{i}}', K_{s_0=\textbf{i}}'', K_{s_0=\textbf{e}}'\} \\ K_{s_0=\textbf{c}}'' \rightarrow \{K_{s_0=\textbf{e}}'', K_{s_0=\textbf{c}}', K_{s_0=\textbf{c}}''\} $ \medskip The corresponding behavior graph is shown at next page. The SCCs are computed and indicated with different colors. Among these two SCCs, the {\color{green!50}green SCC} is self-fulfilling because all atoms in this SCC don't contain any formula in the form of [$f_1 \textbf{U} f_2$]. The {\color{red!50}red SCC} is also self-fulfilling since all atoms can reach the atom $(3, L(3) \cup K_{s_0=\textbf{c}}')$, i.e., every atom in this can reach an atom where $s_0=\textbf{c}$, and therefore the definition of a self-fulfilling SCC is satisfied. To prove $M, 1 \models \textbf{E}\neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]$, we can find the atom, $(1, L(1) \cup K_{s_0=\textbf{e}}')$. It contains $\neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]$ in $K_{s_0=\textbf{e}}'$. Combined with previous result that this atom can reach the self-fulfilling {\color{green!50}green SCC}, we can conclude that $M, 1 \models \textbf{E}\neg[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]$ is proved. Hence, the original property $\textbf{A}[(s_0 = \textbf{e}) \textbf{U} (s_0 = \textbf{c})]$ is disproved. \begin{center} \begin{tikzpicture}[ ->,>=latex, auto, scale=2, every node/.style={ rectangle, inner sep=1pt, draw, font=\scriptsize, align=center, minimum width=60pt, minimum height=35pt, SCC1/.style={fill=red!50}, SCC2/.style={fill=green!50} } ] \node [SCC1] (S0_K') at (2, 3) {$(0, L(0) \cup K_{s_0=\textbf{i}}')$}; \node [SCC2] (S1_K') at (1, 2) {$(1, L(1) \cup K_{s_0=\textbf{e}}')$}; \node [SCC1] (S2_K') at (3, 2) {$(2, L(2) \cup K_{s_0=\textbf{i}}')$}; \node [SCC1] (S3_K') at (0, 1) {$(3, L(3) \cup K_{s_0=\textbf{c}}')$}; \node [SCC2] (S4_K') at (2, 1) {$(4, L(4) \cup K_{s_0=\textbf{e}}')$}; \node [SCC1](S5_K') at (4, 1) {$(5, L(5) \cup K_{s_0=\textbf{i}}')$}; \node [SCC1] (S6_K') at (1, 0) {$(6, L(6) \cup K_{s_0=\textbf{c}}')$}; \node [SCC2] (S7_K') at (3, 0) {$(7, L(7) \cup K_{s_0=\textbf{e}}')$}; \node [SCC1] (S0_K'') at (2, -2) {$(0, L(0) \cup K_{s_0=\textbf{i}}'')$}; \node [SCC1] (S1_K'') at (1, -3) {$(1, L(1) \cup K_{s_0=\textbf{e}}'')$}; \node [SCC1] (S2_K'') at (3, -3) {$(2, L(2) \cup K_{s_0=\textbf{i}}'')$}; \node [SCC1] (S3_K'') at (0, -4) {$(3, L(3) \cup K_{s_0=\textbf{c}}'')$}; \node [SCC1] (S4_K'') at (2, -4) {$(4, L(4) \cup K_{s_0=\textbf{e}}'')$}; \node [SCC1] (S5_K'') at (4, -4) {$(5, L(5) \cup K_{s_0=\textbf{i}}'')$}; \node [SCC1] (S6_K'') at (1, -5) {$(6, L(6) \cup K_{s_0=\textbf{c}}'')$}; \node [SCC1] (S7_K'') at (3, -5) {$(7, L(7) \cup K_{s_0=\textbf{e}}'')$}; \path (S0_K') edge (S1_K') edge (S2_K') edge [bend left =75, looseness=1.2] (S2_K'') (S1_K') edge (S4_K') (S2_K') edge (S4_K') edge (S5_K') edge [bend left =75] (S5_K'') (S3_K') edge [bend left =45] (S0_K') edge [bend right=45] (S0_K'') (S4_K') edge (S7_K') (S5_K') edge (S7_K') edge [bend right=45] (S0_K') edge [bend left =45] (S0_K'') (S6_K') edge [bend left =45] (S2_K') edge [bend left 
=45] (S2_K'') (S7_K') edge [loop below] (S7_K') edge [bend right=45] (S1_K') (S0_K'') edge (S1_K'') (S1_K'') edge (S3_K'') edge (S4_K'') edge [bend left =45] (S3_K') (S2_K'') edge (S4_K'') (S3_K'') edge (S6_K'') edge [bend left] (S6_K') (S4_K'') edge (S6_K'') edge (S7_K'') edge [bend left =45] (S6_K') (S5_K'') edge (S7_K'') (S6_K'') edge [loop below] (S6_K'') edge [bend left =90, looseness=1.2] (S6_K') (S7_K'') edge [loop below] (S7_K'') edge [bend right=45] (S1_K'') ; \end{tikzpicture} \end{center} \end{document}
{ "alphanum_fraction": 0.5863354718, "avg_line_length": 34.8034351145, "ext": "tex", "hexsha": "5453073b99db3baf42fb9033f192d3f329848c5d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-02-22T00:44:03.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-22T00:44:03.000Z", "max_forks_repo_head_hexsha": "21d2d50d7cc0ebb05f08a5ff0bdba16f6a63cccb", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "hc825b/homeworks", "max_forks_repo_path": "Automatic_Verification/R02943142_Automatic_Verification_HW1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "21d2d50d7cc0ebb05f08a5ff0bdba16f6a63cccb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "hc825b/homeworks", "max_issues_repo_path": "Automatic_Verification/R02943142_Automatic_Verification_HW1.tex", "max_line_length": 147, "max_stars_count": 1, "max_stars_repo_head_hexsha": "21d2d50d7cc0ebb05f08a5ff0bdba16f6a63cccb", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "hc825b/homeworks", "max_stars_repo_path": "Automatic_Verification/R02943142_Automatic_Verification_HW1.tex", "max_stars_repo_stars_event_max_datetime": "2017-12-02T02:05:22.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-02T02:05:22.000Z", "num_tokens": 8046, "size": 18237 }
%% %% Copyright 2007, 2008, 2009 Elsevier Ltd %% %% Template article for Elsevier's document class `elsarticle' %% with numbered style bibliographic references %% SP 2008/03/01 %% \documentclass[preprint,12pt,3p]{elsarticle} \usepackage{rotating} \usepackage{subfig} %% The amssymb package provides various useful mathematical symbols \usepackage{amssymb} \usepackage{amsmath} %% The amsthm package provides extended theorem environments \usepackage{amsthm} \usepackage{times} % assumes new font selection scheme installed \usepackage{graphics} % for pdf, bitmapped graphics files \usepackage{float} \usepackage{xcolor} \definecolor{verde}{rgb}{0,0.5,0} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} \renewcommand{\sectionmark}[1]{\markboth{\thesection. #1}{}} \renewcommand{\subsectionmark}[1]{\markright{\thesubsection. #1}} \fancyhead[L]{% \leftmark } \fancyhead[R]{\rightmark} \renewcommand{\headrulewidth}{1pt} \renewcommand{\headrule}{\hbox to\headwidth{% \color{verde}\leaders\hrule height \headrulewidth\hfill}} \usepackage[colorlinks,citecolor=blue,urlcolor=blue]{hyperref} \usepackage{cleveref} \usepackage{listings} \lstset{ %frame=tb, % draw a frame at the top and bottom of the code block tabsize=4, % tab space width showstringspaces=false, % don't mark spaces in strings numbers=left, % display line numbers on the left %commentstyle=\color{verde}, % comment color %keywordstyle=\color{blue}, % keyword color %stringstyle=\color{red} % string color belowcaptionskip=1\baselineskip, breaklines=true, frame=false, xleftmargin=\parindent, basicstyle=\scriptsize\ttfamily, %basicstyle=\footnotesize\ttfamily, %edit to get bigger font keywordstyle=\bfseries\color{green!40!black}, commentstyle=\itshape\color{purple!40!black}, identifierstyle=\color{blue}, %functions stringstyle=\color{orange}, } \lstnewenvironment{ttlisting}{\lstset{ basicstyle=\scriptsize\ttfamily\color{black}, identifierstyle=\color{darkgray},}}{} %% The lineno packages adds line numbers. Start line numbering with %% \begin{linenumbers}, end it with \end{linenumbers}. Or switch it on %% for the whole article with \linenumbers after \end{frontmatter}. \usepackage{lineno} %% natbib.sty is loaded by default. However, natbib options can be %% provided with \biboptions{...} command. Following options are %% valid: %% round - round parentheses are used (default) %% square - square brackets are used [option] %% curly - curly braces are used {option} %% angle - angle brackets are used <option> %% semicolon - multiple citations separated by semi-colon %% colon - same as semicolon, an earlier confusion %% comma - separated by comma %% numbers- selects numerical citations %% super - numerical citations as superscripts %% sort - sorts multiple citations according to order in ref. 
list %% sort&compress - like sort, but also compresses numerical citations %% compress - compresses without sorting %% %% \biboptions{comma,round} % \biboptions{} \journal{Final report for Spring 2018 rotation in Harvard Biorobotics Lab} \begin{document} \begin{frontmatter} \title{ Improving end joint sensing for robotic grasping\\ (Using IMUs to estimate contact force) } %\tnotetext[label0]{This is only an example} \author{Nao Ouyang} %\address[label1]{Address One} %\address[label2]{Address Two\fnref{label4}} %\author[label1,label2]{Author One\corref{cor1}\fnref{label3}} %\address[label1]{Address One} %\address[label2]{Address Two\fnref{label4}} %\cortext[cor1]{I am corresponding author} %\fntext[label3]{I also want to inform about\ldots} %\fntext[label4]{Small city} %\ead{[email protected]} %\ead[url]{author-one-homepage.com} %\author[label5]{Author Two} %\address[label5]{Some University} %\ead{[email protected]} %\author[label1,label5]{Author Three} %\ead{[email protected]} \begin{abstract} Despite decades of research, robots remain unable to reliably grasp objects in environments not designed for robots. Thus, they are not able to work alongside us in human environments or outside buildings. One approach is to use tactile sensing to predict whether a grasp will succeed or not as well as build simulations to evaluate potential grasps. In this lab, we use an underactuated three-fingered gripper, which makes it difficult to estimate the location of the fingers and and the location of the force with respect to the robot frame of reference. \\ I investigated the use of an inertial measurement unit to provide such information. By characterizing the stiffness matrix $K$ of the finger, I could investigate whether using an inexpensive inertial measurement unit (IMU) to estimate $\theta$ deflection with the linear equation $\tau = K \theta$, I could discover information about the force applied. I did very preliminary work, limiting my experiments to the case where we only desire to estimate the change from directly before to second or two after contact. I also I looked at using of off-the-shelf machine learning approaches to reduce the errors in our model. Finally, I investigated adding more sensors to the fingertip in order to better estimate contact location. \end{abstract} \end{frontmatter} %% %% Start line numbering here if you want %% % \linenumbers %% main text \section{Introduction} State-of-the art robots are bad at grasping in unstructured environmnents, limiting their use to structured environments like factory floors. An active area of research in order for robots to expand to unstructured environments (such as kitchens or hospital environments), is the ability to grasp novel objects (objects the robot has not encountered before and may not have any prior information about). Grasp stability analysis plays a critical role in enabling robots to achieve high grasping success rates (in terms of not dropping objects). Through sensing, for instance tactile sensing, we can predict whether or not we will drop the object after grasping and prior to moving. If we find our grasp to be unstable, and assuming the object has not moved during grasping such that releasing our grip might e.g. cause it to tumble, we can simply release our grasp and try again. Physical model approaches involve modeling the contact forces, contact location, and surface normals (which will differ from the force XYZ if the surface(s) are squishy). However, over the last few decades physical models have proven insufficient. 
This is in part due to complex gripper-object interactions during grasping, which are both difficult to measure and hard to model. Thus, one of the aspects of this investigation is to evaluate whether machine learning can be used to improve physical models. We can combine the good aspects of physical models (good generalizability, requiring significantly less data than pure computer-vision machine learning approaches) with the flexibility of machine learning (not needing an extremely precise model). I tried to characterize the residuals of the physical model and determine whether those residuals could be modelled more accurately by a black-box machine learning approach than by making the physical model more complex. This is an increasingly popular approach, and it makes intuitive sense: we can leverage domain knowledge instead of using an end-to-end approach from sensor data to stability prediction.

Although there are a few compact six-axis force/torque sensors, they cost multiple thousands of dollars and are finicky and fragile. Thus, we would like to use an inexpensive nine-degree-of-freedom (three-axis accelerometer, gyroscope, and magnetometer) IMU instead.

During the course of this rotation, senior graduate student Qian Wan discovered that the current tactile sensors are insufficient for estimating the contact location accurately enough to develop a good grasp stability estimate. Thus, this project includes a brief exploration of a new sensor design using the same elements (barometric pressure sensors covered by a layer of elastomer, such as urethane), but more of them.

\section{Theory}
\label{sec:firstpage}

\subsection{Linear Model}

For the physical model, I am making a few simplifying assumptions:
\begin{enumerate}
\item There is a linear relationship between force and deflection, or alternatively torque and angle of deflection
\item The center of the axis of rotation is at the tip of the proximal joint (in reality, it is likely shifted two or three mm outward, and changes as the flex increases)
\item We consider the z axis to be defined with respect to the surface of the finger, so that z=0 is always at the tip of the finger
\end{enumerate}
Note that the consequence of \#3 is that we cannot estimate z-axis torque.

\subsection{Reference Frame}

\begin{ttlisting}[language=C++,breaklines]
Coordinates

----> + x (roll)
|
|
v
+ y (pitch)

+ z (up out of page) (yaw)

Finger Positions

 =======  ====================.
{CL} ||  --- ||  15 12  9  6  3 ||
{A}  [xy=0]-- ||  14 11  8  5  2 ||
{MP} ||  --- ||  13 10  7  4  1 ||
 =======  ====================.

  [[---]]
  || WEB ||
  || CAM ||
  ||---  ||
\end{ttlisting}

For the initial (3D stiffness matrix) experiment, points 13 through 15 are at x=2.6 cm, and each x is spaced 5 mm apart. The y's are at 0.4, 0.1, and -0.2 mm respectively. For the final experiment comparing two stiffnesses, the x spacing remains roughly the same, while the y's are now at 0.3, -0.1, and -0.4 mm respectively.
~\\

\subsection{Linear Model in the 1D case}

In the one-axis case, the math is straightforward. Experimentally, this is when I am only collecting data down the center (y-axis) of the finger, which should result in almost no roll and pitch, limiting the analysis to yaw.

\begin{align}
\tau = k \, \theta
\end{align}

For example, one datapoint might be
\begin{align}
k &= \tau / \theta \\
k &= (20\,\mathrm{g} \times 9.8\,\mathrm{m/s^2} \times r)/0.1 \text{ degrees}
\end{align}
where $k$ represents the stiffness of the finger and $r$ is the moment arm from the joint axis to the contact point. Let $c$ represent the inverse of $k$. Using least squares error, we may fit a line to find $\hat{c}$.
\subsection{Linear model residuals}

From the above, we may calculate the residuals of any of our estimated variables. For instance, from our actual data we may obtain an estimate $\hat{k}$ of the stiffness by fitting
\begin{align}
\tau_{data} &= \hat{k} \cdot \theta_{data}
\end{align}
Using this $\hat{k}$ we can go back and calculate estimates for the ``true'' torque, assuming our linear model was correct.
\begin{align}
\hat{\tau} &= \hat{k} \cdot \theta_{data}
\end{align}
We would then calculate our torque residuals as
\begin{align}
\epsilon_{\tau} = \hat{\tau} - \tau
\end{align}
If we plot the torque residuals against the torque estimates and find that our points are randomly scattered around a horizontal line at zero, then our model well-approximates reality as sensed by a noisy sensor. However, our residuals may instead follow a parabola, in which case we would want to amend our model to have higher-order terms,
\begin{align}
\hat{\tau} = \hat{k}\theta_{data} + c_1 \theta_{data}^2 + c_2 \theta_{data}^3
\end{align}
and so forth. We may eventually use machine learning techniques to fit such higher-order terms.

\subsection{Linear model in the 3D case}
\label{3d case}

The 3D case is exactly the same, except now each of the variables is a vector or a matrix.
%:s@\\\[:\\begin{bmatrix}:g
\begin{align}
\begin{bmatrix} \vec{\tau} \end{bmatrix}_{3\times n} = \begin{bmatrix} K \end{bmatrix}_{3\times 3} \cdot \begin{bmatrix} \vec{\theta} \end{bmatrix}_{3\times n}
\end{align}
where
\begin{align}
\vec{\tau} &= \begin{bmatrix} \stackrel{3\times 1}{\tau_1} | \stackrel{3\times 1}{\tau_2} | ... |\stackrel{3\times 1}{\tau_n} \end{bmatrix} \\
\vec{\theta} &= \begin{bmatrix} \stackrel{3\times 1}{\theta_1} | \stackrel{3\times 1}{\theta_2} | ... |\stackrel{3\times 1}{\theta_n} \end{bmatrix}
\end{align}

\subsection{Sanity Check: Simplify to 2D case}

We know that $\tau = r \times F$, and
\begin{align}
\begin{bmatrix}
\tau_{x} \\ \tau_{y} \\ \tau_{z}
\end{bmatrix}
= \Bigg[ \; K \; \Bigg]_{3\times3} \;
\begin{bmatrix}
\theta_{x} \\ \theta_{y} \\ \theta_{z}
\end{bmatrix}
\end{align}
Further, we can use some of the simplifying assumptions we made above to model our system, in order to have an idea of what values our math should produce for $K$. We may also see that we will only ever have x and y components for $r$, and only a z component for $F$; therefore (by how cross products work) we only ever have x and y components for $\tau$.

\textbf{Off-axis}
\begin{align}
\tau = r \times F \approx
\begin{bmatrix} r_x \\ r_y \\ 0 \end{bmatrix}
\times
\begin{bmatrix} 0 \\ 0 \\ f_z \end{bmatrix}
=
\begin{bmatrix} \tau_x \\ \tau_y \\ 0 \end{bmatrix}
\end{align}

\textbf{On-Axis}

In the even more simplified case, if we do not apply the force off-axis and there is only a pitch deflection,
\begin{align}
\tau = r \times F \approx
\begin{bmatrix} r_x \\ 0 \\ 0 \end{bmatrix}
\times
\begin{bmatrix} 0 \\ 0 \\ f_z \end{bmatrix}
=
\begin{bmatrix} 0 \\ \tau_y \\ 0 \end{bmatrix}
\end{align}

We \textbf{could} have enough nonzero values for the xyz components of $\theta$ that we will be able to calculate a nondegenerate estimate for $K$. This is thanks to applying the force off-axis, causing the finger to roll and creating non-zero elements for $\theta_y$ (our roll) and $\theta_z$ (our yaw). However, after solving for $K$, I found that I ended up with a bottom row consisting entirely of zeroes.
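To make the 3D fit concrete, here is a minimal sketch of estimating $K$ from stacked measurements by least squares. It assumes the per-sample torque and deflection vectors have already been assembled into $3\times n$ NumPy arrays; the synthetic data and variable names are illustrative only, not taken from my actual analysis code.

\begin{lstlisting}[language=Python,breaklines]
import numpy as np

# Hypothetical stacked data: each column is one sample (3 x n)
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.3, size=(3, 50))           # deflections (degrees)
K_true = np.array([[300.0,  20.0, 0.0],
                   [ 10.0, 250.0, 0.0],
                   [  0.0,   0.0, 0.0]])
tau = K_true @ theta + rng.normal(scale=5.0, size=(3, 50))   # torques (g*cm), noisy

# Solve tau = K @ theta in the least-squares sense.
# lstsq solves A x = b, so transpose: theta.T @ K.T = tau.T
K_T, *_ = np.linalg.lstsq(theta.T, tau.T, rcond=None)
K_hat = K_T.T

# Residuals of the fitted model, as defined above
tau_hat = K_hat @ theta
eps_tau = tau_hat - tau
print(np.round(K_hat, 1), np.sqrt(np.mean(eps_tau ** 2)))
\end{lstlisting}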
\section{Methods}

\subsection{Setup}

For the setup, I used a triple-beam balance as a way to apply force while retaining a vertical angle. I hot-glued a pumpkin awl to the bottom of the plate and put standard weights on top of the scale. This caused the awl to contact the surface of the finger, and the finger would then deflect.

\begin{figure}[H]
\centering
%\includegraphics[width=.3\textheight]{images/setup/webcam-edit.jpg}
\includegraphics[width=.5\textheight]{images/setup/closeup.jpg}
%\caption{Loaded tendon}
%\label{fig:figura1}
\end{figure}

\subsubsection{One-Dimensional Data}

In the 1D case, I simply placed a ruler behind the finger and used a webcam to take pictures of the finger; by mapping pixel counts to millimeters, I could then estimate the angular deflection of the finger. The fit from these very imprecise measurements proved linear enough that I then relied on the IMU going forward and skipped the step (which should be completed eventually) of characterizing the accuracy of the IMU against a gold standard, such as a stereo camera (the MicronTracker available in lab) or other methods (such as using Apriltags with a calibrated webcam).

\begin{figure}[htbp]
\centering
\subfloat[Zero measurement ]{%
\includegraphics[width=0.3\linewidth]{images/1d/zero.png}%
}\hfil % MUST be right above next figure
\subfloat[Weight (torque) applied ]{%
\includegraphics[width=0.3\linewidth]{images/1d/weighted.png}%
}\hfil
\caption{I drew lines through the vertical (with reference to the camera image) center of the finger before and after weight was applied. The angle between the two lines is what I recorded.}
\label{fig:myfig}
\end{figure}

\subsubsection{Three-Dimensional Data}

In order to move to the 3D case, I then created a grid of 15 points on the finger. For each of the 15 points, I applied several known amounts of force, with three samples at each force-point combination. Each sample consisted of a ``zero'' measurement pre-contact and a measurement post-contact. In post-processing, I then subtracted the two to obtain my final sensor data.

\begin{figure}[H]
\centering
\includegraphics[width=.15\textheight]{images/setup/grid2.jpg}
\caption{Grid of 15 points, marked with silver sharpie applied through holes poked in a piece of tape.}
\end{figure}

\subsubsection{Instrumentation}

I simply hot-glued an IMU to the underside of the finger. The IMU I used was the Bosch BNO055 sensor; a breakout was available from adafruit.com for \$35, which is relatively inexpensive. I chose this sensor not only for its built-in processing, but also because of the extensive beginner-friendly documentation developed by Adafruit for all of its products. It was recommended to me by a researcher %(person)
at Right Hand Robotics, which is a spinoff of this lab. I then connected it to an Arduino Duo, which relayed the sensor readings over serial to my laptop, running Ubuntu 16.04.

\begin{figure}[H]
\centering
\includegraphics[width=.3\textwidth]{images/setup/bno055.png}
\caption{Adafruit BNO055 9-axis orientation sensor. ``Bosch is the first company to get this right by taking a MEMS accelerometer, magnetometer and gyroscope and putting them on a single die with a high speed ARM Cortex-M0 based processor to digest all the sensor data, abstract the sensor fusion and real time requirements away, and spit out data you can use in quaternions, Euler angles or vectors.'' (quote and image source: adafruit.com)}
\end{figure}
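To give a flavor of the data path, the sketch below reads orientation data from the serial stream and forms the zero-subtracted deflection used in post-processing. It is a minimal, hypothetical illustration: it assumes the Arduino sketch prints comma-separated heading, roll, and pitch values in degrees and that the board enumerates as /dev/ttyACM0, neither of which is necessarily the exact format or port of my setup.

\begin{lstlisting}[language=Python,breaklines]
import serial  # pyserial

def read_euler(port):
    # Read one line of comma-separated Euler angles (degrees) from the Arduino.
    line = port.readline().decode('ascii', errors='ignore').strip()
    heading, roll, pitch = (float(v) for v in line.split(','))
    return heading, roll, pitch

port = serial.Serial('/dev/ttyACM0', 115200, timeout=1.0)

# "Zero" reading just before contact, then a reading after the weight is applied
# and the oscillations have settled.
zero = read_euler(port)
input('Apply the weight, wait for it to settle, then press Enter...')
loaded = read_euler(port)

# The deflection used in the analysis is the difference between the two readings.
d_heading, d_roll, d_pitch = (after - before for after, before in zip(loaded, zero))
print(d_heading, d_roll, d_pitch)
\end{lstlisting}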
\begin{figure}[H]
\centering
\includegraphics[width=.5\textwidth]{images/setup/IMU.jpg}
\caption{IMU attached to the bottom of the finger. I made sure that the wires hung loosely and did not interact with the tabletop or press up against the finger.}
%\label{fig:figura1}
\end{figure}

I also attached an Apriltag (which looks like a QR code) and a MicronTracker tag (which has a checkerboard-like cross), both of which can simply be printed out on paper, and calibrated the webcam for use with the Apriltag C++ script. Although I have collected MicronTracker and Apriltag data for comparison with the IMU deflection data, I have not yet had a chance to analyze it.

\begin{figure}[H]
\centering
\includegraphics[width=.6\textwidth]{images/setup/microntracker_template.jpg}
\caption{Apriltag on the left and MicronTracker ``Xpoint'' on the right}
%\label{fig:figura1}
\end{figure}

Unfortunately, the IMU accelerometer would consistently refuse to calibrate if the sensor did not start parallel to the ground. The significant slope of the finger relative to the clamp and the 90 degree angle of the 80/20 extrusion meant that I had to approximate a slanted 80/20 setup by increasing the degrees of freedom on the L brackets (removing t-nuts). This led to a lot of worry on my part about creakiness and variability in the data collection, as I could not guarantee angle consistency, although in retrospect this is almost completely mitigated by the fact that I zeroed before every reading.

Complications arose from the limited travel of the triple beam balance, which meant I could only measure small deflections (initially on the order of 4 mm; after discovering that I could take the end stop off the back edge of the triple beam balance, about 10 mm). At the tip of the finger, I could only apply less than a hundred grams of ``force'' (about 1 N), while 4 N is a normal amount of force applied by humans to grasp things. This remained a source of frustration throughout data collection and ate up many hours of adjusting the experimental setup. Another source of delay was that it took several attempts to find the MicronTracker manual (eventually found on the old desktop in lab), without which the MicronTracker interface was difficult to use.

In order to apply load to the tendon and change the stiffness of the finger, I decided on a simple setup where I hung a weight from the tendon.

\begin{figure}[H]
\centering
\subfloat[ Loading the tendon by hanging weights in a paper basket ]{%
\includegraphics[width=0.28\textwidth]{images/setup/loading.jpg}%
}\hfil % MUST be right above next figure
\subfloat[ Loading the end of the finger, to help bring the unloaded finger to parallel with the table. ]{%
\includegraphics[width=0.25\textwidth]{images/setup/leveling.jpg}%
}\hfil
\caption{Experimental setup for collecting data at two different stiffnesses (by loading the tendon, as would happen in order for the gripper to grasp an object)}
\end{figure}

Over time, I trended toward collecting coarser and coarser data as I grew confident that the measurements still provided enough data for me to see overall trends (such as the linearity of the data). I shifted from 2 g increments, to 20 g, and in the end 50 g increments. Separately, because loading the tendon changes the finger's resting angle, I had to change the setup so that the finger started at an angle and became parallel to the floor when load was applied to the tendon.
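For completeness, the sketch below shows how an applied weight at a known grid position can be turned into the torque vector used in the fits, via $\tau = r \times F$ and the assumption that the contact force is purely vertical. The coordinate bookkeeping follows the reference-frame listing above, with torques in gram-centimeters (gravity left folded into the ``grams'' of force); treat these conventions as my assumptions for illustration rather than a definitive description of the analysis pipeline.

\begin{lstlisting}[language=Python,breaklines]
import numpy as np

def torque_from_sample(x_cm, y_cm, mass_g):
    # Torque (g*cm) produced by a mass resting at grid position (x, y) on the finger.
    # Assumes the rotation axis is at the proximal joint (the x-y origin) and the
    # contact force is purely along -z, so tau only has x and y components.
    r = np.array([x_cm, y_cm, 0.0])      # moment arm, cm
    F = np.array([0.0, 0.0, -mass_g])    # "force" in grams, pointing down
    return np.cross(r, F)

# e.g. a 50 g weight at x = 3.1 cm, y = -0.1 cm
print(torque_from_sample(3.1, -0.1, 50.0))
\end{lstlisting}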
After my deep grievances with producing angles on the 80/20 setup, I decided to use legos with popsicle-stick props on one end in order to produce the correct angles.

Finally, I discovered at the very end that I could calibrate the scale to have a constant force offset. Thus, instead of collecting a zero measurement with the tip starting just above the surface, which meant that I had to wait for the oscillations to settle, I could set an offset which made the scale keel all the way up when the weight was removed. This made zero measurements 2-3x faster and reduced the cognitive load of remembering which measurement I was on, which in turn meant less data cleanup and less need to recollect data. This led to significant time savings.

\begin{figure}[htbp]
\centering
\subfloat[Scale calibration setup ]{%
\includegraphics[width=0.2\textwidth]{images/setup/scale_calibration.jpg}%
}\hfil % MUST be right above next figure
\subfloat[ Scale calibration graph, with a nominal offset of 5 g. I observed that the offset is not constant and increases with increasing load ]{%
\includegraphics[width=0.5\textwidth]{images/setup/scale_calibration_graph.png}%
}\hfil
\caption{Scales and more scales: calibrating the weight applied with the triple beam balance using an electronic scale }
\end{figure}

\section{Data Analysis}

\subsection{Loaded tendon data}

For the final dataset, I aimed to collect data quickly for two different finger tendon loads (which I'll refer to as relaxed and loaded, respectively). I collected data in 50 g intervals at two x locations and three y locations on the finger (a total of 6 positions). Specifically, these were positions 4, 5, 6 and 10, 11, 12: x = [3.1, 4.1] cm and y = [0.2, -0.1, -0.4] cm. (Note that I defined the center line of the finger to be y=0, but as there is a mold line there, the data is collected from 0.1 cm off of the center line.)

First, I conducted a quick sanity check. By plotting torque versus measured theta, I saw that the data follows a linear pattern, which is what I expected based on previous experiments. See \cref{fig:sanityload}.

\begin{figure}[tb!]
\centering
\subfloat[Relaxed tendon (no load) ]{%
\includegraphics[width=0.5\textheight]{images/stiff/torqtheta}%
}\hfil % MUST be right above next figure
\subfloat[Loaded tendon (with 900 g) ]{%
\includegraphics[width=0.5\textheight]{images/stiff/torqtheta_loaded}%
}\hfil
\caption{ Please note that there are two different y labels! The left graph is of Torque X vs Theta X (roll), while the right graph is of the Y counterparts (Torque Y vs Theta Y)}
\label{fig:sanityload}
\end{figure}

For the Theta Y (yaw) case, the relaxed tendon fit has a root mean square error (RMSE) of 14.4 gram-centimeters; in other words, if the average x position is $\sim$3.5 cm, then the average error (for estimating y-axis torque) is about four grams. On the other hand, the loaded tendon fit has a much higher RMSE of 80.4 g*cm, which, if valid, means that the average error is about 23 grams. This is a much larger error, and a bit unfortunate given that we care more about the loaded case in real grasping scenarios.

I then fit a line to the data, and double-checked my work again. The slope of this fit gives us our stiffness matrix $K$. I can check that the linear fit is sensible by comparing the newly created torque estimates to the original measurements. The datapoints do not fall precisely on a line because the $K$ matrix and the fit are three-dimensional.
In picking a single dimension to plot, the projection results in datapoints that are not linear in one dimension despite a linear fit overall. The plots in \cref{fig:sanity1} and \cref{fig:sanity2} reassure me that my code is working.

\begin{figure}[tb!]
\centering
\includegraphics[width=0.5\textheight]{images/stiff/torqsanity.png}
\caption{Relaxed tendon}
\label{fig:sanity1}
\end{figure}

\begin{figure}[tb!]
\centering
\includegraphics[width=0.5\textheight]{images/stiff/torqsanity_loaded.png}
\caption{Loaded tendon}
\label{fig:sanity2}
\end{figure}

Next I looked at the residuals. Based on the previous experiment (see \cref{init-data}), I was expecting to see a linear relationship in the residuals between Theta Y and Torque Y. However, I did not see any such relationship in the relaxed case, which should have most closely matched the data in the previous experiment: in \cref{fig:nopattern} there does not appear to be such a pattern, and the residuals look somewhat randomly noisy.

\begin{figure}[tb!]
\centering
\includegraphics[width=.9\textwidth]{images/stiff/GOODResid_vs_Theta.png}
\caption{Relaxed tendon}
\label{fig:nopattern}
\end{figure}

Confusingly, as shown in \cref{fig:yespattern}, a pattern did appear in the loaded case. Note also that the residuals in the loaded case are much larger. Further investigation will be required to figure out what is going on here.

\begin{figure}[tb!]
\centering
\includegraphics[width=.8\textwidth]{images/stiff/GOODResid_vs_Theta_loaded.png}
\caption{Loaded tendon. NOTE: Patterns in the thetaZ may be ignored, as it was not used in the fit, due to the assumption that the contact forceZ is always zero. }
\label{fig:yespattern}
\end{figure}

Perhaps the most satisfying graph that came out of this experiment compares the two stiffness matrices in the relaxed and loaded cases. See \cref{fig:comparison}.

\begin{figure}[H]
\centering
\includegraphics[width=.9\textwidth]{images/stiff/StiffnessComparison.png}
\caption{Loaded case shown in green, relaxed tendon shown in orange. This graph shows that the loaded tendon is stiffer (the slope is steeper), as we expect. Both models seem mostly linear, but it's possible that in the loaded case the fit is more exponential. Note that physically the model must pass through the origin, and I have graphed the actual fit used to calculate residuals in light green (albeit for a nonlinear fit). The dark green line is plotted only due to limitations in my plotting library.}
\label{fig:comparison}
\end{figure}

\subsection{Initial Experiment: 15 positions, with the relaxed tendon only}
\label{init-data}

The experiment directly prior to the final experiment consisted of collecting nearly 500 points, mostly in 20 g increments, from all 15 positions labelled on the finger (albeit a different finger, which did not have a tendon string in it -- I had to change fingers because of that). I started with a sanity check as before, and plotted the estimated vs. measured torques. See \cref{fig:nocolor}.

\begin{figure}[tb!]
\centering
\subfloat[ ]{%
\includegraphics[width=0.48\textwidth]{images/round1/Nocolor_torqX.png}%
}\hfil % MUST be right above next figure
\subfloat[ ]{%
\includegraphics[width=0.48\textwidth]{images/round1/Nocolor_torqY.png}%
}\hfil
\caption{Measured torques on the X axis, and estimated torques on the Y axis. The left graph shows the X axis (roll) measurements; note the magnitude is much smaller compared to the Y axis torque (yaw) measurements.
Please ignore the erroneous title on the right graph.}
\label{fig:nocolor}
\end{figure}

%\begin{figure}[H]
%\centering
%\includegraphics[width=1\textwidth]{images/round1/resids_Theta_coloredX.png}
%%\caption{Loaded tendon}
%%\label{fig:figura1}
%\end{figure}

The large residuals on the left were alarming. To take a closer look at what was going on, I colored the graph in by X and by Y positions. In \cref{fig:torqxcolors} we see that all of the datapoints far off the line come from xpos4 and xpos5, specifically from y = -0.2 cm. This does not seem to be a fluke, as these are not sequential measurements (positions 12 and 15 respectively). On the other hand, the yaw (torque Y) fit seemed very clean (\cref{fig:yfit}).

%\begin{figure}[H]
%\centering
%\subfloat[
%]{%
%\includegraphics[width=0.5\textwidth]{images/round1/TorqX_Colors_X.png}%
%}\hfil % MUST be right above next figure
%\subfloat[
%]{%
%\includegraphics[width=0.5\textwidth]{images/round1/TorqX_Colors_Y.png}%
%}\hfil
%\caption{Measured torques on the X axis, and estimated torques on the Y axis.
%The left graph shows the X axis (roll) measurements, note the magnitude is much smaller
%compared to the Y axis torque (yaw) measurements. Please ignore the erroneous title on right graph.}
%\end{figure}

\begin{figure}[tb!]
\centering
\subfloat[Torque measured vs estimated, colored in by X position ]{%
\includegraphics[width=0.8\textwidth]{images/round1/TorqX_Colors_X}%
}\hfil % MUST be right above next figure
\subfloat[ Torque measured vs estimated, colored in by Y position ]{%
\includegraphics[width=0.8\textwidth]{images/round1/TorqX_Colors_Y}%
}\hfil
\caption{Measured torques on the X axis, and estimated torques on the Y axis. The left graph shows the X axis (roll) measurements; note the magnitude is much smaller compared to the Y axis torque (yaw) measurements. Please ignore the erroneous title on the right graph.}
\label{fig:torqxcolors}
\end{figure}

\begin{figure}[tb!]
\centering
\includegraphics[width=0.8\textwidth]{images/round1/TorqY_Colors_Y.png}%
\caption{Compared to the torque X fit, the torque Y fit seems very clean.}
\label{fig:yfit}
\end{figure}

%(For the same graph colored
%in by X position, see \cref{fig:yX}

I then systematically plotted the residuals against the factors that might affect them: the torques, forces, and angular deflection amounts. For instance, it seems plausible that, as the amount of force applied increases, the residuals might increase. See \cref{fig:figura1}.

\begin{figure}[p!]
\centering
\includegraphics[width=0.95\textwidth]{images/round1/resids_Torq.png}
\caption{Torque estimate residuals versus torque estimates, colored by their X or Y positions. There does appear to be a suspicious trend between the residuals and the torque Y measurement.}
\label{fig:figura1}
\end{figure}

%\begin{figure}[H]
%\centering
%\includegraphics[width=.9\textwidth]{images/round1/resids_Torq_coloredY.png}
%%\caption{Loaded tendon}
%%\label{fig:figura1}
%\end{figure}

\begin{figure}[tb!]
\centering
\includegraphics[width=1\textwidth]{images/round1/resids_Force_coloredX.png}
\caption{The residuals generally seem somewhat random, except for a chunk of large residuals in the upper right graph of TorqX residuals vs force applied; I never resolved the cause of this. Additionally, note that the first X position, where the finger is least stiff, has a noticeably different set of residuals than the other positions.}
%\label{fig:figura1}
\end{figure}

For additional graphs, please see \ref{appendix-graphs}.
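As an illustration of this kind of residual diagnostic, the sketch below computes linear-fit residuals and plots them against deflection, colored by position. It assumes the per-sample data lives in a pandas DataFrame with illustrative column names (not the exact names from my scripts), and the synthetic numbers are placeholders.

\begin{lstlisting}[language=Python,breaklines]
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical tidy table: one row per sample
df = pd.DataFrame({
    'theta_y': np.random.randn(60) * 0.3,                        # yaw deflection (deg)
    'x_pos':   np.random.choice([2.6, 3.1, 3.6, 4.1, 4.6], 60),  # contact x position (cm)
})
k_hat = 300.0                                                    # from the linear fit (g*cm/deg)
df['tau_y'] = k_hat * df['theta_y'] + np.random.randn(60) * 15.0 # "measured" torque
df['tau_y_hat'] = k_hat * df['theta_y']                          # linear-model estimate
df['resid'] = df['tau_y_hat'] - df['tau_y']                      # epsilon_tau

# Residuals vs. deflection, colored by x position on the finger
for x, group in df.groupby('x_pos'):
    plt.scatter(group['theta_y'], group['resid'], label=f'x = {x} cm', s=12)
plt.axhline(0.0, color='k', lw=0.8)   # residuals should scatter around this line
plt.xlabel('theta Y (degrees)')
plt.ylabel('torque Y residual (g*cm)')
plt.legend()
plt.show()
\end{lstlisting}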
%\begin{figure}[H]
%\centering
%\includegraphics[width=.3\textwidth]{fig1.jpg}
%\caption{Exemplo de figura}
%\label{fig:figura1}
%\end{figure}

\section{Attempted fit improvements}

Finally, I briefly attempted to ``use machine learning or other methods to fit the residuals and improve the overall torque estimate''.
\begin{itemize}
\item For the first experiment (with 492 datapoints) the linear fit RMSE was 36.63.
\item With an additional term consisting of a linear fit on the residuals, where I allowed a y-intercept, RMSE = 26.65
\item With a random forest fit, RMSE = 22.56
\end{itemize}
However, without any cross validation, it's likely that the improvements are due to overfitting and will not generalize at all, so really this is just a sort of hand-waving exercise. I did briefly look into the generalization problem, both by calculating cross-validation statistics (now lost to time) and by a brief inspection involving taking out one of the xy positions from the training data, training on the remaining torque-theta data, and then predicting from the thetas in the dropped-position test set (see \cref{fig:general}).

I also took a brief look into fitting the residuals on the new dataset (with both loaded and relaxed tendon measurements).

\begin{table}[H]
\centering
\caption{Brief investigation of fitting residuals on the final experiment. ``Black box'' indicates fitting the model directly to the theta inputs and torque outputs. ``Corrected linear fit'' indicates taking a linear fit (with no y-intercept), calculating residuals, training another model on those residuals, and updating the model with that new residual estimate.}
\label{tab:residfit}
\begin{tabular}{l|c|c|c|c|c}
\hline
\hline
 & Linear & Polynomial & Gradient Boost & Adaboost & Random Forest\\
\hline
Black Box & 36.1 & 9.2 & 15.8 & 25.1 & 80.4 \\
Corrected Linear fit & 28.5 & 18.8 & 18.8 & 25.1 & 52.9 \\
\hline
\end{tabular}
\end{table}

I graphed the results in \cref{fig:residfit}. Note that this is a very preliminary exploratory analysis, since I only used one dimension of data for the residual calculations (the $K$ was still fit with all three dimensions). I found the graphs really interesting in displaying what each algorithm attempts to do. Of course, each of the machine learning algorithms can be tuned and should be tested to eliminate overfitting. Notably, as shown in \cref{tab:residfit}, the polynomial fit directly on the data (torque Y vs theta Y \textbf{in the loaded case}) was very good, and did better than even the models with additional residual terms.

\begin{figure}[p!]
\centering
\includegraphics[width=\textwidth]{images/residcorrect/layout_vertical.png}
\caption{Visualization of the black box approach (leftmost column), then a linear model with residual correction (middle column), and finally a look at how each algorithm is interacting with the residuals (rightmost column)}
\label{fig:residfit}
\end{figure}
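To make the ``corrected linear fit'' idea reproducible, here is a minimal sketch of the procedure using scikit-learn (which is an assumption about tooling, not necessarily the library I used): fit the linear stiffness model without an intercept, fit a second model to its residuals, and score the combination with cross-validation so that any apparent improvement is not just overfitting. The data is synthetic and the names are placeholders.

\begin{lstlisting}[language=Python,breaklines]
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
theta = rng.normal(scale=0.3, size=(200, 3))   # deflections (deg), one row per sample
tau_y = 300.0 * theta[:, 1] + 20.0 * theta[:, 1] ** 2 + rng.normal(scale=10.0, size=200)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(theta):
    # Step 1: linear stiffness fit with no intercept (the physical model)
    lin = LinearRegression(fit_intercept=False).fit(theta[train], tau_y[train])
    resid = tau_y[train] - lin.predict(theta[train])
    # Step 2: fit a black-box model to the residuals
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(theta[train], resid)
    # Corrected prediction, evaluated only on held-out data
    pred = lin.predict(theta[test]) + rf.predict(theta[test])
    scores.append(rmse(pred, tau_y[test]))

print(np.mean(scores))
\end{lstlisting}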
\section{Sensor Design}

For datasheets, please see \ref{appendix-sensors}.

\begin{figure}[htbp]
\centering
\subfloat[Mold for casting (I found some leftover Dragonskin 20 in lab) ]{%
\includegraphics[width=0.2\linewidth]{images/sensor/sensor.jpg}%
}\hfil % MUST be right above next figure
\subfloat[After degassing in a vacuum chamber and curing ]{%
\includegraphics[width=0.2\linewidth]{images/sensor/sensor2.jpg}%
}\hfil
\subfloat[Sensor readings as plotted in Python on my laptop ]{%
\includegraphics[width=0.5\textwidth]{images/sensor/analog_plot.png}%
\label{fig:sensor}%
}\hfil
\caption{Sensor prototype.}
\end{figure}

I produced Arduino and Python code that allowed me to view a real-time plot of the sensor data. I was able to read data from three sensors, plot data from two in real time (more is possible, of course), and test the addition of some elastomer on top of the sensor. See \cref{fig:sensor}. I also integrated the Arduino with an RGB LED strip, where if you pressed on the sensor, more LEDs would light up.

\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{images/sensor/poke.jpg}
\caption{LED setup. The left (blue) PCB is an Adafruit breakout available for \$10. The generic breakout can be had for less than \$1 (it has fewer components, e.g. it does not support 5 V input).}
%\label{fig:figura1}
\end{figure}

The sensor showed a large amount of hysteresis, which should be investigated and addressed. Note that the setup required the use of a separate power supply; otherwise, the Arduino would refuse to talk to the computer (for real-time plotting) when the LED strip drew load on the 5 V line. Evaluation of the sensor is ongoing. It has a built-in temperature sensor, which can be used to compensate for temperature changes (for instance, if you press on it with your finger, the heat compounds the reading and also results in a noticeable tail as the heat dissipates relatively slowly).

\section{Knowledge}

In the process of this research, I learned about how accelerometers are calibrated, which I found very interesting (although ultimately, thanks to the built-in sensor fusion and calibration routines of the BNO055 sensor, I did not need to write my own calibration routines). The common presence of the gravity vector across measurements at different angles makes this calibration possible.

I learned the most about exploratory statistical data analysis. Where previously I had not even heard of residuals (only scalar measures such as mean-square error), I learned how residuals can be plotted against a horizontal baseline, around which they should be randomly distributed if the physical model is correct and the remaining error is random sensor noise.

Finally, I learned more about the I2C and SPI protocols for reading from (and writing to) multiple sensors. I began work on creating a custom printed circuit board for a 3x4 array of sensors, and decided to switch to and learn a newcomer in the PCB design space, which has CERN support: the free and open-source KiCad tool.

\section{Conclusions}

The stiffness of the finger was remarkably linear, which meant that machine learning and higher-order terms had little to contribute to improving the fit between measured theta deflections and force estimates. Thus, the IMU appears to have much to contribute to estimating forces applied to the finger. However, caution is necessary, since in actual grasping, as Qian demonstrated, the axis I measured becomes the stiffest axis when the tendon is pulled taut to grasp an object. It bends relatively little compared to the other axes.

%However, I was only able to apply relatively small loads to the finger. It's possible that with
%large loads, the stiffness becomes less linear, in which case linear fit residuals may become more
%prominent or structured. If this were the case, machine learning approaches might have more to
%contribute to the force estimates. As it were, the data I collected did not benefit much from the
%addition of non-linear models fitted with machine learning techniques such as random forests.

%Most crucially, Qian demonstrated to me the process by which grasps fail. By observing the grasping,
%I discovered that when the tendon was pulled taught (as needed to grasp anything), the y-axis was now
%now the stiffest axis! Thus, characterizing the finger with no tendon load applied and primarily in
%the y-axis direction, while a great learning experience, does not generalize at all to practice and becomes a
%toy example.

Qian also showed me how much the finger relied on the mechanical intelligence of the mechanism (as opposed to fully actuated parallel grippers, commonly used in computer science labs). I saw that complete failures tended to occur because the finger had failed to detect contact when contact had already occurred. This suggests that better instrumentation of the finger is critical. The IMU by itself can only suggest contact force, and the tactile sensors are needed to determine contact location. To evaluate the contribution of the IMU to force estimates and not just torque estimates, we must build better location sensors than the current 1x4 array.

%Referências bibliográficas devem ser utilizadas dentro de um estilo uniforme e não ambíguo. A SBC sugere os seguintes formatos para referências: \cite{knuth:84}, \cite{boulic:91}, e \cite{smith:99}.
%\bibliographystyle{sbc}
%\bibliography{sbc-template}

\subsection{Future Work}

The MicronTracker should be used to evaluate the IMU. Evaluating the use of Apriltags against the MicronTracker would help reproducibility by others (or future grad students), as webcams are cheap and readily available, and documentation for Apriltags is freely available online.

A full sensor array on a custom PCB, manufactured into a finger unit, should be developed and characterized. By collecting data from it and then fusing this with the IMU data, we could physically evaluate the grasp stability predictions, producing an important tie between theory and practice to ensure the two complement each other. The importance of this has been mentioned above.

\section{Thanks}

Thanks to Qian, who fielded many of my naive questions about grasping and had to deal with my relative over-enthusiasm; Buse, whom I enjoyed commiserating with; Alperen, for teaching me how to be safe in lab; Yash, for many admirably terrible jokes; James Weaver, for very late-night encouragements; and my friends at MD309 and my fellow first-year SEAS PhD students. And thanks to Prof. Robert Howe for patiently giving me a fairly gentle introduction to the world of research, and for offering to advise me.

\newpage
\clearpage
\vspace*{\fill}
\begin{center}
\begin{minipage}{.6\textwidth}
\Huge
\centering{Appendix}
\normalsize
\end{minipage}
\end{center}
\vfill % equivalent to \vspace{\fill}
\clearpage

\appendix

\section{Apriltags}

I decided to investigate Apriltags, which helped me refresh my knowledge of C++. I struggled to find an appropriate ROS driver, and ultimately was able to get the C++ working (despite some issues with the OpenCV version on my laptop) with the help of Patrick from Prof. Kuindersma's lab.
Example data output:

\begin{ttlisting}
2 tags detected:
 Id: 1 (Hamming: 0)
   distance=0.079741m, x=0.000532, y=0.006102, z=-1.487915, yaw=-0.134615, pitch=0.071828, roll=-0.041146
 Id: 7 (Hamming: 0)
   distance=0.079741m, x=0.000532, y=0.006102, z=-1.487915, yaw=-0.134615, pitch=0.071828, roll=-0.041146
14.9312 fps
\end{ttlisting}

My issues with the OpenCV version were solved simply by specifying which OpenCV version the compiler should use, as I had two versions on my laptop.

\begin{ttlisting}
nrw@earlgrey:~/projects/apriltags$ vi CMakeLists.txt
(line 14) find_package(OpenCV 2.4.9.1 EXACT REQUIRED)
\end{ttlisting}

\begin{figure}[H]
\centering
\includegraphics[width=.4\textwidth]{images/april/applications_april.png}
\caption{Applications of Apriltags. Figure from: Wang, John, and Edwin Olson. ``AprilTag 2: Efficient and robust fiducial detection.'' IROS 2016.}
%\label{fig:figura1}
\end{figure}

To calibrate the camera, I was able to find a (very!) well-documented Python script online that allowed me to calibrate the camera simply by taking a few pictures of a checkerboard.

\begin{ttlisting}[language=C++,breaklines]
(venv) nrw@earlgrey:~/projects/video2calibration$ ./calibrate.py example_input/chessboard.avi calibration.yaml --debug-dir out
\end{ttlisting}

\begin{figure}[H]
\centering
\includegraphics[width=.4\textwidth]{images/april/webcam_calibration.png}
%\caption{Loaded tendon}
%\label{fig:figura1}
\end{figure}

Using the output of the calibration script, I could then set the values in the Apriltag example code to give me (fairly accurate) estimates of the xyz location of the finger in real-world units.

\begin{ttlisting}[language=C++,breaklines]
nrw@earlgrey:~/projects/video2calibration$ ./calibrate.py ~/Videos/Webcam/2018-03-26-112657.webm calibration.yaml --debug-dir out
Performing calibration...
RMS: 0.442700776066
camera matrix:
[[ 666.78668352    0.          343.73827809]
 [   0.          665.79103853  227.19081685]
 [   0.            0.            1.        ]]
distortion coefficients:  [  6.06301194e-02  -1.94620209e-02   1.45555284e-04   1.24410189e-03  -2.51439333e-01]
\end{ttlisting}

I did not use the distortion coefficients in the Apriltag measurements. These coefficients, I believe, correct for fisheye and higher-order distortions in the camera images.

\begin{lstlisting}[language=C++,breaklines]
public:
  // default constructor
  Demo() :
    // default settings, most can be modified through command line options (see below)
    [...excerpted section...]
    m_width(640),
    m_height(480),
    m_tagSize(0.00944), // in meters
    m_fx(667), // in pixels
    m_fy(666), //
    m_px(344), // principal point
    m_py(227),
\end{lstlisting}

\newpage
\section{Project Management}

In order to improve time estimates in the future, I find it helpful to look back at the original timeline.

\begin{figure}[H]
\centering
\includegraphics[width=.9\textwidth]{images/misc/timeline.png}
%\caption{Loaded tendon}
%\label{fig:figura1}
\end{figure}

From this chart, we can see that I wasn't able to get much traction on my milestones in the first month. Then I spent half a month figuring out how to set up and analyze an experiment (for instance, understanding how to analyze data and plot residuals, as well as the point of fitting residuals). Finally, I performed an abbreviated version of the first six weeks squeezed into the last six weeks, and turned in the last two milestones more than a week late. Although this seems dire in terms of project management, I did spend a lot of time on items related to research but not listed directly as milestones. I spent a lot of time learning about the grasping field.
Additionally, designing and prototyping a new sensor was not in the milestones. As a result, taking a longer-term view, I personally do not feel as disappointed as I might be, given my personal goals such as making progress toward understanding the research process and learning concrete skills.

\section{Outside Resources}

Here is a well-formatted figure from Wikipedia describing the 3D stiffness tensor.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{images/misc/stiffness_tensor.png}
%\caption{Loaded tendon}
%\label{fig:figura1}
\end{figure}

\newpage
\section{Hexo: Online documentation}

I documented my work on a blog, \url{http://orangenarwhals.github.io/}, using the Hexo flatfile content management system. I chose this because it allowed me to host for free on GitHub, allowed for easy version control of content, and also had an admin plugin that allowed me to drag-and-drop images, which is crucial for documenting experiments where I have a lot of graphs. I was also able to get LaTeX to work on the site so that I could have equations rendered there.

\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{images/misc/blog.png}
%\caption{Loaded tendon}
%\label{fig:figura1}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{images/misc/blog_latex.png}
\caption{Getting MathJax (a JavaScript library that renders LaTeX equations in the browser) to work was a major source of frustration.}
%\label{fig:figura1}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{images/misc/hexo_editor.jpg}
\caption{Hexo uses markdown, which is a nice simple markup language (although it gets annoying really fast if you violate the KISS principle and try to do fancy things like include LaTeX). It has a very nice GUI for editing posts. }
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{images/misc/hexo_image.jpg}
\caption{The particular feature I was looking for was a drag-and-drop image manager. Although I could not get the drag-and-drop plugin working, the default Hexo Admin editor allows you to copy-paste (ctrl-c ctrl-v) pictures into the editor and name them appropriately, which I considered good enough (and much better than the normal flat-file editor requirement of manually typing in the file names and locations).}
\end{figure}

In particular, since I deal with many pictures for a given post, Hexo has a built-in configuration option for a folder per post, which allowed for relatively easy image management.

\begin{lstlisting}
~$ vi _config.yml
post_asset_folder: true # REQUIRED even if you make the folder by hand!!
\end{lstlisting}

I will be using Hexo in the future for my own projects (e.g. a portfolio website).

\section{Extra Graphs}
\label{appendix-graphs}

Here are a few graphs I didn't include above for the sake of space, as they do not add much to the conversation.

\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{images/round1/resids_Torq_coloredY}
\caption{A graph of the torque (linear fit) residuals vs. the measured torques. The datapoints are shaded in by their Y position}
%\label{fig:figura1}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{images/round1/resids_Theta_coloredY.png}
\caption{A graph of the torque (linear fit) residuals vs. the measured angular deflections. The datapoints are shaded in by their Y position}
%\label{fig:figura1}
\end{figure}

\begin{figure}[tb!]
\centering
\includegraphics[width=0.9\textwidth]{images/round1/TorqY_Colors_X.png}%
\caption{From the experiment with all 15 positions, we see that the Torque Y fit (measured vs estimated) is very clean.}
\label{fig:yX}
\end{figure}

\begin{figure}[tb!]
\centering
\includegraphics[width=1\textwidth]{images/round1/generalize_residfit.png}
\caption{I accidentally tested the generalizability of the residual fit, instead of the overall torque fit after applying residual corrections, so this graph is relegated to the appendix. In faded blue we have the training dataset, and in green we have the data we removed to serve as the test set. The faded blue and the green together comprise all 492 datapoints I measured. In red, we have the predictions made by the linear fit (trained on the training data, predicting the test data), and the same in purple for the random forest fit. The residuals seem somewhat more polynomial than linear, so the random forest appears to do a bit better. }
\label{fig:general}
\end{figure}

\newpage
\section{Additional Setup Pictures}
\label{appendix-setup}

\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{images/setup/setup_closeup_tape.jpg}
\caption{Gratuitous amounts of tape were used as strain relief, to ensure that randomly bumping into or accidentally pulling on any of the power or sensor lines would not destroy the equipment setup. Next time, I would just buy some white masking tape right away. }
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{images/setup/setup_overview.jpg}
\caption{An overview of the entire setup. This image is not included above as it's extremely messy and confusing, but it shows the relative placement of the MicronTracker, the webcam, the IMU, the Arduino, and the triple beam balance. }
\end{figure}

\section{Sensor Datasheets}
\label{appendix-sensors}

\begin{figure}[htbp]
\centering
\subfloat[Note the single I2C address given, indicating this sensor has one fixed I2C address. However, it has a ``RST'' line which can be used to turn off I2C communications for a chip, in effect acting as a chip select. ]{%
\includegraphics[width=0.8\linewidth]{images/sensor/MPL_i2c_address.png}%
}\hfil % MUST be right above next figure
\subfloat[Details about the BMP280 sensor, which is half the size of the MPL115A2 sensor. ]{%
\includegraphics[width=0.4\linewidth]{images/sensor/new_datasheet.png}%
}\hfil
\subfloat[An example of how to find out whether the I2C addresses are fixed for a given chip. Here we see that the BMP280 has 1 bit (2 addresses) to choose from. ]{%
\includegraphics[width=0.5\textwidth]{images/sensor/BMP280_i2c_address.png}%
}\hfil
\caption{Details about the two sensors: the MPL115A2 on the current Takktile fingers in lab, and the new sensor I'm evaluating, the BMP280, which supports both the I2C and SPI protocols.}
\label{fig:i2c}
\end{figure}

With regards to the BMP280 sensors: I am using the Adafruit libraries with the generic sensors. Both breakouts have schematics available online.

\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{images/sensor/generic_breakout_schematic.jpg}
\caption{Schematic for the generic breakout. Note that aside from the sensor there are just 2 resistors, 2 capacitors, and some header pins. Source: rainway87 seller on eBay}
%\label{fig:figura1}
\end{figure}

\begin{sidewaysfigure}
\centering
\includegraphics[width=\textwidth]{images/sensor/adafruit_breakout_schematic.png}
\caption{Schematic for Adafruit breakout.
Source: adafruit.com} %\label{fig:figura1} \end{sidewaysfigure} % References %% %% Following citation commands can be used in the body text: %% Usage of \cite is as follows: %% \cite{key} ==>> [#] %% \cite[chap. 2]{key} ==>> [#, chap. 2] %% %% References with bibTeX database: %\bibliographystyle{elsarticle-num} %% \bibliographystyle{elsarticle-harv} %% \bibliographystyle{elsarticle-num-names} %% \bibliographystyle{model1a-num-names} %% \bibliographystyle{model1b-num-names} %% \bibliographystyle{model1c-num-names} %% \bibliographystyle{model1-num-names} %% \bibliographystyle{model2-names} %% \bibliographystyle{model3a-num-names} %% \bibliographystyle{model3-num-names} %% \bibliographystyle{model4-names} %% \bibliographystyle{model5-names} %% \bibliographystyle{model6-num-names} %\bibliography{sample} %% \newpage \clearpage \vspace*{\fill} \begin{center} \begin{minipage}{.8\textwidth} \huge \centering{Thanks for watching. Tune in next semester for another episode of \\ ~\\ \textbf{\scshape{Grad School: \\ Research in Progress!}}} \end{minipage} \end{center} \vfill % equivalent to \vspace{\fill} \clearpage \end{document}
{ "alphanum_fraction": 0.74663517, "avg_line_length": 41.4646924829, "ext": "tex", "hexsha": "016851395629250d9a94995d94fec5d72fae90df", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5ec4e23a2d808272ec541df511f4cbf60bfa4cd9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nouyang/howe299r", "max_forks_repo_path": "Notes/final_report/Elsevier/final_report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5ec4e23a2d808272ec541df511f4cbf60bfa4cd9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nouyang/howe299r", "max_issues_repo_path": "Notes/final_report/Elsevier/final_report.tex", "max_line_length": 267, "max_stars_count": null, "max_stars_repo_head_hexsha": "5ec4e23a2d808272ec541df511f4cbf60bfa4cd9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nouyang/howe299r", "max_stars_repo_path": "Notes/final_report/Elsevier/final_report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14224, "size": 54609 }
\documentclass[a4paper]{article} % set paper size
\usepackage[utf8]{inputenc}
\usepackage{url}
\usepackage[top=2.0cm, bottom=2.0cm, left=2.54cm, right=2.54cm]{geometry} % set margins
\usepackage{amsfonts} % for set names
\usepackage{amsmath} % for equation systems
\usepackage{amsthm} % for theorem blocks
\usepackage{fixltx2e} % for subscripts
\usepackage{fancyhdr} % for footer/header modification
\usepackage{xcolor}
\usepackage{graphicx,float} % for image insertion
\usepackage{multicol} % for text in two columns
\usepackage{wrapfig} % figure wrapping

\pagestyle{fancyplain} % for footer modification on all pages
\fancyhf{}
%\renewcommand{\headrulewidth}{0pt} % remove decorative line
\fancyhead[L]{Pauline Maury Laribiere\\ Alexandre Devienne}
\fancyhead[R]{MT/EL-BA2 EPFL \\ \today}
\fancyhead[C]{\textbf{Spring programming project: Microcosmos}}
\fancyfoot[R]{\thepage\ of \pageref{lastpage}}

\begin{document}
\begin{multicols*}{2}

% =====================================================================
\section{Program's architecture}

\begin{figure}[H]
\centering
\includegraphics[width=0.48\textwidth]{architecture.jpg}
\caption{Final architecture}
\end{figure}

Compared to the suggested architecture, we added two low-level modules and two dependencies:
\begin{description}
\item[Module: geometry.] Low-level representation of points and vectors, and functions to manipulate them (e.g. distance between points, norm of a vector).
%This module is included in the modules: \emph{main}, \emph{sim}, \emph{generateur}, \emph{particule}, \emph{trou\_noir}, \emph{graphic}.

\item[Module: linked\_list.] Generic doubly linked list data structure. Some basic dictionary operations are implemented, such as: first, next, add, delete, search. More complex operations also exist, such as sorting, and calling a function with two elements as arguments for every possible 2-combination of elements (to work out the forces between every pair of particles, for example).
%This module is included in the modules: \emph{sim}, \emph{generateur}, \emph{particule}, \emph{trou\_noir}.

\item[Dependency: trou\_noir $\rightarrow$ particule.] This allows \emph{trou\_noir} to call \texttt{part\_applyForceField( void (*forceFieldAt) (POINT p) )} to apply the force generated by all black holes to all particles. Plus, to destroy particles too close to a black hole, it can call \texttt{int part\_closestPartOn( POINT p )} to retrieve a particle's ID and then destroy it with \texttt{part\_delete( int partID )}.

\item[Dependency: generateur $\rightarrow$ particule.] This allows \emph{generateur} to delegate the validation of a generator's arguments to \emph{particule} (using \texttt{part\_validParams}) and to create new particles with a simple call to \texttt{part\_create} (but it first calls \texttt{part\_closestPartOn} to check whether a particle already covers the generator).
\end{description}

Working with these modules eases our work by reducing the amount of redundant code and delegating complex generic tasks (such as adding an element to a linked list or working out a linear interpolation) to specialized modules. It also eased testing, as we could validate the correctness of these modules before using them in the project.

% =====================================================================
\section{Data structures}

To store the data of particles, generators and black holes, we used arrays of size MAX\_RENDU1 for the \emph{rendu 1}. But for the \emph{rendu 2} we needed a way to store large numbers of entities, without any knowledge of the maximum number.
We first considered arrays whose size doubled when they were full, but this had two drawbacks: spikes in calculation time (when the array is re-allocated) and the possibility that half the allocated memory isn't used (possibly even more if some elements are deleted).

To avoid these drawbacks we had little choice but to use \emph{linked lists}. Because we were going to use linked lists for the particles, generators and black holes, to avoid redundant code we decided to write a generic doubly linked list module. This module handles all the hassle of dynamically creating and deleting elements.
%This even allowed us to code some complex behaviour in the module that we could later use with a simple function call
%(ex: search, sort or even applying a function to all elements), without any \texttt{while} or \texttt{for} loop to write.

% =====================================================================
\section{Functions from \emph{sim.c} called by \emph{main.cpp}}

Here are the different functions of \emph{sim.c} called in \emph{main.cpp}:
\begin{itemize}
\item Beginning and saving a simulation:
 \begin{itemize}
 \item \texttt{sim\_openFile}: begins a new simulation from a given file and mode
 \item \texttt{sim\_save}: saves the current state of the simulation to a file
 \end{itemize}
\item Getting information from the simulation:
 \begin{itemize}
 \item \texttt{sim\_nbEntities}: gets the number of each entity (to display in GLUI)
 \item \texttt{sim\_extremPoints}: returns the outermost points of the simulation, used to calculate the frame of the window to open
 \end{itemize}
\item Handling the advancement of the simulation:
 \begin{itemize}
 \item \texttt{sim\_display}: displays the current state of the simulation (with all the entities)
 \item \texttt{sim\_next\_step}: calculates the simulation's next step
 \end{itemize}
\item Handling the user's inputs:
 \begin{itemize}
 \item \texttt{sim\_select}: enables the user to select an entity
 \item \texttt{sim\_deleteSelection}: deletes the selected entity
 \item \texttt{sim\_deselect}: deselects the current selection
 \end{itemize}
\item Exiting a simulation:
 \begin{itemize}
 \item \texttt{sim\_clean}: frees all the memory allocated to all the simulation sub-modules
 \end{itemize}
\end{itemize}

\label{lastpage}
\end{multicols*}
\end{document}
{ "alphanum_fraction": 0.7353531972, "avg_line_length": 44.9172932331, "ext": "tex", "hexsha": "b629ac55314c1b2714490710a07685e323b42415", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5af57a23538e9b46b3c1e3f7e8fb82aaa0b98c39", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "grypoB/Microcosmos", "max_forks_repo_path": "report/intermediary-report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5af57a23538e9b46b3c1e3f7e8fb82aaa0b98c39", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "grypoB/Microcosmos", "max_issues_repo_path": "report/intermediary-report.tex", "max_line_length": 146, "max_stars_count": null, "max_stars_repo_head_hexsha": "5af57a23538e9b46b3c1e3f7e8fb82aaa0b98c39", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "grypoB/Microcosmos", "max_stars_repo_path": "report/intermediary-report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1516, "size": 5974 }