\section{Cancer} \label{intro-sec:cancer} For most of human history, the origin of cancer as a disease was a mystery, and a multitude of theories, starting in ancient Egypt, were developed; they ranged from curses and chemical imbalances to parasites and trauma. In this section I outline both the history of cancer as a disease and of its treatments, from ancient times up to the present. While the earliest steps were very broad, because the underlying biology was not understood, it is curious how often later generations with more knowledge arrived at worse conclusions and theories than had already been reached thousands of years earlier. Around 3000\,BC the Egyptians described bulging tumours of the breast as an incurable disease \cite{Breasted1930}; even then treatments such as ointments, resection, cauterisation and salting of the affected areas were in use, all of which persisted until the 19$^{th}$ century \cite{Hajdu2004}. This papyrus is considered the oldest evidence of cancer in humans. When the ancient Greeks laid the foundation of modern medicine with Hippocrates, the first hypothesis of natural causes of cancer was formulated and the terms ``cancer'' and ``carcinoma'' were coined. An abundance and accumulation of ``black bile'' in the body was thought to be the cause of cancers; the treatment, however, remained the same as before, with resection and lotions \cite{Chadwick1950}. Following Hippocrates, the Roman physician Celsus advanced the understanding of cancer significantly by describing metastatic relapse of treated breast cancer in the neighbouring armpit and even spread to distant organs. He was also aware that patient outcomes were better if tumours were removed early and aggressively \cite{Celsus1939}. After the fall of the western Roman Empire, the Middle East became well known for its strong advances towards modern medicine, and Aetius, court physician to the Emperor of Constantinople, performed the first successful total mastectomy and was generally an advocate of the complete excision of tumours \cite{Browne2012}. Sadly, while both the understanding of cancer and its treatment were steadily improving, the Pope prohibited bloodshed and surgery, which slowed progress, especially once autopsies were also forbidden a hundred years later, in 1305. Illegal experiments were nevertheless still conducted, and the classification that is still used today was begun by Henri de Mondeville, who classified tumours by their anatomical site \cite{Pilcher1895}. After the end of the ``dark ages'', the wide availability of older Greek and Roman medical works, made possible by the invention of the printing press, led to the re-emergence of chemical ointments and lotions applied to cancer lesions. Paracelsus, who promoted the use of chemicals for the treatment of cancer while himself warning that they are poisonous in the wrong concentration, laid the groundwork for modern chemotherapy \cite{PHT1562}. Once the dissection of corpses was no longer banned by the church, more and more ``hidden'' causes of death were found post mortem, often cancers of internal organs such as the brain; the distinction between malignant and benign tumours was another major breakthrough. This led to the understanding that benign tumours might turn malignant over time, and many physicians suggested removal of benign growths \cite{Severino1632}.
Because of the familial disposition to cancer, especially breast cancer, two physicians independently (Zacutus Lusitani and Nicholas Tulp) came to the conclusion that cancer is contagious and proposed the isolation of patients \cite{Lusitani1649,Tulpii1652}, which shows that, while the treatment of cancer was improving steadily, the origin of the disease was still a mystery. It took until 1700 for \citeauthor{DeshaiesGendron1701} to describe cancer as a transformation of a normal body part that continues to grow without control; although he was aware of metastatic disease, he suggested no treatment, as he did not believe cancer to be curable with drugs \cite{DeshaiesGendron1701}. Another ground-breaking work published in the same year was a collection of almost three thousand autopsy reports and their clinical histories, which contains a number of detailed cancer cases, including brain, head and neck, lung, breast, esophagus, stomach, colon, liver, pancreas, kidney, uterus, cervix, bladder, and prostate. Many of the terms used by Theophilus Boneti to describe these cases are still in use, and the work itself was the first step towards tumour pathology \cite{Hajdu2010a}. However, it took almost 150 years after the theory that cancer was contagious for \textcite{Nooth1804} to conduct experiments in which he tried to infect himself with cancer pieces resected from another person, which showed that cancers are generally not infectious. With the invention and subsequently common use of the microscope in pathology, more and more deaths were identified as caused by cancer; examples are the connection of a chronic cough to lung cancer and of swollen joints to sarcoma \cite{Etmueller2018}. With increasing numbers of autopsies of cancer patients, surgeons like \textcite{Heister1747} found that breast cancer resection needs to include the breast, the axillary lymph nodes and the pectoralis major muscle, a procedure that became known as the Halsted radical mastectomy and was the standard of care for a long time. While the (mostly surgical) treatment of cancers was becoming more and more advanced, the origin and cause of cancer was still very much debated. As we now know that there is a manifold of causes, it is perhaps not surprising that this took longer, but by the middle of the 18$^{th}$ century chronic inflammation was hypothesised as a cause of cancer initiation \cite{Hajdu2010b}. The next big step was taken in 1838, when the concept of cells as the fundamental building blocks of organisms was published. In the following years many cancers were dissected and analysed microscopically. This revealed that tumour cells look vastly different from normal cells, and it was thought that their morphological features could serve to identify their fate; \citeauthor{Mueller1838} became known for defining the cellular origin of benign and malignant tumours \cite{Mueller1838}. While he described tumours as collections of abnormal cells with stroma, he thought that cancer arose from newly generated cells of a diseased organ and that the underlying cause was an ``amorphous embryonal blastema''. With this foundation, over the next hundred years many advances were made in the morphology of different tumours, and many previously undetected ones, like leukemia, were found and extensively characterised.
However, even then there were researchers who understood how vast the heterogeneity of cancers is: \citeauthor{Bennett1849}, while convinced that the microscope would become a mandatory instrument for diagnosing cancer, argued that much more effort to collect and study specimens was needed to obtain a complete picture \cite{Bennett1849}. As many shared the view of \citeauthor{Bennett1849}, the second half of the 19$^{th}$ century was a rich source of surgical pathology and of oncology literature in general. Most outstanding was Rudolf Virchow's ``Die krankhaften Geschw\"ulste'' \cite{Virchow1863}, a first landmark book on the classification of cancers, and still a well of knowledge. From his work the terms ``hyperplasia'' and ``metaplasia'' were derived as pre-cancerous states of cells. He was also one of the first to hypothesise the presence of growth-stimulating substances around cancers, which leads to their uncontrolled growth. He was, moreover, the first to again oppose the ``amorphous embryonal blastema'' theory, being convinced instead that tumour cells are just abnormally changed cells (his ``chronic irritation theory''), and he held that metastases are seeded by the original lesion (as in the melanoma shown in \autoref{fig:drawing}); beyond this, he had a major scientific impact in a number of other fields such as parasitology, forensics and anthropology\footnote{It may be surprising that he was strongly opposed to Darwin's theory of evolution. In his own words: ``The intermediate form is unimaginable save in a dream... We cannot teach or consent that it is an achievement that man descended from the ape or other animal.''}. \begin{figure}[!ht] \centering \includegraphics[width=.95\linewidth]{Figures/drawingMelaMeta} \caption[Drawing of central nervous system metastasis]{Drawing of central nervous system metastasis from page 121, Volume 2 of ``Die krankhaften Geschw\"ulste'' \protect\textcite{Virchow1863}; translated original caption: Fig. 128: Multiple melanoma of the Pia mater basilaris, most pronounced around the Medulla oblongata, the Pons, the Fossa Sylvii, Fissura longit (sample No. 256a from 1858); Fig. 129: Lower end of spinal cord of Fig. 128 with multiple melanoma of the soft skin with node like growths at the nerve roots (sample No. 256b from 1858)}\label{fig:drawing} \end{figure} While the search for possible cancer-causing substances started to attract more and more interest, only one real cause was thought to have been found, in the ore of the central European mountains, where miners had a higher prevalence of lung cancer. This was, however, later found to be caused by radioactive material and not, as expected, by the inhaled mineral dust. Similarly, many parasites and bacteria were proposed as potential causes of cancer, but none of these findings could be proven. While all these steps were coming closer together in time up until the beginning of the 20$^{th}$ century, they were still fairly minor in contrast to the high-speed and high-throughput results of the last hundred years. Although Wilhelm R\"ontgen technically discovered X-rays just before the turn of the century \cite{Roentgen1898}, their impact on both the body and cancer only became clear a few years into the last century \cite{Frieben1902,Scholtz1902}. However, just as X-rays can cause cancer, researchers quickly found that they can also treat it, and thus the field of radiotherapy was created.
This was the first major change in cancer treatment in around five thousand years, and one that could also treat inoperable cancers. The next invention that I want to highlight among the vast number of advances made at the advent of the 20$^{th}$ century is the cutting-needle aspiration syringe, which allowed a non-traumatic biopsy of internal organs for microscopic study. This made it possible to avoid exploratory surgery and instead plan the necessary operations. The next major step in the treatment of cancers came in the form of chemotherapy, when \textcite{Ehrlich1909} published his work ``Beitr\"age zur experimentellen Pathologie und Chemotherapie'', in which he injected animals with different toxins in order to destroy cancer cells. However, it took another 30 years, until after the Second World War, for the discovery that a chemical designed for warfare also had a potent anti-tumour effect. In the meantime, the first long-term tissue cultures of animal cancer cell lines were established and further insights such as the Warburg effect \cite{Warburg1928} were found, which showed that cancer cells use glucose at a higher rate than healthy cells. This effect ultimately led to the development of the positron emission tomography (PET) scan, which allowed a significantly more granular diagnosis and localisation of cancerous lesions than before. With the success of growing human cell lines in vitro, the USA embarked on a massive experiment to test any potential source of chemical carcinogenesis. At the same time, multiple viruses were identified as causes of cancer in the 1950s, after electron microscopy was invented \cite{Claude1947}. Only a few years later, the biggest advance in the understanding of biology was made when the structure of DNA was discovered \cite{Watson1953} (\autoref{intro-sec:DNA}), which subsequently led to numerous new experiments and breakthroughs. When studying how viruses are able to reverse transcribe their RNA and insert a new gene into a healthy cell, which then transforms into a cancer cell, the term ``oncogene'' was coined \cite{Huebner1969,Baltimore1970,Temin1970}, and the foundation for the understanding of how genes influence the emergence of cancers was laid. This also led to the understanding that heritable changes in the genome can predispose a person to cancer, which had been hypothesised previously \cite{Li1969}. And while the discovery of DNA was a substantial boost for the understanding of cancer, the diagnostic capabilities increased at a similar speed, with urine tests for biomarkers of certain cancers as well as antigen detection. This brings us to the ``current'' times, when a few years ago next generation sequencing (NGS) (\autoref{intro-sec:sequencing}) was introduced and sped up data generation for genomic and non-genomic diagnostic tests, from targeted amplicon sequencing to whole genome sequencing. These highly specific tests then allowed the application of highly specific drugs, like tyrosine kinase inhibitors (TKIs), which are tailored to target a specific alteration in the genome of a cancer cell, and genetically engineered antibodies which can be homed in on the cancer. And while the therapeutic world is quickly evolving, many of the questions from previous times are still the same: we still do not know how and when the heterogeneity in cancers arises; we only know that it is a major source of resistance to treatment.
We also still do not have an answer to the ``cell of origin'' question that has been asked for so long, but we do know that some cancers can de-differentiate and morph between cell types. So instead of trying to answer these questions directly, there has been an effort to define the fundamental features a malignancy must display to be considered a cancer, very similar to the early pathology descriptions. The original characteristics comprise \begin{enumerate*} \item Sustaining proliferative signalling \item Evading growth suppressors \item Activating invasion and metastasis \item Enabling replicative immortality \item Inducing angiogenesis \item Resisting cell death \end{enumerate*} (\autoref{fig:oldhallmarks}). \begin{figure}[!ht] \centering \includegraphics[width=.95\linewidth]{Figures/oldHallmarksCancer.jpg} \caption[Original hallmarks of cancer]{Acquired capabilities of cancer; Functional capabilities acquired by most cancers during their development; Figure adapted from \protect\citeauthor*{Hanahan2000}\protect\cite{Hanahan2000}}\label{fig:oldhallmarks} \end{figure} These hallmarks were for a while considered the core of tumour development, and the authors themselves hypothesised that these core mechanisms allow us to condense the complexity that cancer displays, both in the clinic and in the laboratory, into a small set of rules which all cancers have to obey \cite{Hanahan2000}. In their exact words: ``We foresee cancer research developing into a logical science, where the complexities of the disease, described in the laboratory and clinic, will become understandable in terms of a few underlying principles.'' However, with 11 years of additional research into the topic, more hallmarks were found, and the original list was revised by the authors to contain additional characteristics, namely \begin{enumerate*} \item Avoiding immune destruction \item Tumour-promoting inflammation \item Genome instability and mutation \item Deregulating cellular energetics \end{enumerate*} \cite{Hanahan2011}. And a few years later still, further hallmarks, e.g.\ metabolic rewiring, came to be considered part of the characteristics of cancer \cite{Fouad2017}. \begin{figure}[!ht] \centering \includegraphics[width=.95\linewidth]{Figures/newestHallmarksOfCancer.jpg} \caption[Newest hallmarks of cancer]{Emerging hallmarks and enabling characteristics of cancer; updated version of the hallmarks figure (\autoref{fig:oldhallmarks},\cite{Hanahan2000}); Figure adapted from \protect\citeauthor*{Hanahan2022}\protect\cite{Hanahan2022}; Left, the Hallmarks of Cancer currently embody eight hallmark capabilities and two enabling characteristics. In addition to the six acquired capabilities -- Hallmarks of Cancer -- proposed in 2000 (\autoref{fig:oldhallmarks}), the two provisional “emerging hallmarks” introduced in 2011 (\cite{Hanahan2011}) -- cellular energetics (now described more broadly as “reprogramming cellular metabolism”) and “avoiding immune destruction” -- have been sufficiently validated to be considered part of the core set. Given the growing appreciation that tumors can become sufficiently vascularized either by switching on angiogenesis or by co-opting normal tissue vessels \cite{Kuczynski2019}, this hallmark is also more broadly defined as the capability to induce or otherwise access, principally by invasion and metastasis, vasculature that supports tumor growth.
The 2011 sequel further incorporated “tumor-promoting inflammation” as a second enabling characteristic, complementing overarching “genome instability and mutation,” which together were fundamentally involved in activating the eight hallmark (functional) capabilities necessary for tumor growth and progression. Right, this review incorporates additional proposed emerging hallmarks and enabling characteristics involving “unlocking phenotypic plasticity,” “nonmutational epigenetic reprogramming,” “polymorphic microbiomes,” and “senescent cells.”}\label{fig:newesthallmarks} \end{figure} And even during the time of my PhD, further research revealed additional hallmarks, which were characterised by \textcite{Hanahan2022}. The newest version adds two further emerging hallmarks and two enabling characteristics, specifically: \begin{enumerate*} \item unlocking phenotypic plasticity \item nonmutational epigenetic reprogramming \item polymorphic microbiomes \item senescent cells \end{enumerate*} (see \autoref{fig:newesthallmarks}). This evolution of the hallmarks shows why, even though much time and effort has been invested in cancer research over multiple centuries, there is still no unifying definition of and treatment for cancer. The vast heterogeneity not only between cancer types but also between patients makes it very hard to study. But even within a patient there is a third type of heterogeneity, which is the main cause of treatment resistance and relapse \cite{DagogoJack2017}. And while we know that this diversity exists, and efforts have been made to measure and classify it \cite{Noorbakhsh2018}, there is still a lack of methods that account for this heterogeneity in their models to inform clinical approaches directly. \todo[inline, color=orange]{Write how this shows that the heterogeneity of cancer is the reason we still haven't found a unifying description and treatment}
{ "alphanum_fraction": 0.8129817019, "avg_line_length": 176.6018518519, "ext": "tex", "hexsha": "bcaf0a3b7c98c0d99f4e02f82774ed6c174fbb97", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3785d53be5d4adac85660fad607c7f2d2b9e3bd1", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "SebastianHollizeck/PhDThesis", "max_forks_repo_path": "Chapters/Introduction/lungcancer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3785d53be5d4adac85660fad607c7f2d2b9e3bd1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "SebastianHollizeck/PhDThesis", "max_issues_repo_path": "Chapters/Introduction/lungcancer.tex", "max_line_length": 1697, "max_stars_count": null, "max_stars_repo_head_hexsha": "3785d53be5d4adac85660fad607c7f2d2b9e3bd1", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "SebastianHollizeck/PhDThesis", "max_stars_repo_path": "Chapters/Introduction/lungcancer.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4118, "size": 19073 }
\chapter{Introduction} The \numbers\ program is a shell program which reads and stores data from a finite element model described in the \exo\ database format~\cite{EXODUS}. Within this shell program are several utility routines which calculate information about the finite element model. The utilities currently implemented in \numbers\ allow the analyst to determine: \begin{itemize} \item the volume and coordinate limits of each of the materials in the model; \item the mass properties of the model; \item the minimum, maximum, and average element volumes for each material; \item the volume and change in volume of a cavity; \item the nodes or elements that are within a specified distance from a user-defined point, line, or plane; \item an estimate of the explicit central-difference timestep for each material; \item the validity of contact surfaces or slidelines, that is, whether two surfaces overlap at any point; and \item the distance between two surfaces. \end{itemize} These utilities have been developed to automate and simplify some of the tasks normally performed during an analysis. The \numbers\ program reads the finite element model and results from a file written in the \exo\ binary file format which is used in the Engineering Analysis Department at \SNLA. The capabilities of \numbers\ have evolved during the past eighteen months. Originally, it was written solely to calculate the mass properties of a body. However, once the basic function of reading and storing an \exo\ database was in place, it was realized that several tasks that were usually performed manually could easily be implemented in \numbers. Tasks such as determining node and element numbers, verifying contact surfaces, and others, are now performed more efficiently and, hopefully, more accurately since the code performs the repetitive calculations automatically. Although the original reason for developing \numbers\ was to simply calculate mass properties, the code now functions as an \exo\ shell that can be easily extended by analysts who require specific calculations or need to create information not currently available. The analyst can simply write a subroutine to perform their function, and insert it into \numbers\ without worrying about the details of reading an \exo\ database and providing a user interface. For most cases, adding a function to \numbers\ requires only writing the function subroutine, adding the command name to the table of valid commands, and adding a few statements to call the routine. The remainder of this report is organized as follows. Chapter~\ref{c:numerics} describes the numerical algorithms used by the utility functions in \numbers. A list of the commands and the command syntax are presented in Chapter~\ref{c:commands}. Chapter~\ref{c:examples} gives several examples of the use of the utilities, and Chapter~\ref{c:conclude} concludes the report. Three appendixes are included. Appendix~\ref{a:cmdsum} is a summary of the command syntax for each of the commands. The descriptions in the following chapters assume that the reader is familiar with the \gen\ and \exo\ file formats and with the analysis, preprocessing, and postprocessing codes used in the Engineering Analysis Department at \SNLA. Readers not familiar with these can check the references at the end of this report for a list of the documentation for these codes and file formats.
{ "alphanum_fraction": 0.8063948372, "avg_line_length": 52.4461538462, "ext": "tex", "hexsha": "086423f4d206757d35762ddb6f08a85332d2d5b3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "54d9c3b68508ca96e3db1fd00c5d84a810fb330b", "max_forks_repo_licenses": [ "Zlib", "NetCDF", "MIT", "BSL-1.0", "X11", "BSD-3-Clause" ], "max_forks_repo_name": "tokusanya/seacas", "max_forks_repo_path": "packages/seacas/doc-source/numbers/intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "54d9c3b68508ca96e3db1fd00c5d84a810fb330b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Zlib", "NetCDF", "MIT", "BSL-1.0", "X11", "BSD-3-Clause" ], "max_issues_repo_name": "tokusanya/seacas", "max_issues_repo_path": "packages/seacas/doc-source/numbers/intro.tex", "max_line_length": 77, "max_stars_count": null, "max_stars_repo_head_hexsha": "54d9c3b68508ca96e3db1fd00c5d84a810fb330b", "max_stars_repo_licenses": [ "Zlib", "NetCDF", "MIT", "BSL-1.0", "X11", "BSD-3-Clause" ], "max_stars_repo_name": "tokusanya/seacas", "max_stars_repo_path": "packages/seacas/doc-source/numbers/intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 730, "size": 3409 }
%* glpk03.tex *% \chapter{Utility API routines} \section{Problem data reading/writing routines} \subsection{glp\_read\_mps---read problem data in MPS format} \subsubsection*{Synopsis} \begin{verbatim} int glp_read_mps(glp_prob *lp, int fmt, const void *parm, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_read_mps| reads problem data in MPS format from a text file. (The MPS format is described in Appendix \ref{champs}, page \pageref{champs}.) The parameter \verb|fmt| specifies the MPS format version as follows: \begin{tabular}{@{}ll} \verb|GLP_MPS_DECK| & fixed (ancient) MPS format; \\ \verb|GLP_MPS_FILE| & free (modern) MPS format. \\ \end{tabular} The parameter \verb|parm| is reserved for use in the future and must be specified as \verb|NULL|. The character string \verb|fname| specifies a name of the text file to be read in. (If the file name ends with suffix `\verb|.gz|', the file is assumed to be compressed, in which case the routine \verb|glp_read_mps| decompresses it ``on the fly''.) Note that before reading data the current content of the problem object is completely erased with the routine \verb|glp_erase_prob|. \subsubsection*{Returns} If the operation was successful, the routine \verb|glp_read_mps| returns zero. Otherwise, it prints an error message and returns non-zero. \subsection{glp\_write\_mps---write problem data in MPS format} \subsubsection*{Synopsis} \begin{verbatim} int glp_write_mps(glp_prob *lp, int fmt, const void *parm, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_write_mps| writes problem data in MPS format to a text file. (The MPS format is described in Appendix \ref{champs}, page \pageref{champs}.) The parameter \verb|fmt| specifies the MPS format version as follows: \begin{tabular}{@{}ll} \verb|GLP_MPS_DECK| & fixed (ancient) MPS format; \\ \verb|GLP_MPS_FILE| & free (modern) MPS format. \\ \end{tabular} The parameter \verb|parm| is reserved for use in the future and must be specified as \verb|NULL|. The character string \verb|fname| specifies a name of the text file to be written out. (If the file name ends with suffix `\verb|.gz|', the file is assumed to be compressed, in which case the routine \verb|glp_write_mps| performs automatic compression on writing it.) \subsubsection*{Returns} If the operation was successful, the routine \verb|glp_write_mps| returns zero. Otherwise, it prints an error message and returns non-zero. \subsection{glp\_read\_lp---read problem data in CPLEX LP format} \subsubsection*{Synopsis} \begin{verbatim} int glp_read_lp(glp_prob *lp, const void *parm, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_read_lp| reads problem data in CPLEX LP format from a text file. (The CPLEX LP format is described in Appendix \ref{chacplex}, page \pageref{chacplex}.) The parameter \verb|parm| is reserved for use in the future and must be specified as \verb|NULL|. The character string \verb|fname| specifies a name of the text file to be read in. (If the file name ends with suffix `\verb|.gz|', the file is assumed to be compressed, in which case the routine \verb|glp_read_lp| decompresses it ``on the fly''.) Note that before reading data the current content of the problem object is completely erased with the routine \verb|glp_erase_prob|. \subsubsection*{Returns} If the operation was successful, the routine \verb|glp_read_lp| returns zero. Otherwise, it prints an error message and returns non-zero. 
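To illustrate how the reading and writing routines described above are typically combined, the following program is a minimal sketch that converts a problem from CPLEX LP format to free MPS format. (The file names \verb|example.lp| and \verb|example.mps| are placeholders; this program is not part of the GLPK distribution.)

\begin{small}
\begin{verbatim}
/* lp2mps.c -- minimal sketch: convert CPLEX LP to free MPS format */
#include <stdio.h>
#include <stdlib.h>
#include <glpk.h>
int main(void)
{     glp_prob *lp;
      int ret;
      lp = glp_create_prob();
      /* read problem data in CPLEX LP format */
      ret = glp_read_lp(lp, NULL, "example.lp");
      if (ret != 0)
      {  fprintf(stderr, "Error on reading LP file\n");
         goto skip;
      }
      /* write the same problem in free (modern) MPS format */
      ret = glp_write_mps(lp, GLP_MPS_FILE, NULL, "example.mps");
      if (ret != 0)
         fprintf(stderr, "Error on writing MPS file\n");
skip: glp_delete_prob(lp);
      return 0;
}
/* eof */
\end{verbatim}
\end{small}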
\subsection{glp\_write\_lp---write problem data in CPLEX LP format} \subsubsection*{Synopsis} \begin{verbatim} int glp_write_lp(glp_prob *lp, const void *parm, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_write_lp| writes problem data in CPLEX LP format to a text file. (The CPLEX LP format is described in Appendix \ref{chacplex}, page \pageref{chacplex}.) The parameter \verb|parm| is reserved for use in the future and must be specified as \verb|NULL|. The character string \verb|fname| specifies a name of the text file to be written out. (If the file name ends with suffix `\verb|.gz|', the file is assumed to be compressed, in which case the routine \verb|glp_write_lp| performs automatic compression on writing it.) \subsubsection*{Returns} If the operation was successful, the routine \verb|glp_write_lp| returns zero. Otherwise, it prints an error message and returns non-zero. \subsection{glp\_read\_prob---read problem data in GLPK format} \subsubsection*{Synopsis} \begin{verbatim} int glp_read_prob(glp_prob *P, int flags, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_read_prob| reads problem data in the GLPK LP/MIP format from a text file. (For description of the GLPK LP/MIP format see below.) The parameter \verb|flags| is reserved for use in the future and should be specified as zero. The character string \verb|fname| specifies a name of the text file to be read in. (If the file name ends with suffix `\verb|.gz|', the file is assumed to be compressed, in which case the routine \verb|glp_read_prob| decompresses it ``on the fly''.) Note that before reading data the current content of the problem object is completely erased with the routine \verb|glp_erase_prob|. \subsubsection*{Returns} If the operation was successful, the routine \verb|glp_read_prob| returns zero. Otherwise, it prints an error message and returns non-zero. \subsubsection*{GLPK LP/MIP format} The GLPK LP/MIP format is a DIMACS-like format.\footnote{The DIMACS formats were developed by the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) to facilitate exchange of problem data. For details see: {\tt <http://dimacs.rutgers.edu/Challenges/>}. } The file in this format is a plain ASCII text file containing lines of several types described below. A line is terminated with the end-of-line character. Fields in each line are separated by at least one blank space. Each line begins with a one-character designator to identify the line type. The first line of the data file must be the problem line (except optional comment lines, which may precede the problem line). The last line of the data file must be the end line. Other lines may follow in arbitrary order, however, duplicate lines are not allowed. \paragraph{Comment lines.} Comment lines give human-readable information about the data file and are ignored by GLPK routines. Comment lines can appear anywhere in the data file. Each comment line begins with the lower-case character \verb|c|. \begin{verbatim} c This is an example of comment line \end{verbatim} \paragraph{Problem line.} There must be exactly one problem line in the data file. This line must appear before any other lines except comment lines and has the following format: \begin{verbatim} p CLASS DIR ROWS COLS NONZ \end{verbatim} The lower-case letter \verb|p| specifies that this is the problem line. 
The \verb|CLASS| field defines the problem class and can contain either the keyword \verb|lp| (that means linear programming problem) or \verb|mip| (that means mixed integer programming problem). The \verb|DIR| field defines the optimization direction (that is, the objective function sense) and can contain either the keyword \verb|min| (that means minimization) or \verb|max| (that means maximization). The \verb|ROWS|, \verb|COLS|, and \verb|NONZ| fields contain non-negative integer values specifying, respectively, the number of rows (constraints), columns (variables), and non-zero constraint coefficients in the problem instance. Note that the \verb|NONZ| value does not account for objective coefficients. \paragraph{Row descriptors.} There must be at most one row descriptor line in the data file for each row (constraint). This line has one of the following formats: \begin{verbatim} i ROW f i ROW l RHS i ROW u RHS i ROW d RHS1 RHS2 i ROW s RHS \end{verbatim} The lower-case letter \verb|i| specifies that this is the row descriptor line. The \verb|ROW| field specifies the row ordinal number, an integer between 1 and $m$, where $m$ is the number of rows in the problem instance. The next lower-case letter specifies the row type as follows: \verb|f| --- free (unbounded) row: $-\infty<\sum a_jx_j<+\infty$; \verb|l| --- inequality constraint of `$\geq$' type: $\sum a_jx_j\geq b$; \verb|u| --- inequality constraint of `$\leq$' type: $\sum a_jx_j\leq b$; \verb|d| --- double-sided inequality constraint: $b_1\leq\sum a_jx_j\leq b_2$; \verb|s| --- equality constraint: $\sum a_jx_j=b$. The \verb|RHS| field contains a floating-point value specifying the row right-hand side. The \verb|RHS1| and \verb|RHS2| fields contain floating-point values specifying, respectively, the lower and upper right-hand sides for the double-sided row. If for some row its descriptor line does not appear in the data file, by default that row is assumed to be an equality constraint with zero right-hand side. \paragraph{Column descriptors.} There must be at most one column descriptor line in the data file for each column (variable). This line has one of the following formats depending on the problem class specified in the problem line: \bigskip \begin{tabular}{@{}l@{\hspace*{40pt}}l} LP class & MIP class \\ \hline \verb|j COL f| & \verb|j COL KIND f| \\ \verb|j COL l BND| & \verb|j COL KIND l BND| \\ \verb|j COL u BND| & \verb|j COL KIND u BND| \\ \verb|j COL d BND1 BND2| & \verb|j COL KIND d BND1 BND2| \\ \verb|j COL s BND| & \verb|j COL KIND s BND| \\ \end{tabular} \bigskip The lower-case letter \verb|j| specifies that this is the column descriptor line. The \verb|COL| field specifies the column ordinal number, an integer between 1 and $n$, where $n$ is the number of columns in the problem instance. The \verb|KIND| field is used only for MIP problems and specifies the column kind as follows: \verb|c| --- continuous column; \verb|i| --- integer column; \verb|b| --- binary column (in this case all remaining fields must be omitted). The next lower-case letter specifies the column type as follows: \verb|f| --- free (unbounded) column: $-\infty<x<+\infty$; \verb|l| --- column with lower bound: $x\geq l$; \verb|u| --- column with upper bound: $x\leq u$; \verb|d| --- double-bounded column: $l\leq x\leq u$; \verb|s| --- fixed column: $x=s$. The \verb|BND| field contains a floating-point value that specifies the column bound.
The \verb|BND1| and \verb|BND2| fields contain floating-point values specifying, respectively, the lower and upper bounds for the double-bounded column. If for some column its descriptor line does not appear in the file, by default that column is assumed to be non-negative (in case of LP class) or binary (in case of MIP class). \paragraph{Coefficient descriptors.} There must be exactly one coefficient descriptor line in the data file for each non-zero objective or constraint coefficient. This line has the following format: \begin{verbatim} a ROW COL VAL \end{verbatim} The lower-case letter \verb|a| specifies that this is the coefficient descriptor line. For objective coefficients the \verb|ROW| field must contain 0. For constraint coefficients the \verb|ROW| field specifies the row ordinal number, an integer between 1 and $m$, where $m$ is the number of rows in the problem instance. The \verb|COL| field specifies the column ordinal number, an integer between 1 and $n$, where $n$ is the number of columns in the problem instance. If both the \verb|ROW| and \verb|COL| fields contain 0, the line specifies the constant term (``shift'') of the objective function rather than objective coefficient. The \verb|VAL| field contains a floating-point coefficient value (it is allowed to specify zero value in this field). The number of constraint coefficient descriptor lines must be exactly the same as specified in the field \verb|NONZ| of the problem line. \paragraph{Symbolic name descriptors.} There must be at most one symbolic name descriptor line for the problem instance, objective function, each row (constraint), and each column (variable). This line has one of the following formats: \begin{verbatim} n p NAME n z NAME n i ROW NAME n j COL NAME \end{verbatim} The lower-case letter \verb|n| specifies that this is the symbolic name descriptor line. The next lower-case letter specifies which object should be assigned a symbolic name: \verb|p| --- problem instance; \verb|z| --- objective function; \verb|i| --- row (constraint); \verb|j| --- column (variable). The \verb|ROW| field specifies the row ordinal number, an integer between 1 and $m$, where $m$ is the number of rows in the problem instance. The \verb|COL| field specifies the column ordinal number, an integer between 1 and $n$, where $n$ is the number of columns in the problem instance. The \verb|NAME| field contains the symbolic name, a sequence from 1 to 255 arbitrary graphic ASCII characters, assigned to corresponding object. \paragraph{End line.} There must be exactly one end line in the data file. This line must appear last in the file and has the following format: \begin{verbatim} e \end{verbatim} The lower-case letter \verb|e| specifies that this is the end line. Anything that follows the end line is ignored by GLPK routines. \subsubsection*{Example of data file in GLPK LP/MIP format} The following example of a data file in GLPK LP/MIP format specifies the same LP problem as in Subsection ``Example of MPS file''. 
\begin{center} \footnotesize\tt \begin{tabular}{l@{\hspace*{50pt}}} p lp min 8 7 48 \\ n p PLAN \\ n z VALUE \\ i 1 f \\ n i 1 VALUE \\ i 2 s 2000 \\ n i 2 YIELD \\ i 3 u 60 \\ n i 3 FE \\ i 4 u 100 \\ n i 4 CU \\ i 5 u 40 \\ n i 5 MN \\ i 6 u 30 \\ n i 6 MG \\ i 7 l 1500 \\ n i 7 AL \\ i 8 d 250 300 \\ n i 8 SI \\ j 1 d 0 200 \\ n j 1 BIN1 \\ j 2 d 0 2500 \\ n j 2 BIN2 \\ j 3 d 400 800 \\ n j 3 BIN3 \\ j 4 d 100 700 \\ n j 4 BIN4 \\ j 5 d 0 1500 \\ n j 5 BIN5 \\ n j 6 ALUM \\ n j 7 SILICON \\ a 0 1 0.03 \\ a 0 2 0.08 \\ a 0 3 0.17 \\ a 0 4 0.12 \\ a 0 5 0.15 \\ a 0 6 0.21 \\ a 0 7 0.38 \\ a 1 1 0.03 \\ a 1 2 0.08 \\ a 1 3 0.17 \\ a 1 4 0.12 \\ a 1 5 0.15 \\ a 1 6 0.21 \\ \end{tabular} \begin{tabular}{|@{\hspace*{80pt}}l} a 1 7 0.38 \\ a 2 1 1 \\ a 2 2 1 \\ a 2 3 1 \\ a 2 4 1 \\ a 2 5 1 \\ a 2 6 1 \\ a 2 7 1 \\ a 3 1 0.15 \\ a 3 2 0.04 \\ a 3 3 0.02 \\ a 3 4 0.04 \\ a 3 5 0.02 \\ a 3 6 0.01 \\ a 3 7 0.03 \\ a 4 1 0.03 \\ a 4 2 0.05 \\ a 4 3 0.08 \\ a 4 4 0.02 \\ a 4 5 0.06 \\ a 4 6 0.01 \\ a 5 1 0.02 \\ a 5 2 0.04 \\ a 5 3 0.01 \\ a 5 4 0.02 \\ a 5 5 0.02 \\ a 6 1 0.02 \\ a 6 2 0.03 \\ a 6 5 0.01 \\ a 7 1 0.7 \\ a 7 2 0.75 \\ a 7 3 0.8 \\ a 7 4 0.75 \\ a 7 5 0.8 \\ a 7 6 0.97 \\ a 8 1 0.02 \\ a 8 2 0.06 \\ a 8 3 0.08 \\ a 8 4 0.12 \\ a 8 5 0.02 \\ a 8 6 0.01 \\ a 8 7 0.97 \\ e o f \\ \\ \end{tabular} \end{center} \newpage \subsection{glp\_write\_prob---write problem data in GLPK format} \subsubsection*{Synopsis} \begin{verbatim} int glp_write_prob(glp_prob *P, int flags, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_write_prob| writes problem data in the GLPK LP/MIP format to a text file. (For description of the GLPK LP/MIP format see Subsection ``Read problem data in GLPK format''.) The parameter \verb|flags| is reserved for use in the future and should be specified as zero. The character string \verb|fname| specifies a name of the text file to be written out. (If the file name ends with suffix `\verb|.gz|', the file is assumed to be compressed, in which case the routine \verb|glp_write_prob| performs automatic compression on writing it.) \subsubsection*{Returns} If the operation was successful, the routine \verb|glp_write_prob| returns zero. Otherwise, it prints an error message and returns non-zero. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newpage \section{Routines for processing MathProg models} \subsection{Introduction} GLPK supports the {\it GNU MathProg modeling language}.\footnote{The GNU MathProg modeling language is a subset of the AMPL language. For its detailed description see the document ``Modeling Language GNU MathProg: Language Reference'' included in the GLPK distribution.} As a rule, models written in MathProg are solved with the GLPK LP/MIP stand-alone solver \verb|glpsol| (see Appendix D) and do not need any programming with API routines. However, for various reasons the user may need to process MathProg models directly in his/her application program, in which case he/she may use API routines described in this section. These routines provide an interface to the {\it MathProg translator}, a component of GLPK, which translates MathProg models into an internal code and then interprets (executes) this code. The processing of a model written in GNU MathProg includes several steps, which should be performed in the following order: \begin{enumerate} \item{\it Allocating the workspace.} The translator allocates the workspace, an internal data structure used on all subsequent steps.
\item{\it Reading model section.} The translator reads model section and, optionally, data section from a specified text file and translates them into the internal code. If necessary, on this step data section may be ignored. \item{\it Reading data section(s).} The translator reads one or more data sections from specified text file(s) and translates them into the internal code. \item{\it Generating the model.} The translator executes the internal code to evaluate the content of the model objects such as sets, parameters, variables, constraints, and objectives. On this step the execution is suspended at the solve statement. \item {\it Building the problem object.} The translator obtains all necessary information from the workspace and builds the standard problem object (that is, the program object of type \verb|glp_prob|). \item{\it Solving the problem.} On this step the problem object built on the previous step is passed to a solver, which solves the problem instance and stores its solution back to the problem object. \item{\it Postsolving the model.} The translator copies the solution from the problem object to the workspace and then executes the internal code from the solve statement to the end of the model. (If model has no solve statement, the translator does nothing on this step.) \item{\it Freeing the workspace.} The translator frees all the memory allocated to the workspace. \end{enumerate} Note that the MathProg translator performs no error correction, so if any of steps 2 to 7 fails (due to errors in the model), the application program should terminate processing and go to step 8. \subsubsection*{Example 1} In this example the program reads model and data sections from input file \verb|egypt.mod|\footnote{This is an example model included in the GLPK distribution.} and writes the model to output file \verb|egypt.mps| in free MPS format (see Appendix B). No solution is performed. \begin{small} \begin{verbatim} /* mplsamp1.c */ #include <stdio.h> #include <stdlib.h> #include <glpk.h> int main(void) { glp_prob *lp; glp_tran *tran; int ret; lp = glp_create_prob(); tran = glp_mpl_alloc_wksp(); ret = glp_mpl_read_model(tran, "egypt.mod", 0); if (ret != 0) { fprintf(stderr, "Error on translating model\n"); goto skip; } ret = glp_mpl_generate(tran, NULL); if (ret != 0) { fprintf(stderr, "Error on generating model\n"); goto skip; } glp_mpl_build_prob(tran, lp); ret = glp_write_mps(lp, GLP_MPS_FILE, NULL, "egypt.mps"); if (ret != 0) fprintf(stderr, "Error on writing MPS file\n"); skip: glp_mpl_free_wksp(tran); glp_delete_prob(lp); return 0; } /* eof */ \end{verbatim} \end{small} \subsubsection*{Example 2} In this example the program reads model section from file \verb|sudoku.mod|\footnote{This is an example model which is included in the GLPK distribution along with alternative data file {\tt sudoku.dat}.} ignoring data section in this file, reads alternative data section from file \verb|sudoku.dat|, solves the problem instance and passes the solution found back to the model. 
\begin{small} \begin{verbatim} /* mplsamp2.c */ #include <stdio.h> #include <stdlib.h> #include <glpk.h> int main(void) { glp_prob *mip; glp_tran *tran; int ret; mip = glp_create_prob(); tran = glp_mpl_alloc_wksp(); ret = glp_mpl_read_model(tran, "sudoku.mod", 1); if (ret != 0) { fprintf(stderr, "Error on translating model\n"); goto skip; } ret = glp_mpl_read_data(tran, "sudoku.dat"); if (ret != 0) { fprintf(stderr, "Error on translating data\n"); goto skip; } ret = glp_mpl_generate(tran, NULL); if (ret != 0) { fprintf(stderr, "Error on generating model\n"); goto skip; } glp_mpl_build_prob(tran, mip); glp_simplex(mip, NULL); glp_intopt(mip, NULL); ret = glp_mpl_postsolve(tran, mip, GLP_MIP); if (ret != 0) fprintf(stderr, "Error on postsolving model\n"); skip: glp_mpl_free_wksp(tran); glp_delete_prob(mip); return 0; } /* eof */ \end{verbatim} \end{small} \subsection{glp\_mpl\_alloc\_wksp---allocate the translator workspace} \subsubsection*{Synopsis} \begin{verbatim} glp_tran *glp_mpl_alloc_wksp(void); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_mpl_alloc_wksp| allocates the MathProg translator work\-space. (Note that multiple instances of the workspace may be allocated, if necessary.) \subsubsection*{Returns} The routine returns a pointer to the workspace, which should be used in all subsequent operations. \subsection{glp\_mpl\_read\_model---read and translate model section} \subsubsection*{Synopsis} \begin{verbatim} int glp_mpl_read_model(glp_tran *tran, const char *fname, int skip); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_mpl_read_model| reads model section and, optionally, data section, which may follow the model section, from a text file, whose name is the character string \verb|fname|, performs translation of model statements and data blocks, and stores all the information in the workspace. The parameter \verb|skip| is a flag. If the input file contains the data section and this flag is non-zero, the data section is not read as if there were no data section and a warning message is printed. This allows reading data section(s) from other file(s). \subsubsection*{Returns} If the operation is successful, the routine returns zero. Otherwise the routine prints an error message and returns non-zero. \subsection{glp\_mpl\_read\_data---read and translate data section} \subsubsection*{Synopsis} \begin{verbatim} int glp_mpl_read_data(glp_tran *tran, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_mpl_read_data| reads data section from a text file, whose name is the character string \verb|fname|, performs translation of data blocks, and stores the data read in the translator workspace. If necessary, this routine may be called more than once. \subsubsection*{Returns} If the operation is successful, the routine returns zero. Otherwise the routine prints an error message and returns non-zero. \subsection{glp\_mpl\_generate---generate the model} \subsubsection*{Synopsis} \begin{verbatim} int glp_mpl_generate(glp_tran *tran, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_mpl_generate| generates the model using its description stored in the translator workspace. This operation means generating all variables, constraints, and objectives, executing check and display statements, which precede the solve statement (if it is presented). The character string \verb|fname| specifies the name of an output text file, to which output produced by display statements should be written. 
If \verb|fname| is \verb|NULL|, the output is sent to the terminal. \subsubsection*{Returns} If the operation is successful, the routine returns zero. Otherwise the routine prints an error message and returns non-zero. \subsection{glp\_mpl\_build\_prob---build problem instance from the model} \subsubsection*{Synopsis} \begin{verbatim} void glp_mpl_build_prob(glp_tran *tran, glp_prob *prob); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_mpl_build_prob| obtains all necessary information from the translator workspace and stores it in the specified problem object \verb|prob|. Note that before building the current content of the problem object is erased with the routine \verb|glp_erase_prob|. \subsection{glp\_mpl\_postsolve---postsolve the model} \subsubsection*{Synopsis} \begin{verbatim} int glp_mpl_postsolve(glp_tran *tran, glp_prob *prob, int sol); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_mpl_postsolve| copies the solution from the specified problem object \verb|prob| to the translator workspace and then executes all the remaining model statements, which follow the solve statement. The parameter \verb|sol| specifies which solution should be copied from the problem object to the workspace as follows: \begin{tabular}{@{}ll} \verb|GLP_SOL| & basic solution; \\ \verb|GLP_IPT| & interior-point solution; \\ \verb|GLP_MIP| & mixed integer solution. \\ \end{tabular} \subsubsection*{Returns} If the operation is successful, the routine returns zero. Otherwise the routine prints an error message and returns non-zero. \subsection{glp\_mpl\_free\_wksp---free the translator workspace} \subsubsection*{Synopsis} \begin{verbatim} void glp_mpl_free_wksp(glp_tran *tran); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_mpl_free_wksp| frees all the memory allocated to the translator workspace. It also frees all other resources, which are still used by the translator. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newpage \section{Problem solution reading/writing routines} \subsection{glp\_print\_sol---write basic solution in printable format} \subsubsection*{Synopsis} \begin{verbatim} int glp_print_sol(glp_prob *lp, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_print_sol| writes the current basic solution of an LP problem, which is specified by the pointer \verb|lp|, to a text file, whose name is the character string \verb|fname|, in printable format. Information reported by the routine \verb|glp_print_sol| is intended mainly for visual analysis. \subsubsection*{Returns} If no errors occurred, the routine returns zero. Otherwise the routine prints an error message and returns non-zero. \subsection{glp\_read\_sol---read basic solution from text file} \subsubsection*{Synopsis} \begin{verbatim} int glp_read_sol(glp_prob *lp, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_read_sol| reads basic solution from a text file whose name is specified by the parameter \verb|fname| into the problem object. For the file format see description of the routine \verb|glp_write_sol|. \subsubsection*{Returns} On success the routine returns zero, otherwise non-zero.
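As a brief illustration, the following minimal sketch reads a problem, solves it with the simplex method, writes a human-readable report with \verb|glp_print_sol|, and stores the basic solution with the routine \verb|glp_write_sol| described in the next subsection, so that it can later be read back with \verb|glp_read_sol|. (The file names are placeholders; this program is not part of the GLPK distribution.)

\begin{small}
\begin{verbatim}
/* solsamp.c -- minimal sketch: solve an LP and save its basic solution */
#include <stdio.h>
#include <stdlib.h>
#include <glpk.h>
int main(void)
{     glp_prob *lp;
      lp = glp_create_prob();
      if (glp_read_mps(lp, GLP_MPS_FILE, NULL, "plan.mps") != 0)
      {  fprintf(stderr, "Error on reading MPS file\n");
         goto skip;
      }
      /* compute the basic solution with the simplex method */
      glp_simplex(lp, NULL);
      /* human-readable report for visual analysis */
      glp_print_sol(lp, "plan.txt");
      /* machine-readable file; can be read back with glp_read_sol */
      if (glp_write_sol(lp, "plan.sol") != 0)
         fprintf(stderr, "Error on writing solution file\n");
skip: glp_delete_prob(lp);
      return 0;
}
/* eof */
\end{verbatim}
\end{small}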
\newpage \subsection{glp\_write\_sol---write basic solution to text file} \subsubsection*{Synopsis} \begin{verbatim} int glp_write_sol(glp_prob *lp, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_write_sol| writes the current basic solution to a text file whose name is specified by the parameter \verb|fname|. This file can be read back with the routine \verb|glp_read_sol|. \subsubsection*{Returns} On success the routine returns zero, otherwise non-zero. \subsubsection*{File format} The file created by the routine \verb|glp_write_sol| is a plain text file, which contains the following information: \begin{verbatim} m n p_stat d_stat obj_val r_stat[1] r_prim[1] r_dual[1] . . . r_stat[m] r_prim[m] r_dual[m] c_stat[1] c_prim[1] c_dual[1] . . . c_stat[n] c_prim[n] c_dual[n] \end{verbatim} \noindent where: \noindent $m$ is the number of rows (auxiliary variables); \noindent $n$ is the number of columns (structural variables); \noindent \verb|p_stat| is the primal status of the basic solution (\verb|GLP_UNDEF| = 1, \verb|GLP_FEAS| = 2, \verb|GLP_INFEAS| = 3, or \verb|GLP_NOFEAS| = 4); \noindent \verb|d_stat| is the dual status of the basic solution (\verb|GLP_UNDEF| = 1, \verb|GLP_FEAS| = 2, \verb|GLP_INFEAS| = 3, or \verb|GLP_NOFEAS| = 4); \noindent \verb|obj_val| is the objective value; \noindent \verb|r_stat[i]|, $i=1,\dots,m$, is the status of $i$-th row (\verb|GLP_BS| = 1, \verb|GLP_NL| = 2, \verb|GLP_NU| = 3, \verb|GLP_NF| = 4, or \verb|GLP_NS| = 5); \noindent \verb|r_prim[i]|, $i=1,\dots,m$, is the primal value of $i$-th row; \noindent \verb|r_dual[i]|, $i=1,\dots,m$, is the dual value of $i$-th row; \noindent \verb|c_stat[j]|, $j=1,\dots,n$, is the status of $j$-th column (\verb|GLP_BS| = 1, \verb|GLP_NL| = 2, \verb|GLP_NU| = 3, \verb|GLP_NF| = 4, or \verb|GLP_NS| = 5); \noindent \verb|c_prim[j]|, $j=1,\dots,n$, is the primal value of $j$-th column; \noindent \verb|c_dual[j]|, $j=1,\dots,n$, is the dual value of $j$-th column. \subsection{glp\_print\_ipt---write interior-point solution in printable format} \subsubsection*{Synopsis} \begin{verbatim} int glp_print_ipt(glp_prob *lp, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_print_ipt| writes the current interior point solution of an LP problem, which the parameter \verb|lp| points to, to a text file, whose name is the character string \verb|fname|, in printable format. Information reported by the routine \verb|glp_print_ipt| is intended mainly for visual analysis. \subsubsection*{Returns} If no errors occurred, the routine returns zero. Otherwise the routine prints an error message and returns non-zero. \subsection{glp\_read\_ipt---read interior-point solution from text file} \subsubsection*{Synopsis} \begin{verbatim} int glp_read_ipt(glp_prob *lp, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_read_ipt| reads interior-point solution from a text file whose name is specified by the parameter \verb|fname| into the problem object. For the file format see description of the routine \verb|glp_write_ipt|. \subsubsection*{Returns} On success the routine returns zero, otherwise non-zero. 
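The interior-point counterparts are used in the same way. The following minimal sketch assumes the interior-point solver routine \verb|glp_interior|, which is described elsewhere in this manual, and uses placeholder file names; it is not part of the GLPK distribution.

\begin{small}
\begin{verbatim}
/* iptsamp.c -- minimal sketch: compute and report an interior-point
   solution */
#include <stdio.h>
#include <stdlib.h>
#include <glpk.h>
int main(void)
{     glp_prob *lp;
      lp = glp_create_prob();
      if (glp_read_mps(lp, GLP_MPS_FILE, NULL, "plan.mps") != 0)
      {  fprintf(stderr, "Error on reading MPS file\n");
         goto skip;
      }
      /* compute an interior-point solution with default parameters */
      glp_interior(lp, NULL);
      /* human-readable report for visual analysis */
      glp_print_ipt(lp, "plan_ipt.txt");
skip: glp_delete_prob(lp);
      return 0;
}
/* eof */
\end{verbatim}
\end{small}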
\subsection{glp\_write\_ipt---write interior-point solution to text file} \subsubsection*{Synopsis} \begin{verbatim} int glp_write_ipt(glp_prob *lp, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_write_ipt| writes the current interior-point solution to a text file whose name is specified by the parameter \verb|fname|. This file can be read back with the routine \verb|glp_read_ipt|. \subsubsection*{Returns} On success the routine returns zero, otherwise non-zero. \subsubsection*{File format} The file created by the routine \verb|glp_write_ipt| is a plain text file, which contains the following information: \begin{verbatim} m n stat obj_val r_prim[1] r_dual[1] . . . r_prim[m] r_dual[m] c_prim[1] c_dual[1] . . . c_prim[n] c_dual[n] \end{verbatim} \noindent where: \noindent $m$ is the number of rows (auxiliary variables); \noindent $n$ is the number of columns (structural variables); \noindent \verb|stat| is the solution status (\verb|GLP_UNDEF| = 1 or \verb|GLP_OPT| = 5); \noindent \verb|obj_val| is the objective value; \noindent \verb|r_prim[i]|, $i=1,\dots,m$, is the primal value of $i$-th row; \noindent \verb|r_dual[i]|, $i=1,\dots,m$, is the dual value of $i$-th row; \noindent \verb|c_prim[j]|, $j=1,\dots,n$, is the primal value of $j$-th column; \noindent \verb|c_dual[j]|, $j=1,\dots,n$, is the dual value of $j$-th column. \subsection{glp\_print\_mip---write MIP solution in printable format} \subsubsection*{Synopsis} \begin{verbatim} int glp_print_mip(glp_prob *lp, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_print_mip| writes a best known integer solution of a MIP problem, which is specified by the pointer \verb|lp|, to a text file, whose name is the character string \verb|fname|, in printable format. Information reported by the routine \verb|glp_print_mip| is intended mainly for visual analysis. \subsubsection*{Returns} If no errors occurred, the routine returns zero. Otherwise the routine prints an error message and returns non-zero. \newpage \subsection{glp\_read\_mip---read MIP solution from text file} \subsubsection*{Synopsis} \begin{verbatim} int glp_read_mip(glp_prob *mip, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_read_mip| reads MIP solution from a text file whose name is specified by the parameter \verb|fname| into the problem object. For the file format see description of the routine \verb|glp_write_mip|. \subsubsection*{Returns} On success the routine returns zero, otherwise non-zero. \subsection{glp\_write\_mip---write MIP solution to text file} \subsubsection*{Synopsis} \begin{verbatim} int glp_write_mip(glp_prob *mip, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_write_mip| writes the current MIP solution to a text file whose name is specified by the parameter \verb|fname|. This file can be read back with the routine \verb|glp_read_mip|. \subsubsection*{Returns} On success the routine returns zero, otherwise non-zero. \subsubsection*{File format} The file created by the routine \verb|glp_write_mip| is a plain text file, which contains the following information: \begin{verbatim} m n stat obj_val r_val[1] . . . r_val[m] c_val[1] . . .
c_val[n] \end{verbatim} \noindent where: \noindent $m$ is the number of rows (auxiliary variables); \noindent $n$ is the number of columns (structural variables); \noindent \verb|stat| is the solution status (\verb|GLP_UNDEF| = 1, \verb|GLP_FEAS| = 2, \verb|GLP_NOFEAS| = 4, or \verb|GLP_OPT| = 5); \noindent \verb|obj_val| is the objective value; \noindent \verb|r_val[i]|, $i=1,\dots,m$, is the value of $i$-th row; \noindent \verb|c_val[j]|, $j=1,\dots,n$, is the value of $j$-th column. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newpage \section{Post-optimal analysis routines} \subsection{glp\_print\_ranges---print sensitivity analysis report} \subsubsection*{Synopsis} \begin{verbatim} int glp_print_ranges(glp_prob *P, int len, const int list[], int flags, const char *fname); \end{verbatim} \subsubsection*{Description} The routine \verb|glp_print_ranges| performs sensitivity analysis of current optimal basic solution and writes the analysis report in human-readable format to a text file, whose name is the character string {\it fname}. (Detailed description of the report structure is given below.) The parameter {\it len} specifies the length of the row/column list. The array {\it list} specifies ordinal number of rows and columns to be analyzed. The ordinal numbers should be passed in locations {\it list}[1], {\it list}[2], \dots, {\it list}[{\it len}]. Ordinal numbers from 1 to $m$ refer to rows, and ordinal numbers from $m+1$ to $m+n$ refer to columns, where $m$ and $n$ are, resp., the total number of rows and columns in the problem object. Rows and columns appear in the analysis report in the same order as they follow in the array list. It is allowed to specify $len=0$, in which case the array {\it list} is not used (so it can be specified as \verb|NULL|), and the routine performs analysis for all rows and columns of the problem object. The parameter {\it flags} is reserved for use in the future and must be specified as zero. On entry to the routine \verb|glp_print_ranges| the current basic solution must be optimal and the basis factorization must exist. The application program can check that with the routine \verb|glp_bf_exists|, and if the factorization does not exist, compute it with the routine \verb|glp_factorize|. Note that if the LP preprocessor is not used, on normal exit from the simplex solver routine \verb|glp_simplex| the basis factorization always exists. \subsubsection*{Returns} If the operation was successful, the routine \verb|glp_print_ranges| returns zero. Otherwise, it prints an error message and returns non-zero. \subsubsection*{Analysis report example} An example of the sensitivity analysis report is shown on the next two pages. This example corresponds to the example of LP problem described in Subsection ``Example of MPS file''. \subsubsection*{Structure of the analysis report} For each row and column specified in the array {\it list} the routine prints two lines containing generic information and analysis information, which depends on the status of corresponding row or column. Note that analysis of a row is analysis of its auxiliary variable, which is equal to the row linear form $\sum a_jx_j$, and analysis of a column is analysis of corresponding structural variable. Therefore, formally, on performing the sensitivity analysis there is no difference between rows and columns. \bigskip \noindent {\it Generic information} \medskip \noindent {\tt No.} is the row or column ordinal number in the problem object. 
Rows are numbered from 1 to $m$, and columns are numbered from 1 to $n$, where $m$ and $n$ are, resp., the total number of rows and columns in the problem object. \medskip \noindent {\tt Row name} is the symbolic name assigned to the row. If the row has no name assigned, this field contains blanks. \medskip \noindent {\tt Column name} is the symbolic name assigned to the column. If the column has no name assigned, this field contains blanks. \medskip \noindent {\tt St} is the status of the row or column in the optimal solution: {\tt BS} --- non-active constraint (row), basic column; {\tt NL} --- inequality constraint having its lower right-hand side active (row), non-basic column having its lower bound active; {\tt NU} --- inequality constraint having its upper right-hand side active (row), non-basic column having its upper bound active; {\tt NS} --- active equality constraint (row), non-basic fixed column. {\tt NF} --- active free row, non-basic free (unbounded) column. (This case means that the optimal solution is dual degenerate.) \medskip \noindent {\tt Activity} is the (primal) value of the auxiliary variable (row) or structural variable (column) in the optimal solution. \medskip \noindent {\tt Slack} is the (primal) value of the row slack variable. \medskip \noindent {\tt Obj coef} is the objective coefficient of the column (structural variable). \begin{landscape} \begin{scriptsize} \begin{verbatim} GLPK 4.42 - SENSITIVITY ANALYSIS REPORT Page 1 Problem: PLAN Objective: VALUE = 296.2166065 (MINimum) No. Row name St Activity Slack Lower bound Activity Obj coef Obj value at Limiting Marginal Upper bound range range break point variable ------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------ 1 VALUE BS 296.21661 -296.21661 -Inf 299.25255 -1.00000 . MN . +Inf 296.21661 +Inf +Inf 2 YIELD NS 2000.00000 . 2000.00000 1995.06864 -Inf 296.28365 BIN3 -.01360 2000.00000 2014.03479 +Inf 296.02579 CU 3 FE NU 60.00000 . -Inf 55.89016 -Inf 306.77162 BIN4 -2.56823 60.00000 62.69978 2.56823 289.28294 BIN3 4 CU BS 83.96751 16.03249 -Inf 93.88467 -.30613 270.51157 MN . 100.00000 79.98213 .21474 314.24798 BIN5 5 MN NU 40.00000 . -Inf 34.42336 -Inf 299.25255 BIN4 -.54440 40.00000 41.68691 .54440 295.29825 BIN3 6 MG BS 19.96029 10.03971 -Inf 24.74427 -1.79618 260.36433 BIN1 . 30.00000 9.40292 .28757 301.95652 MN 7 AL NL 1500.00000 . 1500.00000 1485.78425 -.25199 292.63444 CU .25199 +Inf 1504.92126 +Inf 297.45669 BIN3 8 SI NL 250.00000 50.00000 250.00000 235.32871 -.48520 289.09812 CU .48520 300.00000 255.06073 +Inf 298.67206 BIN3 \end{verbatim} \end{scriptsize} \end{landscape} \begin{landscape} \begin{scriptsize} \begin{verbatim} GLPK 4.42 - SENSITIVITY ANALYSIS REPORT Page 2 Problem: PLAN Objective: VALUE = 296.2166065 (MINimum) No. Column name St Activity Obj coef Lower bound Activity Obj coef Obj value at Limiting Marginal Upper bound range range break point variable ------ ------------ -- ------------- ------------- ------------- ------------- ------------- ------------- ------------ 1 BIN1 NL . .03000 . -28.82475 -.22362 288.90594 BIN4 .25362 200.00000 33.88040 +Inf 304.80951 BIN4 2 BIN2 BS 665.34296 .08000 . 802.22222 .01722 254.44822 BIN1 . 2500.00000 313.43066 .08863 301.95652 MN 3 BIN3 BS 490.25271 .17000 400.00000 788.61314 .15982 291.22807 MN . 800.00000 -347.42857 .17948 300.86548 BIN5 4 BIN4 BS 424.18773 .12000 100.00000 710.52632 .10899 291.54745 MN . 700.00000 -256.15524 .14651 307.46010 BIN1 5 BIN5 NL . .15000 . 
-201.78739 .13544 293.27940 BIN3 .01456 1500.00000 58.79586 +Inf 297.07244 BIN3 6 ALUM BS 299.63899 .21000 . 358.26772 .18885 289.87879 AL . +Inf 112.40876 .22622 301.07527 MN 7 SILICON BS 120.57762 .38000 . 124.27093 .14828 268.27586 BIN5 . +Inf 85.54745 .46667 306.66667 MN End of report \end{verbatim} \end{scriptsize} \end{landscape} \noindent {\tt Marginal} is the reduced cost (dual activity) of the auxiliary variable (row) or structural variable (column). \medskip \noindent {\tt Lower bound} is the lower right-hand side (row) or lower bound (column). If the row or column has no lower bound, this field contains {\tt -Inf}. \medskip \noindent {\tt Upper bound} is the upper right-hand side (row) or upper bound (column). If the row or column has no upper bound, this field contains {\tt +Inf}. \bigskip \noindent {\it Sensitivity analysis of active bounds} \medskip \noindent The sensitivity analysis of active bounds is performed only for rows, which are active constraints, and only for non-basic columns, because inactive constraints and basic columns have no active bounds. For every auxiliary (row) or structural (column) non-basic variable the routine starts changing its active bound in both direction. The first of the two lines in the report corresponds to decreasing, and the second line corresponds to increasing of the active bound. Since the variable being analyzed is non-basic, its activity, which is equal to its active bound, also starts changing. This changing leads to changing of basic (auxiliary and structural) variables, which depend on the non-basic variable. The current basis remains primal feasible and therefore optimal while values of all basic variables are primal feasible, i.e. are within their bounds. Therefore, if some basic variable called the {\it limiting variable} reaches its (lower or upper) bound first, before any other basic variables, it thereby limits further changing of the non-basic variable, because otherwise the current basis would become primal infeasible. The point, at which this happens, is called the {\it break point}. Note that there are two break points: the lower break point, which corresponds to decreasing of the non-basic variable, and the upper break point, which corresponds to increasing of the non-basic variable. In the analysis report values of the non-basic variable (i.e. of its active bound) being analyzed at both lower and upper break points are printed in the field `{\tt Activity range}'. Corresponding values of the objective function are printed in the field `{\tt Obj value at break point}', and symbolic names of corresponding limiting basic variables are printed in the field `{\tt Limiting variable}'. If the active bound can decrease or/and increase unlimitedly, the field `{\tt Activity range}' contains {\tt -Inf} or/and {\tt +Inf}, resp. For example (see the example report above), row SI is a double-sided constraint, which is active on its lower bound (right-hand side), and its activity in the optimal solution being equal to the lower bound is 250. The activity range for this row is $[235.32871,255.06073]$. This means that the basis remains optimal while the lower bound is increasing up to 255.06073, and further increasing is limited by (structural) variable BIN3. If the lower bound reaches this upper break point, the objective value becomes equal to 298.67206. 
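In practice, a report like the one shown above can be produced with only a few library calls. The following fragment is a minimal sketch (not part of the GLPK distribution; the helper name and the output file name are arbitrary) that follows the calling conventions described at the beginning of this section:

\begin{verbatim}
/* Illustrative sketch only (not part of GLPK): the helper name and
   the output file name are arbitrary. */
#include <glpk.h>

int print_all_ranges(glp_prob *P)
{  /* the basis factorization must exist before the analysis */
   if (!glp_bf_exists(P))
      glp_factorize(P);
   /* len = 0 and list = NULL select all rows and columns;
      the flags parameter must be zero */
   return glp_print_ranges(P, 0, NULL, 0, "ranges.txt");
}
\end{verbatim}

\noindent
Passing a non-empty array {\it list} together with the corresponding {\it len} instead restricts the report to selected rows and columns.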
Note that if the basis does not change, the objective function depends on the non-basic variable linearly, and the per-unit change of the objective function is the reduced cost (marginal value) of the non-basic variable.

\bigskip

\noindent
{\it Sensitivity analysis of objective coefficients at non-basic variables}

\medskip

\noindent
The sensitivity analysis of the objective coefficient at a non-basic variable is quite simple, because in this case a change in the objective coefficient leads to an equivalent change in the reduced cost (marginal value).

For every auxiliary (row) or structural (column) non-basic variable the routine starts changing its objective coefficient in both directions. (Note that auxiliary variables are not included in the objective function and therefore always have zero objective coefficients.) The first of the two lines in the report corresponds to decreasing, and the second line to increasing of the objective coefficient.

This change affects the reduced cost of the non-basic variable being analyzed and does not affect the reduced costs of any other non-basic variables. The current basis remains dual feasible and therefore optimal while the reduced cost keeps its sign. Therefore, if the reduced cost reaches zero, it limits further changing of the objective coefficient (provided that the non-basic variable is non-fixed).

In the analysis report the minimal and maximal values of the objective coefficient, for which the basis remains optimal, are printed in the field `\verb|Obj coef range|'. If the objective coefficient can decrease and/or increase unlimitedly, this field contains {\tt -Inf} and/or {\tt +Inf}, resp.

For example (see the example report above), column BIN5 is non-basic having its lower bound active. Its objective coefficient is 0.15, and its reduced cost in the optimal solution is 0.01456. The column lower bound remains active while the column reduced cost remains non-negative; thus, the minimal value of the objective coefficient for which the current basis still remains optimal is $0.15-0.01456=0.13544$, which is indicated in the field `\verb|Obj coef range|'.

\bigskip

\noindent
{\it Sensitivity analysis of objective coefficients at basic variables}

\medskip

\noindent
To perform the sensitivity analysis, for every auxiliary (row) or structural (column) basic variable the routine starts changing its objective coefficient in both directions. (Note that auxiliary variables are not included in the objective function and therefore always have zero objective coefficients.) The first of the two lines in the report corresponds to decreasing, and the second line to increasing of the objective coefficient.

This change leads to changes in the reduced costs of the non-basic variables. The current basis remains dual feasible and therefore optimal while the reduced costs of all non-basic variables (except fixed variables) keep their signs. Therefore, if the reduced cost of some non-basic non-fixed variable, called the {\it limiting variable}, reaches zero first, before the reduced cost of any other non-basic non-fixed variable, it thereby limits further changing of the objective coefficient, because otherwise the current basis would become dual infeasible (non-optimal). The point at which this happens is called the {\it break point}. Note that there are two break points: the lower break point, which corresponds to decreasing, and the upper break point, which corresponds to increasing of the objective coefficient.
Let the objective coefficient reach its limit value and continue changing a bit further in the same direction, so that the current basis becomes dual infeasible (non-optimal). Then the reduced cost of the non-basic limiting variable becomes ``a bit'' dual infeasible, which forces the limiting variable to enter the basis, replacing some basic variable that leaves the basis in order to keep it primal feasible. It should be understood that if we change the current basis in this way exactly at the break point, both the current and adjacent bases will be optimal with the same objective value, because at the break point the limiting variable has zero reduced cost. On the other hand, in the adjacent basis the value of the limiting variable changes, because there it becomes basic, which leads to a change in the value of the basic variable being analyzed. Note that on determining the adjacent basis the bounds of the analyzed basic variable are ignored as if it were a free (unbounded) variable, so it cannot leave the current basis.

In the analysis report the lower and upper limits of the objective coefficient at the basic variable being analyzed, for which the basis remains optimal, are printed in the field `{\tt Obj coef range}'. Corresponding values of the objective function at both lower and upper break points are printed in the field `{\tt Obj value at break point}', symbolic names of the corresponding non-basic limiting variables are printed in the field `{\tt Limiting variable}', and the values which the basic variable would take on in the adjacent bases (as explained above) are printed in the field `{\tt Activity range}'. If the objective coefficient can increase and/or decrease unlimitedly, the field `{\tt Obj coef range}' contains {\tt -Inf} and/or {\tt +Inf}, resp. It may also happen that no dual feasible adjacent basis exists (i.e. on entering the basis the limiting variable can increase or decrease unlimitedly), in which case the field `{\tt Activity range}' contains {\tt -Inf} and/or {\tt +Inf}.

\newpage

For example (see the example report above), structural variable (column) BIN3 is basic, its optimal value is 490.25271, and its objective coefficient is 0.17. The objective coefficient range for this column is $[0.15982,0.17948]$. This means that the basis remains optimal while the objective coefficient is decreasing down to 0.15982, and further decreasing is limited by (auxiliary) variable MN. If we make the objective coefficient a bit less than 0.15982, the limiting variable MN will enter the basis, and in that adjacent basis the structural variable BIN3 will take on the new optimal value 788.61314. At the lower break point, where the objective coefficient is exactly 0.15982, the objective function takes on the value 291.22807 in both the current and adjacent bases.

Note that if the basis does not change, the objective function depends on the objective coefficient at the basic variable linearly, and the per-unit change of the objective function is the value of the basic variable.

%* eof *%
{ "alphanum_fraction": 0.7086139404, "avg_line_length": 34.1577946768, "ext": "tex", "hexsha": "68e7958ffba33bf6c8df241dda57ba096408c169", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2022-01-04T13:06:21.000Z", "max_forks_repo_forks_event_min_datetime": "2017-06-07T23:51:09.000Z", "max_forks_repo_head_hexsha": "32da9eab253cb88fc1882e59026e8b5b40900a25", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "hectormartinez/rougexstem", "max_forks_repo_path": "taln2016/icsisumm-primary-sys34_v1/solver/glpk-4.43/doc/glpk03.tex", "max_issues_count": 23, "max_issues_repo_head_hexsha": "32da9eab253cb88fc1882e59026e8b5b40900a25", "max_issues_repo_issues_event_max_datetime": "2021-11-03T16:43:39.000Z", "max_issues_repo_issues_event_min_datetime": "2017-05-08T15:02:39.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "hectormartinez/rougexstem", "max_issues_repo_path": "taln2016/icsisumm-primary-sys34_v1/solver/glpk-4.43/doc/glpk03.tex", "max_line_length": 120, "max_stars_count": 22, "max_stars_repo_head_hexsha": "32da9eab253cb88fc1882e59026e8b5b40900a25", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "hectormartinez/rougexstem", "max_stars_repo_path": "taln2016/icsisumm-primary-sys34_v1/solver/glpk-4.43/doc/glpk03.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-23T09:14:41.000Z", "max_stars_repo_stars_event_min_datetime": "2016-10-05T12:19:01.000Z", "num_tokens": 14133, "size": 53901 }
%\subsection{WMProxy Client API Description}
The WMProxy client API supplies client applications with a set of interfaces to the job submission and control services made available by the gLite WMS through a web-service-based interface. The API provides a corresponding method for each operation published in the WSDL description of the WMProxy Service~\cite{wmproxy-wsdl}.

The request types supported by the WMProxy Service are:
\begin{itemize}
\item Job: a simple application
\item DAG: a directed acyclic graph of dependent jobs
\item Collection: a set of independent jobs
\end{itemize}
Jobs in turn can be batch, interactive, MPI-based, checkpointable, partitionable and parametric. The specification of the JDL for describing these request types is available at~\cite{JDL}.

Besides request submission, the WMProxy also exposes additional functionality for request management and control, such as cancellation, job file perusal and output retrieval. Request status follow-up can instead be achieved through the functionality exposed by the Logging \& Bookkeeping (LB) service~\cite{LB}.

The documentation describing the WMProxy Client API, which provides C++, Java and Python bindings, can be found at \url{http://egee-jra1-wm.mi.infn.it/egee-jra1-wm/glite-wmproxy-api-index.shtml}. Pointers to usage examples are provided on the same web page.
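To give a flavour of such request descriptions, the following is a minimal, purely illustrative JDL sketch for a simple job; the executable and sandbox entries are placeholders, and the authoritative list of attributes is given in the JDL specification~\cite{JDL}:

\begin{verbatim}
[
  Type = "Job";
  JobType = "Normal";
  Executable = "/bin/hostname";
  StdOutput = "std.out";
  StdError = "std.err";
  OutputSandbox = {"std.out", "std.err"};
]
\end{verbatim}

\noindent
A description of this kind is what the submission operations of the service expect as input, independently of the language binding used.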
{ "alphanum_fraction": 0.8012958963, "avg_line_length": 49.6071428571, "ext": "tex", "hexsha": "e4005fd16a4e8db4e0b20e3e4c4ae812f670c69b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5b2adda72ba13cf2a85ec488894c2024e155a4b5", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "italiangrid/wms", "max_forks_repo_path": "users-guide/WMPROXY/api.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5b2adda72ba13cf2a85ec488894c2024e155a4b5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "italiangrid/wms", "max_issues_repo_path": "users-guide/WMPROXY/api.tex", "max_line_length": 96, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5b2adda72ba13cf2a85ec488894c2024e155a4b5", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "italiangrid/wms", "max_stars_repo_path": "users-guide/WMPROXY/api.tex", "max_stars_repo_stars_event_max_datetime": "2019-01-18T02:19:18.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-18T02:19:18.000Z", "num_tokens": 310, "size": 1389 }
\clearpage \section{Brain Non-Myeloid FACS} \subsection{All Cells, labeled by \emph{Cell Ontology Class}} \subsubsection{Table of cell counts in All Cells, per \emph{Cell Ontology Class}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Cell Ontology Class}& Number of cells \\ \midrule Bergmann glial cell & 40 \\ astrocyte & 432 \\ brain pericyte & 156 \\ endothelial cell & 715 \\ neuron & 281 \\ oligodendrocyte & 1574 \\ oligodendrocyte precursor cell & 203 \\ \bottomrule \end{tabular} \caption{Cell counts for All Cells, per \emph{Cell Ontology Class}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Cell Ontology Class} labels in All Cells of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Cell Ontology Class} (and letter abbreviation) to colors} \end{figure} \clearpage \subsubsection{Violinplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_violinplot_1-of-3"}.pdf} \caption{ Violinplot (1 of 3) showing gene expression enrichment in \emph{Cell Ontology Class} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron, F: oligodendrocyte, G: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Violinplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_violinplot_2-of-3"}.pdf} \caption{ Violinplot (2 of 3) showing gene expression enrichment in \emph{Cell Ontology Class} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron, F: oligodendrocyte, G: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Violinplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_violinplot_3-of-3"}.pdf} \caption{ Violinplot (3 of 3) showing gene expression enrichment in \emph{Cell Ontology Class} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron, F: oligodendrocyte, G: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Dotplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_dotplot_1-of-3"}.pdf} \caption{ Dotplot (1 of 3) showing gene expression enrichment in \emph{Cell Ontology Class} labels in All Cells of Brain Non-Myeloid FACS. 
A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron, F: oligodendrocyte, G: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Dotplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_dotplot_2-of-3"}.pdf} \caption{ Dotplot (2 of 3) showing gene expression enrichment in \emph{Cell Ontology Class} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron, F: oligodendrocyte, G: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Dotplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cell_ontology_class_dotplot_3-of-3"}.pdf} \caption{ Dotplot (3 of 3) showing gene expression enrichment in \emph{Cell Ontology Class} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron, F: oligodendrocyte, G: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsection{All Cells, labeled by \emph{Cluster IDs}} \subsubsection{Table of cell counts in All Cells, per \emph{Cluster IDs}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Cluster IDs}& Number of cells \\ \midrule 0 & 989 \\ 1 & 520 \\ 2 & 472 \\ 3 & 398 \\ 4 & 203 \\ 5 & 195 \\ 6 & 194 \\ 7 & 187 \\ 8 & 156 \\ 9 & 87 \\ \bottomrule \end{tabular} \caption{Cell counts for All Cells, per \emph{Cluster IDs}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Cluster IDs} labels in All Cells of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Cluster IDs} to colors} \end{figure} \clearpage \subsubsection{Violinplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_violinplot_1-of-3"}.pdf} \caption{ Violinplot (1 of 3) showing gene expression enrichment in \emph{Cluster IDs} labels in All Cells of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsubsection{Violinplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_violinplot_2-of-3"}.pdf} \caption{ Violinplot (2 of 3) showing gene expression enrichment in \emph{Cluster IDs} labels in All Cells of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsubsection{Violinplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_violinplot_3-of-3"}.pdf} \caption{ Violinplot (3 of 3) showing gene expression enrichment in \emph{Cluster IDs} labels in All Cells of Brain Non-Myeloid FACS. 
} \end{figure} \clearpage \subsubsection{Dotplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_dotplot_1-of-3"}.pdf} \caption{ Dotplot (1 of 3) showing gene expression enrichment in \emph{Cluster IDs} labels in All Cells of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsubsection{Dotplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_dotplot_2-of-3"}.pdf} \caption{ Dotplot (2 of 3) showing gene expression enrichment in \emph{Cluster IDs} labels in All Cells of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsubsection{Dotplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_cluster-ids_dotplot_3-of-3"}.pdf} \caption{ Dotplot (3 of 3) showing gene expression enrichment in \emph{Cluster IDs} labels in All Cells of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsection{All Cells, labeled by \emph{Free Annotation}} \subsubsection{Table of cell counts in All Cells, per \emph{Free Annotation}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Free Annotation}& Number of cells \\ \midrule Bergmann glial cell & 40 \\ astrocyte & 432 \\ brain pericyte & 156 \\ endothelial cell & 715 \\ neuron: excitatory neurons and some neuronal stem cells & 194 \\ neuron: inhibitory neurons & 87 \\ oligodendrocyte & 1574 \\ oligodendrocyte precursor cell & 203 \\ \bottomrule \end{tabular} \caption{Cell counts for All Cells, per \emph{Free Annotation}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Free Annotation} labels in All Cells of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Free Annotation} (and letter abbreviation) to colors} \end{figure} \clearpage \subsubsection{Violinplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_violinplot_1-of-3"}.pdf} \caption{ Violinplot (1 of 3) showing gene expression enrichment in \emph{Free Annotation} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron: excitatory neurons and some neuronal stem cells, F: neuron: inhibitory neurons, G: oligodendrocyte, H: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Violinplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_violinplot_2-of-3"}.pdf} \caption{ Violinplot (2 of 3) showing gene expression enrichment in \emph{Free Annotation} labels in All Cells of Brain Non-Myeloid FACS. 
A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron: excitatory neurons and some neuronal stem cells, F: neuron: inhibitory neurons, G: oligodendrocyte, H: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Violinplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_violinplot_3-of-3"}.pdf} \caption{ Violinplot (3 of 3) showing gene expression enrichment in \emph{Free Annotation} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron: excitatory neurons and some neuronal stem cells, F: neuron: inhibitory neurons, G: oligodendrocyte, H: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Dotplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_dotplot_1-of-3"}.pdf} \caption{ Dotplot (1 of 3) showing gene expression enrichment in \emph{Free Annotation} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron: excitatory neurons and some neuronal stem cells, F: neuron: inhibitory neurons, G: oligodendrocyte, H: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Dotplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_dotplot_2-of-3"}.pdf} \caption{ Dotplot (2 of 3) showing gene expression enrichment in \emph{Free Annotation} labels in All Cells of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron: excitatory neurons and some neuronal stem cells, F: neuron: inhibitory neurons, G: oligodendrocyte, H: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsubsection{Dotplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_free_annotation_dotplot_3-of-3"}.pdf} \caption{ Dotplot (3 of 3) showing gene expression enrichment in \emph{Free Annotation} labels in All Cells of Brain Non-Myeloid FACS. 
A: astrocyte, B: Bergmann glial cell, C: brain pericyte, D: endothelial cell, E: neuron: excitatory neurons and some neuronal stem cells, F: neuron: inhibitory neurons, G: oligodendrocyte, H: oligodendrocyte precursor cell.} \end{figure} \clearpage \subsection{All Cells, labeled by \emph{Subtissue}} \subsubsection{Table of cell counts in All Cells, per \emph{Subtissue}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Subtissue}& Number of cells \\ \midrule Cerebellum & 553 \\ Cortex & 1149 \\ Hippocampus & 976 \\ Striatum & 723 \\ \bottomrule \end{tabular} \caption{Cell counts for All Cells, per \emph{Subtissue}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Subtissue} labels in All Cells of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Subtissue} (and letter abbreviation) to colors} \end{figure} \clearpage \subsubsection{Violinplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_violinplot_1-of-3"}.pdf} \caption{ Violinplot (1 of 3) showing gene expression enrichment in \emph{Subtissue} labels in All Cells of Brain Non-Myeloid FACS. A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure} \clearpage \subsubsection{Violinplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_violinplot_2-of-3"}.pdf} \caption{ Violinplot (2 of 3) showing gene expression enrichment in \emph{Subtissue} labels in All Cells of Brain Non-Myeloid FACS. A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure} \clearpage \subsubsection{Violinplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_violinplot_3-of-3"}.pdf} \caption{ Violinplot (3 of 3) showing gene expression enrichment in \emph{Subtissue} labels in All Cells of Brain Non-Myeloid FACS. A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure} \clearpage \subsubsection{Dotplot (1 of 3, \emph{Aldh1l1}--\emph{Gjc2})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_dotplot_1-of-3"}.pdf} \caption{ Dotplot (1 of 3) showing gene expression enrichment in \emph{Subtissue} labels in All Cells of Brain Non-Myeloid FACS. A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure} \clearpage \subsubsection{Dotplot (2 of 3, \emph{Ly6c1}--\emph{Pecam1})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_dotplot_2-of-3"}.pdf} \caption{ Dotplot (2 of 3) showing gene expression enrichment in \emph{Subtissue} labels in All Cells of Brain Non-Myeloid FACS. 
A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure} \clearpage \subsubsection{Dotplot (3 of 3, \emph{Rbfox3}--\emph{Susd5})} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/allcells_subtissue_dotplot_3-of-3"}.pdf} \caption{ Dotplot (3 of 3) showing gene expression enrichment in \emph{Subtissue} labels in All Cells of Brain Non-Myeloid FACS. A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure} \clearpage \subsection{Endothelial Cells, labeled by \emph{Cluster IDs}} \subsubsection{Table of cell counts in Endothelial Cells, per \emph{Cluster IDs}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Cluster IDs}& Number of cells \\ \midrule 1 & 520 \\ 5 & 195 \\ \bottomrule \end{tabular} \caption{Cell counts for Endothelial Cells, per \emph{Cluster IDs}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/EndothelialCells_cluster-ids_tsneplot"}.pdf} \caption{Subclustering of endothelial cells grouped by cluster ID. } \end{figure} \clearpage \subsection{Endothelial Cells, labeled by \emph{Function And Vessel Type}} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/EndothelialCells_function_and_vessel_type_tsneplot"}.pdf} \caption{Subclustering of endothelial cells colored by function and vessel type. } \end{figure} \clearpage \subsubsection{Dotplot} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/EndothelialCells_function_and_vessel_type_dotplot"}.pdf} \caption{Key defining genes for function and vessel type. } \end{figure} \clearpage \subsubsection{Featureplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/EndothelialCells_function_and_vessel_type_featureplot"}.pdf} \caption{Key defining genes for function and vessel type. } \end{figure} \clearpage \subsection{Endothelial Cells, labeled by \emph{Subtissue}} \subsubsection{Table of cell counts in Endothelial Cells, per \emph{Subtissue}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Subtissue}& Number of cells \\ \midrule Cerebellum & 188 \\ Cortex & 142 \\ Hippocampus & 273 \\ Striatum & 112 \\ \bottomrule \end{tabular} \caption{Cell counts for Endothelial Cells, per \emph{Subtissue}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/EndothelialCells_subtissue_tsneplot"}.pdf} \caption{Subclustering of endothelial cells colored by brain region. } \end{figure} \clearpage \subsubsection{Violinplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/EndothelialCells_subtissue_violinplot_0-of-0"}.pdf} \caption{Key defining genes for the Inflamed (Venous) and Notch (Arterial) populations. 
} \end{figure} \clearpage \subsection{Endothelial Cells, labeled by \emph{Venous Arterial}} \clearpage \subsubsection{Featureplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/EndothelialCells_venous-arterial_featureplot"}.pdf} \caption{Key defining genes for the Inflamed (Venous) and Notch (Arterial) populations. } \end{figure} \clearpage \subsection{Subset A, highlighted from All Cells tSNE} \subsubsection{t-SNE plot (Allcells)} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA_highlighted_tsneplot_allcells"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA_highlighted_tsneplot_allcells_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Highlighted} labels in Subset A of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Highlighted} (and letter abbreviation) to colors} \end{figure} \clearpage \subsection{Subset A (Astrocytes), labeled by \emph{Cell Ontology Class}} \subsubsection{Table of cell counts in Subset A (Astrocytes), per \emph{Cell Ontology Class}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Cell Ontology Class}& Number of cells \\ \midrule Bergmann glial cell & 40 \\ astrocyte & 432 \\ \bottomrule \end{tabular} \caption{Cell counts for Subset A (Astrocytes), per \emph{Cell Ontology Class}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cell_ontology_class_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cell_ontology_class_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Cell Ontology Class} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Cell Ontology Class} (and letter abbreviation) to colors} \end{figure} \clearpage \subsubsection{Violinplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cell_ontology_class_violinplot_1-of-1"}.pdf} \caption{ Violinplot showing gene expression enrichment in \emph{Cell Ontology Class} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell.} \end{figure} \clearpage \subsubsection{Dotplot} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cell_ontology_class_dotplot_1-of-1"}.pdf} \caption{ Dotplot showing gene expression enrichment in \emph{Cell Ontology Class} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. 
A: astrocyte, B: Bergmann glial cell.} \end{figure} \clearpage \subsection{Subset A (Astrocytes), labeled by \emph{Cluster IDs}} \subsubsection{Table of cell counts in Subset A (Astrocytes), per \emph{Cluster IDs}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Cluster IDs}& Number of cells \\ \midrule 2 & 472 \\ \bottomrule \end{tabular} \caption{Cell counts for Subset A (Astrocytes), per \emph{Cluster IDs}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cluster-ids_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cluster-ids_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Cluster IDs} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Cluster IDs} to colors} \end{figure} \clearpage \subsubsection{Violinplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cluster-ids_violinplot_1-of-1"}.pdf} \caption{ Violinplot showing gene expression enrichment in \emph{Cluster IDs} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsubsection{Dotplot} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_cluster-ids_dotplot_1-of-1"}.pdf} \caption{ Dotplot showing gene expression enrichment in \emph{Cluster IDs} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsection{Subset A (Astrocytes), labeled by \emph{Free Annotation}} \subsubsection{Table of cell counts in Subset A (Astrocytes), per \emph{Free Annotation}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Free Annotation}& Number of cells \\ \midrule Bergmann glial cell & 40 \\ astrocyte & 432 \\ \bottomrule \end{tabular} \caption{Cell counts for Subset A (Astrocytes), per \emph{Free Annotation}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_free_annotation_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_free_annotation_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Free Annotation} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{Free Annotation} (and letter abbreviation) to colors} \end{figure} \clearpage \subsubsection{Violinplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_free_annotation_violinplot_1-of-1"}.pdf} \caption{ Violinplot showing gene expression enrichment in \emph{Free Annotation} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. 
A: astrocyte, B: Bergmann glial cell.} \end{figure} \clearpage \subsubsection{Dotplot} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_free_annotation_dotplot_1-of-1"}.pdf} \caption{ Dotplot showing gene expression enrichment in \emph{Free Annotation} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. A: astrocyte, B: Bergmann glial cell.} \end{figure} \clearpage \subsection{Subset A (Astrocytes), labeled by \emph{SubsetA-Astrocytes Cluster IDs}} \subsubsection{Table of cell counts in Subset A (Astrocytes), per \emph{SubsetA-Astrocytes Cluster IDs}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{SubsetA-Astrocytes Cluster IDs}& Number of cells \\ \midrule 0 & 147 \\ 1 & 124 \\ 2 & 117 \\ 3 & 44 \\ 4 & 40 \\ \bottomrule \end{tabular} \caption{Cell counts for Subset A (Astrocytes), per \emph{SubsetA-Astrocytes Cluster IDs}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subsetA_cluster-ids_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subsetA_cluster-ids_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{SubsetA-Astrocytes Cluster IDs} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. Bottom, legend mapping \emph{SubsetA-Astrocytes Cluster IDs} (and letter abbreviation) to colors} \end{figure} \clearpage \subsubsection{Violinplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subsetA_cluster-ids_violinplot_1-of-1"}.pdf} \caption{ Violinplot showing gene expression enrichment in \emph{SubsetA-Astrocytes Cluster IDs} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsubsection{Dotplot} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subsetA_cluster-ids_dotplot_1-of-1"}.pdf} \caption{ Dotplot showing gene expression enrichment in \emph{SubsetA-Astrocytes Cluster IDs} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. } \end{figure} \clearpage \subsection{Subset A (Astrocytes), labeled by \emph{Subtissue}} \subsubsection{Table of cell counts in Subset A (Astrocytes), per \emph{Subtissue}}\begin{table}[h] \centering \label{my-label} \begin{tabular}{@{}ll@{}} \toprule \emph{Subtissue}& Number of cells \\ \midrule Cerebellum & 47 \\ Cortex & 258 \\ Hippocampus & 90 \\ Striatum & 77 \\ \bottomrule \end{tabular} \caption{Cell counts for Subset A (Astrocytes), per \emph{Subtissue}.} \end{table} \clearpage \subsubsection{t-SNE plot} \begin{figure}[h] \centering \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subtissue_tsneplot"}.pdf} \includegraphics[height=.35\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subtissue_tsneplot_legend"}.pdf} \caption{Top, t-Distributed stochastic neighbor embedding (tSNE) plot \emph{Subtissue} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. 
Bottom, legend mapping \emph{Subtissue} (and letter abbreviation) to colors} \end{figure} \clearpage \subsubsection{Violinplot} \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subtissue_violinplot_1-of-1"}.pdf} \caption{ Violinplot showing gene expression enrichment in \emph{Subtissue} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure} \clearpage \subsubsection{Dotplot} \begin{figure}[h] \centering \includegraphics[angle=90, height=.6\textheight]{{"../30_tissue_supplement_figures/Brain_Non-Myeloid/facs/SubsetA-Astrocytes_subtissue_dotplot_1-of-1"}.pdf} \caption{ Dotplot showing gene expression enrichment in \emph{Subtissue} labels in Subset A (Astrocytes) of Brain Non-Myeloid FACS. A: Cerebellum, B: Cortex, C: Hippocampus, D: Striatum.} \end{figure}
{ "alphanum_fraction": 0.7733267064, "avg_line_length": 36.8948655257, "ext": "tex", "hexsha": "06ae2db54bf952b643919cf69db7ac39770b3cca", "lang": "TeX", "max_forks_count": 81, "max_forks_repo_forks_event_max_datetime": "2022-01-25T07:04:22.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-07T00:03:13.000Z", "max_forks_repo_head_hexsha": "c1a7b7854b7b9a191141c6f2c4d89179ec41603b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mjoppich/tabula-muris", "max_forks_repo_path": "31_tissue_supplement_tex/Brain_Non-Myeloid_facs_auto_generated.tex", "max_issues_count": 180, "max_issues_repo_head_hexsha": "c1a7b7854b7b9a191141c6f2c4d89179ec41603b", "max_issues_repo_issues_event_max_datetime": "2022-02-25T21:13:57.000Z", "max_issues_repo_issues_event_min_datetime": "2018-02-07T22:23:38.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mjoppich/tabula-muris", "max_issues_repo_path": "31_tissue_supplement_tex/Brain_Non-Myeloid_facs_auto_generated.tex", "max_line_length": 363, "max_stars_count": 147, "max_stars_repo_head_hexsha": "c1a7b7854b7b9a191141c6f2c4d89179ec41603b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mjoppich/tabula-muris", "max_stars_repo_path": "31_tissue_supplement_tex/Brain_Non-Myeloid_facs_auto_generated.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-11T15:33:10.000Z", "max_stars_repo_stars_event_min_datetime": "2018-02-21T21:20:19.000Z", "num_tokens": 10074, "size": 30180 }
\chapter{The Stability of the FLUTE Electron Gun and Proposed Solution}
This chapter deals with the electron gun of \gls{flute} and its power supply. Based on the fundamental equations of the electron gun's microwave cavity, the dependence of the electron energy on the \gls{rf} supply is then derived, which motivates why the \gls{rf} supply should be stable. Finally, a solution to stabilize the \gls{rf} is proposed.

The electron gun is powered by a \SI{50}{\MW} klystron, a high-power vacuum tube \gls{rf} amplifier. The input signal for the klystron is a \SI{2.99855}{\GHz} ($\approx\SI{3}{\GHz}$) harmonic oscillator pre-amplified to \SI{200}{\watt}. The supply input for the klystron is generated by a pulse forming network and a transformer. The pulse forming network mainly consists of capacitors to store electrical energy and is charged with a constant current source. The connection of these devices is shown in \autoref{fig:fluteEgun-rfschematic}.

A \SI{5}{\hertz} master clock (``trigger'') is used to switch on the output of the pulse forming network to the klystron and the oscillator every \SI{0.2}{\second} for \SI{4.5}{\micro\second}. During this time the laser can also be triggered, causing photoemission of an electron bunch from the cathode. Even without the laser being active, however, powering the electron gun with the klystron generates an electron beam through field emission of electrons. This undesired effect is called \textit{dark current}.

The current source that charges the pulse forming network is powered by mains voltage. This makes it susceptible to noise on the mains and also causes slowly time-varying drifts of the klystron power, because the pulse forming network is triggered at varying phases relative to the \SI{50}{\hertz} mains. This issue has been remedied in \cite{Nasse2019} by adding synchronization to the mains phase.

\begin{figure}[tbh]
\centering
\includegraphics[]{chap/StabilityOfTheElectronGun/img/fluteSchem.tikz}
\caption[FLUTE RF schematic]{Schematic of the \gls{flute} \gls{rf} system}
\label{fig:fluteEgun-rfschematic}
\end{figure}

\section{The Electron Gun}
The electron gun of \gls{flute} was originally designed and operated in CTF II at \gls{cern}. \cite{Schuh2014} It is of the ``BNL type'' (see \cite{Batchelor1988}, based on the original design by \cite{fraser1987}) and was developed at \gls{cern}. \cite{Bossart:288412} The gun is made up of a 2.5-cell microwave cavity with a removable copper cathode embedded in the cone-shaped back at the end of the half cell (see \autoref{fig:fluteEgun-gunDraw}). Cooling is achieved with a two-stage water cooling system: a temperature control unit cools the gun via a short water circuit and is itself coupled through a heat exchanger to a larger outside climate unit.

Applying \gls{rf} power to the cavity through the hole-coupled waveguide causes a standing wave inside the cavity. Because of the cavity's dimensions, only the fundamental mode $\text{TM}_{010}$ is excited, for which the relation between the resonance frequency $f_{010}$ and the radius $a$ of the cavity is given by
\begin{equation}
2\pi f_{010}=\frac{2.405 \cdot c}{a}.
\end{equation}
For the $\text{TM}_{010}$ mode there is only an electric field in the $z$ direction, i.e. along the beam axis. This $E_z(z)$ field is used to accelerate the electrons. For the \gls{flute} gun, $E_z(z)$ has been measured in \cite{Bossart:clic}, see \autoref{fig:fluteEgun-Ezplot}. These measurements are also verified in \cite{Schuh2014}.
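As an illustrative order-of-magnitude check (assuming an ideal pillbox cavity, i.e. neglecting the iris apertures, the cathode recess and the coupling hole), the design frequency of \SI{2.99855}{\GHz} corresponds to a cell radius of roughly
\begin{equation*}
a = \frac{2.405 \cdot c}{2\pi f_{010}} \approx \frac{2.405 \cdot \SI{3e8}{\metre\per\second}}{2\pi \cdot \SI{3e9}{\per\second}} \approx \SI{38}{\milli\metre}.
\end{equation*}
The real cavity geometry deviates from this idealization, which is one of the reasons why the resonance frequency has to be fine-tuned as described in the following.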
To tune the resonance frequency $f_{010}$, which depends on the cavity's radius $a$, to the target design frequency of \SI{2.99855}{\GHz}, two methods are used. The cavity is equipped with piston tuners that allow the geometry of each cell to be changed slightly. In addition, because the copper body expands and contracts with temperature, the set-point of the water cooling system can be changed to alter the cavity geometry.

\begin{figure}[tbh]
\centering
\includegraphics[width=\textwidth,height=0.7\textwidth]{chap/StabilityOfTheElectronGun/img/Ez.tikz}
\caption[Plot of the electric field $E_z(z)$ in the electron gun]{Plot of the electric field in $z$ direction over the length of the gun cavity (redrawn from \cite{Bossart:clic} using geometrical measurements from \cite{Hoeninger2014})}
\label{fig:fluteEgun-Ezplot}
\end{figure}

\begin{figure}[tbh]
\centering
\includegraphics[]{chap/StabilityOfTheElectronGun/img/gun.tikz}
\caption[Cross-section of the electron gun]{Cross section drawing of the electron gun together with the solenoid (which is used for focusing the electron beam) showing the photo-cathode (red) and the electron and laser beam trajectories \\(modified version from \cite{Bossart:clic} and \cite{Bossart:288412})}
\label{fig:fluteEgun-gunDraw}
\end{figure}

\section{Relation between RF power and Electron Energy}
A standing wave inside a \gls{rf} cavity for the $\text{TM}_{010}$ mode can be written as
\begin{equation}
E_z(z,t) = E(z)\,\cos(\omega t + \phi).
\end{equation}
The time $t$ has to be expressed in terms of the electron velocity $v(z)$ as
\begin{equation}
t=t(z)=\int_{0}^{z} \frac{\d{z}}{v(z)},
\end{equation}
which is the arrival time of the electron at location $z$. When moving through an accelerating gap of length $L$ inside a cavity, an electron with charge $q$ gains the energy
\begin{equation}
\Delta W = q \int_{-L/2}^{L/2} E(z)\,\cos(\omega t(z) + \phi) \d{z}.
\end{equation}
This can be rewritten as
\begin{equation}\label{eq:W}
\Delta W = q V_0 T \cos(\phi)
\end{equation}
using the axial \gls{rf} voltage
\begin{equation}
V_0 := \int_{-L/2}^{L/2} E(z) \d{z}
\end{equation}
and the transit time factor $T$. \cite[p.~32]{Wangler2008}

With the \textit{shunt impedance} $R_s$, the axial \gls{rf} voltage can be related to the \gls{rf} power that needs to be fed into the cavity to compensate for the losses in the imperfectly conducting walls and the power lost to the electron beam. \cite{burtRF} The shunt impedance is defined as
\begin{equation}\label{eq:rs}
R_s = \frac{V^2_0}{P_{\text{RF}}}.
\end{equation}
\autoref{eq:W} and \autoref{eq:rs} show that the \gls{rf} supply has a great impact on the electron energy, so it needs to be stable. Additionally, there is the so-called \textit{R over Q}, defined as
\begin{equation}
\frac{R}{Q} = \frac{(V_0T)^2}{\omega U}\qquad \text{with: }R=R_s T^2\;\text{(effective shunt impedance)}
\end{equation}
using the total stored electromagnetic energy $U$ and the quality factor $Q=\nicefrac{\omega U}{P_{\text{RF}}}$. This shows that the gained energy also depends on the properties of the cavity.

\section{Current RF Stability and Proposed Solution}
To get an overview of the current stability of the cavity \gls{rf} power, the deviation of the cavity power process value (\gls{epics}\footnote{See \autoref{sec:inputs}} \gls{pv} name F:RF:LLRF:01:GunCav1:Power:Out Value $=:P_\text{cavity}$) from its mean is plotted over one hour, see \autoref{fig:fluteEgun-deviation}.
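To get a rough feeling for how such power fluctuations translate into fluctuations of the energy gain, a simple estimate can be made (assuming that the phase $\phi$, the transit time factor $T$ and the shunt impedance $R_s$ remain constant): combining \autoref{eq:W} and \autoref{eq:rs} yields $\Delta W = q\,T\cos(\phi)\sqrt{R_s\,P_{\text{RF}}}$, and therefore, to first order,
\begin{equation*}
\frac{\delta(\Delta W)}{\Delta W} \approx \frac{1}{2}\,\frac{\delta P_{\text{RF}}}{P_{\text{RF}}},
\end{equation*}
i.e. a relative error in the cavity \gls{rf} power appears as a relative error of half that size in the electron energy gain.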
\begin{figure}[tb] \centering \includegraphics[width=\textwidth,height=0.5\textwidth]{chap/StabilityOfTheElectronGun/img/assesment.tikz} \caption{Deviation of the cavity \gls{rf} power over the course of one hour} \label{fig:fluteEgun-deviation} \end{figure}
\begin{figure}[tb] \centering \includegraphics[width=\textwidth,height=0.5\textwidth]{chap/StabilityOfTheElectronGun/img/periodo.tikz} \caption{Periodogram of \autoref{fig:fluteEgun-deviation}; calculated using the Welch method} \label{fig:fluteEgun-periodo} \end{figure}
With the metrics defined in \autoref{sec:metrics}, evaluated over a time interval of $T=\SI{5}{\hour}$, the values $\op{\%STD}_{P_\text{cavity}}=\SI{0.15}{\percent}$, $\op{MSE}_{P_\text{cavity}}=38.54$ and $\op{MPN}_{P_\text{cavity}}=0.01677$ are obtained. From simulation data and approximate analytical calculations it is known that for \gls{thz} generation using the chicane at the end of the compressor (see \autoref{sec:intro}) and for generating chirped \gls{thz} pulses for \gls{cstart}, a stability of $\op{\%STD}_{P_\text{cavity}}=\SI{0.10}{\percent}$ would suffice. For time-resolved \gls{thz} experiments, a higher stability of $\op{\%STD}_{P_\text{cavity}}=\SI{0.01}{\percent}$ is necessary. \cite{Nigel2021} As this is a very demanding task, the goal at the moment is \begin{equation}\label{eq:goalstd} \op{\%STD}_{P_\text{cavity}} \overset{!}{\in} [\SI{0.01}{\percent},\,\SI{0.10}{\percent}]. \end{equation}
From the time plot in \autoref{fig:fluteEgun-deviation} and the periodogram in \autoref{fig:fluteEgun-periodo} it becomes clear that there is random white noise, but also a periodic part and a slow drift in the signal. While the random fluctuations cannot be counteracted by any means of the control system developed in this thesis, it is possible to compensate for the slower disturbances whose effects span several pulses. To get more insight into these slowly changing disturbances, the time signals of several sensors available in \gls{epics} are compared to the cavity \gls{rf} power. This shows similar trends in both the cavity \gls{rf} power $P_\text{cavity}$ and the electron gun temperature $\vartheta_\text{gun}$. To analyze them in more detail, both signals are normalized to zero mean and unit standard deviation, as they have different units and would otherwise be difficult to compare (see \autoref{fig:fluteEgun-corrTime}).
\begin{figure}[tb] \centering \includegraphics[width=\textwidth,height=0.5\textwidth]{chap/StabilityOfTheElectronGun/img/corrTime.tikz} \caption[Normalized cavity power and gun temperature]{Cavity \gls{rf} power and electron gun temperature in a normalized plot} \label{fig:fluteEgun-corrTime} \end{figure}
To quantify their relation, the normalized cross covariance is used. It is calculated from \autoref{eq:crosscovariance} with the normalized cavity \gls{rf} power and the normalized electron gun temperature, and the result is shown in \autoref{fig:fluteEgun-corrCorr}. The two signals are strongly anti-correlated ($\rho=r_\text{norm}(0)=-0.7$) at zero lag, i.e. with no shift in time, which suggests a strong relation between them.
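The analysis described above can be reproduced offline with a few lines of code. The following is a minimal sketch (the file names, the assumed \SI{5}{\hertz} sampling rate and the use of NumPy/SciPy are illustrative choices, not the actual analysis scripts of this thesis), showing how the $\op{\%STD}$ value, a Welch periodogram and the normalized cross covariance at zero lag can be computed from two recorded signals:
\begin{verbatim}
import numpy as np
from scipy.signal import welch

fs = 5.0                                  # assumed: one sample per RF pulse (5 Hz)
p_cav = np.loadtxt("p_cavity.txt")        # archived cavity RF power (placeholder file)
t_gun = np.loadtxt("t_gun.txt")           # archived gun temperature (placeholder file)

# relative standard deviation in percent (%STD)
pct_std = 100.0 * np.std(p_cav) / np.mean(p_cav)

# Welch periodogram of the power deviation from its mean
f, pxx = welch(p_cav - p_cav.mean(), fs=fs, nperseg=1024)

# normalize both signals to zero mean and unit standard deviation
p_n = (p_cav - p_cav.mean()) / p_cav.std()
t_n = (t_gun - t_gun.mean()) / t_gun.std()

# biased normalized cross covariance; the zero-lag value sits at index len(p_n)-1
r = np.correlate(p_n, t_n, mode="full") / len(p_n)
rho = r[len(p_n) - 1]                     # about -0.7 for the data shown above
print(pct_std, rho)
\end{verbatim}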
\begin{figure}[tb] \centering \includegraphics[width=\textwidth,height=0.5\textwidth]{chap/StabilityOfTheElectronGun/img/corrCorr.tikz} \caption[Cross covariance analysis of cavity power and gun temperature]{Normalized cross covariance of the signals in \autoref{fig:fluteEgun-corrTime}} \label{fig:fluteEgun-corrCorr} \end{figure}
Hence, in the next chapters a control system is developed to counteract these noise components, especially the changes in the electron gun temperature. To remain independent of existing components, the control system is built around a controllable \gls{rf} attenuator added to the existing \gls{rf} system. This way, no modification to the proprietary \gls{llrf}\footnote{The \gls{llrf} is visualized as only the oscillator in \autoref{fig:fluteEgun-rfschematicControl} and \autoref{fig:fluteEgun-rfschematic}, but it also contains its own feedback system and a vector modulator.} is necessary. With the addition of the control unit (see \autoref{fig:fluteEgun-rfschematicControl}), which is designed in later chapters and contains the controller $G(s)$ and the filter $H(s)$, a closed loop for feedback control is formed.
\begin{figure}[tb] \centering \includegraphics[]{chap/StabilityOfTheElectronGun/img/fluteSchemControlUnit.tikz} \caption[FLUTE RF schematic with control unit]{Schematic of the \gls{flute} \gls{rf} system with the proposed control unit and the controllable \gls{rf} attenuator added} \label{fig:fluteEgun-rfschematicControl} \end{figure}
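To illustrate the intended effect of such a closed loop, without anticipating the actual design of $G(s)$ and $H(s)$ in the later chapters, the following toy simulation can be used. All numerical values and the simple first-order plant model are invented for illustration and do not describe the real \gls{rf} chain; the sketch only shows how integral feedback acting on an attenuator-like input suppresses a slow drift of the regulated quantity:
\begin{verbatim}
import numpy as np

dt = 0.2                  # one sample per RF pulse at 5 Hz
n = 18000                 # one hour of pulses
tau = 30.0                # invented plant time constant in seconds
kp, ki = 0.4, 0.05        # invented PI controller gains
setpoint = 1.0            # normalized cavity power set-point

y, integral = setpoint, 0.0
out = np.empty(n)

for k in range(n):
    t = k * dt
    drift = 0.002 * np.sin(2.0 * np.pi * t / 1800.0)   # slow thermal-like drift
    error = setpoint - y
    integral += error * dt
    u = kp * error + ki * integral                      # PI control law
    # first-order plant: output relaxes towards set-point + disturbance + correction
    y += (dt / tau) * ((setpoint + drift + u) - y)
    out[k] = y

print("residual %STD:", 100.0 * out[n // 2:].std() / out[n // 2:].mean())
\end{verbatim}
With the integral term enabled, the slow drift is almost completely removed from the output, which mirrors the behaviour expected from the control unit proposed above.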
{ "alphanum_fraction": 0.768031523, "avg_line_length": 76.3006535948, "ext": "tex", "hexsha": "3b56d5c09f235f5ccd695987472b77ebdf236e6a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "34e58b00e6df11f79a38a3e6c394892bed687be2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "youcann/thesisvorlage-latex", "max_forks_repo_path": "chap/StabilityOfTheElectronGun/stability-of-the-electron-gun.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "34e58b00e6df11f79a38a3e6c394892bed687be2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "youcann/thesisvorlage-latex", "max_issues_repo_path": "chap/StabilityOfTheElectronGun/stability-of-the-electron-gun.tex", "max_line_length": 674, "max_stars_count": null, "max_stars_repo_head_hexsha": "34e58b00e6df11f79a38a3e6c394892bed687be2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "youcann/thesisvorlage-latex", "max_stars_repo_path": "chap/StabilityOfTheElectronGun/stability-of-the-electron-gun.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3311, "size": 11674 }
%% CHAPTER 2
% Initialization of the system
% - Analysis of the image in the HSV space
% - Writing down the classifier. Implementing it in class ParkSpot
\section{Initialization of the System}
\subsection{Classification with Histograms}
This section presents the algorithm used to initialize the system. The algorithm is a classifier built upon two main parameters that tend to separate busy parking spots from free ones. Given an image converted to the HSV color space, we can extract the mean and the standard deviation of each component. Plotting the standard deviation of the Saturation component against the mean of the Value component for a single frame gives the situation shown on the left of figure \ref{fig:separate}. The status of each parking spot is known, and we can see a strong separation between the two classes. The two statuses can be separated by a line, which has two degrees of freedom. The inclination and offset of this line can be used to derive a rotation-plus-translation transformation that lets us discriminate between busy and free parking spots. Given a point $\{\sigma_{2},\mu_{3}\}^{T}$ and a separation line $\mu_{3} = \tan(\alpha)\,\sigma_{2}+\eta$ (with inclination $\alpha$ and offset $\eta$), we can derive this transformation:
\begin{equation}
\left\{ \begin{array}{c} \xi_{1}\\ \xi_{2} \end{array}\right\} =\left[\begin{array}{cc} \cos(\alpha) & \sin(\alpha)\\ -\sin(\alpha) & \cos(\alpha) \end{array}\right]\left\{ \begin{array}{c} \sigma_{2}\\ \mu_{3} \end{array}\right\} -\left\{ \begin{array}{c} 1\\ 1 \end{array}\right\} \eta
\end{equation}
The algorithm then only has to check
\begin{equation}
\xi_{2} \geq 0 .
\end{equation}
If this condition is true, the parking spot is busy; otherwise the parking spot is free.
% TODO Figure: separation for a single frame and over time; label: fig:separate
\begin{figure}[H]
\centering
\includegraphics[keepaspectratio, scale=0.4]{img/img1.pdf}
\caption{Classifier data in 2D representation}
\label{fig:separate}
\end{figure}
Referring to the image on the right of figure \ref{fig:separate}, it is easy to see that this classification is not robust over time unless it is extended with a per-frame tuning of the parameters (a learning algorithm). The means tend to remain almost constant, but the standard deviations tend to change over time. The learning method would have to adjust the angle of the separation line (and with it the discrimination rule) to keep the classification good. We decided to abandon this method in favour of something a little more sophisticated, and to explore more of the \verb+openCV+ libraries. A further drawback is that there would be no control over the evolution of the classifier's discrimination parameters.
\subsection{Diving in the Code}
In the code, this initialization step is run when a new \verb+ParkSpotObj+ object is created, via the \verb+int ParkSpotObj::initialStatus()+ method. The projection of the two characteristics is performed by each object itself, following the self-containment philosophy. The parameters that drive the algorithm are numbers 3 and 4 in the \verb+param+ element of the configuration script. The code is already reasonably well optimized, because the number of bins extracted for the histograms is 32 and the area on which the status is evaluated is relatively small.
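For illustration, the same decision rule can be written in a few lines of Python using OpenCV's Python bindings. This is only a sketch (the function name, the ROI handling and the choice of Python instead of the project's C++ class \verb+ParkSpotObj+ are illustrative); $\alpha$ is the inclination angle of the separation line and $\eta$ its offset:
\begin{verbatim}
import cv2
import numpy as np

def spot_is_busy(frame_bgr, roi, alpha, eta):
    """Classify one parking spot: roi = (x, y, w, h), alpha = line
    inclination in radians, eta = line offset."""
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    sigma2 = hsv[:, :, 1].std()   # standard deviation of the Saturation channel
    mu3 = hsv[:, :, 2].mean()     # mean of the Value channel
    # second component of the rotated and translated point (xi_2)
    xi2 = -np.sin(alpha) * sigma2 + np.cos(alpha) * mu3 - eta
    return xi2 >= 0               # True -> busy, False -> free
\end{verbatim}
Here $\alpha$ and $\eta$ play the role of the tunable configuration values discussed above.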
{ "alphanum_fraction": 0.7580261593, "avg_line_length": 43.1282051282, "ext": "tex", "hexsha": "371c50b4f8392931341bc73d23ada9d53903dd66", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2019-02-10T20:39:53.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-30T13:10:17.000Z", "max_forks_repo_head_hexsha": "5d3941515ce2666d188564f8a65e0684d6979586", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "MatteoRagni/ParkAssistant", "max_forks_repo_path": "doc/src/ch2.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "5d3941515ce2666d188564f8a65e0684d6979586", "max_issues_repo_issues_event_max_datetime": "2018-04-11T06:58:32.000Z", "max_issues_repo_issues_event_min_datetime": "2018-04-11T03:51:24.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "MatteoRagni/ParkAssistant", "max_issues_repo_path": "doc/src/ch2.tex", "max_line_length": 84, "max_stars_count": 5, "max_stars_repo_head_hexsha": "5d3941515ce2666d188564f8a65e0684d6979586", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "MatteoRagni/ParkAssistant", "max_stars_repo_path": "doc/src/ch2.tex", "max_stars_repo_stars_event_max_datetime": "2018-10-11T21:00:44.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-30T13:10:15.000Z", "num_tokens": 875, "size": 3364 }
\section{Privacy Preserving Voting} \todo{ consider the liquid democracy requirement that individual voters' votes remain private, while delegates' votes are public. This can be achieved by blinding the votes but still allowing a final tally. }
{ "alphanum_fraction": 0.8155737705, "avg_line_length": 61, "ext": "tex", "hexsha": "e8c4582913cf11594d15d0546d074579e7657b28", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-05-16T10:39:00.000Z", "max_forks_repo_forks_event_min_datetime": "2019-07-18T13:38:25.000Z", "max_forks_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da", "max_forks_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_forks_repo_name": "MitchellTesla/decentralized-software-updates", "max_forks_repo_path": "papers/working-document/privacy.tex", "max_issues_count": 120, "max_issues_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da", "max_issues_repo_issues_event_max_datetime": "2021-06-24T10:20:09.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-06T18:29:25.000Z", "max_issues_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_issues_repo_name": "MitchellTesla/decentralized-software-updates", "max_issues_repo_path": "papers/working-document/privacy.tex", "max_line_length": 199, "max_stars_count": 10, "max_stars_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da", "max_stars_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_stars_repo_name": "MitchellTesla/decentralized-software-updates", "max_stars_repo_path": "papers/working-document/privacy.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-06T02:08:38.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-25T19:38:49.000Z", "num_tokens": 48, "size": 244 }
\section{Synthesis}
Finally, the VHDL design has been synthesized for the ZyBo board. This section reports the main metrics: timing, power and utilization.
\subsection{Timing}
\begin{center}\vspace*{\baselineskip} \def\arraystretch{1.5} \begin{tabular}{|c|c|}\hline \textbf{Setup Worst Negative Slack (WNS)} & 57.119\si{\nano\second}\\\hline \textbf{Setup Total Negative Slack (TNS)} & 0.000\si{\nano\second}\\\hline \textbf{Worst Hold Slack (WHS)} & 0.215\si{\nano\second}\\\hline \textbf{Total Hold Slack (THS)} & 0.000\si{\nano\second}\\\hline \end{tabular}\vspace*{\baselineskip} \end{center}
Since the clock frequency was set to 16\si{\mega\hertz} (a period of 62.5\si{\nano\second}), the critical path is $62.5\si{\nano\second} - 57.119\si{\nano\second} = 5.381\si{\nano\second}$ long and therefore the maximum operating frequency is $1/(5.381\si{\nano\second}) \simeq 186\si{\mega\hertz}$. However, there is no point in going too fast since it would just increase the power consumption (samples come in at a 16\si{\kilo\hertz} rate). The critical path is represented by the multiplier that is used to square the sample, hence using the approximated absolute value brings no benefit to timing.
\subsection{Power}
The power report has been generated in Vivado using the default settings.
\begin{center}\vspace*{\baselineskip} \def\arraystretch{1.5} \begin{tabular}{|c|c|}\hline \textbf{Total On-Chip Power} & 0.09\si{\watt}\\\hline \textbf{Junction Temperature} & 26.0\si{\celsius}\\\hline \textbf{Thermal Margin} & 59.0\si{\celsius} (5.0\si{\watt})\\\hline \textbf{Effective $\theta_{\mathrm{JA}}$ (junction-to-ambient)} & 11.5\si{\celsius/\watt}\\\hline \textbf{Power supplied to off-chip devices} & 0\si{\watt}\\\hline \textbf{Confidence level} & Low\\\hline \end{tabular}\vspace*{\baselineskip} \end{center}
\begin{figure}[h!] \centering \includegraphics[width=0.75\textwidth]{figs/power_report_master.png} \caption{Power report} \label{fig:power_master} \end{figure}
\subsection{Utilization}
\begin{center}\vspace*{\baselineskip} \def\arraystretch{1.5} \begin{tabular}{|c|c|c|c|c|}\hline \textbf{Resource} & \textbf{Utilization} & \textbf{Utilization ``opt''} & \textbf{Difference} & \textbf{Available}\\\hline LUT & 118 (0.67\%) & 110 (0.63\%) & \textbf{8} & 17600 \\\hline FF & 74 (0.21\%) & 73 (0.21\%) & \textbf{1} & 35200 \\\hline DSP & 1 (1.25\%) & 1 (1.25\%) & 0 & 80 \\\hline IO & 20 (20.00\%) & 20 (20.00\%) & 0 & 100 \\\hline \end{tabular}\vspace*{\baselineskip} \end{center}
From the utilization we can see that 8 LUTs would be saved if we used the approximation of the absolute value.
\subsection{Warnings}
{\footnotesize \begin{verbatim}
[Constraints 18-5210] No constraints selected for write. Resolution: This message can indicate
that there are no constraints for the design, or it can indicate that the used_in flags are set
such that the constraints are ignored. This later case is used when running synth_design to not
write synthesis constraints to the resulting checkpoint. Instead, project constraints are read
when the synthesized design is opened.
\end{verbatim} }
Constraints were correctly set, thus this might be a bug in Vivado. In fact, the timing report works fine with the timing constraints that were set.
{\footnotesize \begin{verbatim}
NSTD #1 Critical Warning 20 out of 20 logical ports use I/O standard (IOSTANDARD) value
'DEFAULT', instead of a user assigned specific value. This may cause I/O contention or
incompatibility with the board power or connectivity affecting performance, signal integrity
or in extreme cases cause damage to the device or the components to which it is connected.
To correct this violation, specify all I/O standards. This design will fail to generate a
bitstream unless all logical ports have a user specified I/O standard value defined. To allow
bitstream creation with unspecified I/O standard values (not recommended), use this command:
set_property SEVERITY {Warning} [get_drc_checks NSTD-1]. NOTE: When using the Vivado Runs
infrastructure (e.g. launch_runs Tcl command), add this command to a .tcl file and add that
file as a pre-hook for write_bitstream step for the implementation run. Problem ports: x[15],
x[14], x[13], x[12], x[11], x[10], x[9], x[8], x[7], x[6], x[5], x[4], x[3], x[2], x[1]
(the first 15 of 20 listed).
\end{verbatim} }
I/O mapping was not set since it should be done by whoever integrates the VAD component into their own project.
{\footnotesize \begin{verbatim}
DPIP #1 Warning DSP squarepowernet_component/n_sq_repr input
squarepowernet_component/n_sq_repr/A[29:0] is not pipelined.
Pipelining DSP48 input will improve performance.
\end{verbatim} }
Pipeline registers have already been added.
{\footnotesize \begin{verbatim}
ZPS7 #1 Warning The PS7 cell must be used in this Zynq design in order to enable correct
default configuration.
\end{verbatim} }
The PS7 block probably refers to the ARM processor environment and is thus not required in our design.\footnote{\url{https://forums.xilinx.com/t5/Welcome-Join/How-to-instantiate-the-PS7-block/m-p/333953\#M4847}}
{ "alphanum_fraction": 0.71215311, "avg_line_length": 43.1818181818, "ext": "tex", "hexsha": "16603e6afaff6c8f822ead344d3e7d2526d13639", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2e65173e329101df7b31478106066532a2e6929b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "moriglia/VAD", "max_forks_repo_path": "report/30_synthesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2e65173e329101df7b31478106066532a2e6929b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "moriglia/VAD", "max_issues_repo_path": "report/30_synthesis.tex", "max_line_length": 124, "max_stars_count": 2, "max_stars_repo_head_hexsha": "2e65173e329101df7b31478106066532a2e6929b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "moriglia/VAD", "max_stars_repo_path": "report/30_synthesis.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-11T23:14:53.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-26T14:19:01.000Z", "num_tokens": 1577, "size": 5225 }
\documentclass[11pt, twoside]{report} \usepackage{fontspec} \usepackage[utf8]{inputenc} \usepackage[bitstream-charter]{mathdesign} \usepackage{bbding} \usepackage{ragged2e} \usepackage{parskip} \usepackage{enumitem} \usepackage{titlesec} \usepackage{paracol} \usepackage{mdframed} \usepackage[margin=1in]{geometry} \usepackage[autocompile]{gregoriotex} \titleformat{\chapter}[block]{\huge\scshape\filcenter}{}{1em}{} \titleformat{\section}[block]{\Large\bfseries\filcenter}{}{1em}{} \mdfsetup{skipabove=\topskip, skipbelow=\topskip} \newcommand{\rubric}[1]{ \switchcolumn[0] { \itshape #1 } } \newcommand{\latinenglish}[2]{ \switchcolumn[0]* { #1 } \switchcolumn[1] { \itshape\small #2 } } \newcommand{\latinenglishequal}[2]{ \switchcolumn[0]* { #1 } \switchcolumn[1] { \itshape #2 } } \newenvironment{latinenglishsection} {\columnratio{.7, .3} \begin{paracol}{2}} {\end{paracol}} \newenvironment{latinenglishequalsection} {\columnratio{.5, .5}\begin{paracol}{2}} {\end{paracol}} \setlength{\columnseprule}{0.4pt} \newcommand{\heading}[1]{ \begin{leftcolumn} #1 \end{leftcolumn} } \newcommand{\spanning}[1]{ \switchcolumn*[#1] } \newenvironment{verses}[1] {\begin{flushleft} \begin{enumerate}[leftmargin=*] \setcounter{enumi}{#1}} {\end{enumerate} \end{flushleft}} \newenvironment{versicles}{\par\leavevmode\parskip=0pt}{} \newenvironment{collect} { \leavevmode \parindent=1em \parskip=0pt \noindent Orémus.\par }{} \newenvironment{optionbox} { \switchcolumn[0] \begin{mdframed} % \begin{minipage}{0.8\linewidth} }{ % \end{minipage} \end{mdframed} } \newcommand{\optionrule}{ \begin{center} \rule{0.5\linewidth}{0.6pt} \end{center} } \newenvironment{optionruled} { \optionrule } { \optionrule } % for use inside the collect environment \newcommand{\Amen}{\par\noindent \Rbar. Amen.} \begin{document} \vspace*{4cm} \begin{center} \textbf{\Huge Vespers of the Blessed Virgin Mary}\\ {\LARGE According to the Washtenaw Use} \end{center} \vspace*{1cm} %\maketitle \hspace{0pt} \vfill \pagebreak \vspace*{7.5cm} ``At \textit{Evensong} time, Our Lord Jesus Christ on Maundy Thursday supped with His Apostles, and ordained the Holy Sacrament of His Body and Blood. The same hour on Good Friday, He was taken down from The Cross. And on Easter Day, the same hour, He met with two of His disciples going towards Emmaus, and made Himself known to them in the breaking of bread.'' (Paraphrased from the \textit{Mirror of Our Lady}.) O most blessed Virgin, you grieved bitterly as your divine son's body was lowered from The Cross, having wrought fo us our redemption; pray that we may ever be mindful that the same glorious body is given to us in the Holy Eucharist and always treat this Sacrament with due reverence. Amen. \vfill \pagebreak \chapter*{Before Vespers} \section*{Preparatory Prayers} \textit{ All kneel and pray silently. As you say the prayer \textnormal{Aperi, Domine}, make the sign of the cross with your thumb first over your lips, and then over your heart.} \begin{latinenglishequalsection} \latinenglishequal{ Áperi, \maltese\ Dómine, os meum ad benedicéndum\linebreak nomen sanctum tuum: \maltese\ munda quoque cor meum ab ómnibus vanis, pervérsis et aliénis cogitatiónibus; intelléctum illúmina, afféctum inflámma, ut digne, atténte ac devóte hoc Offícium beátæ Vírginis Maríæ recitáre váleam, et exaudíri mérear ante conspéctum divínæ Majestátis tuæ. Per Christum Dóminum nostrum. Amen. 
}{ Open, \maltese\ O Lord, my mouth to bless Thy holy Name; \maltese\ cleanse also my heart from all vain, evil, and wandering thoughts; enlighten my understanding and kindle my affections; that I may worthily, attentively, and devoutly say this Office of the Blessed Virgin Mary, and so merit to be heard before the presence of Thy divine Majesty. Through Christ our Lord. Amen. } \latinenglishequal{ Domine, in unióne illíus divínæ intentiónis, qua ipse in terris laudes Deo persolvísti, has tibi Horas persólvo. }{ O Lord, in union with that divine intention wherewith thou, whilst here on earth, didst render praises unto God, I desire to offer this my Office of prayer unto thee. } \latinenglishequal{ Pater Noster, qui es in cælis, sanctificétur nomen tuum. Advéniat regnum tuum. Fiat volúntas tua, sicut in cælo et in terra. Panem nostrum quoti\-diánum da nobis hódie, et dimítte nobis débita nostra, sicut et nos dimíttimus debitóribus nostris. Et ne nos indúcas in tentatiónem: sed líbera nos a malo. Amen. }{ Our Father, who art in heaven, hallowed be thy name. Thy kingdom come. Thy will be done, on earth as it is in heaven. Give us this day our daily bread, and forgive us our trespasses, as we forgive those who trespass against us. And lead us not into temptation: but deliver us from evil. Amen. } \latinenglishequal{ Ave María, grátia plena, Dóminus tecum. Benedíc\-ta tu in muliéribus, et benedíctus fructus ventris tui, Jesus. Sancta María, Mater Dei, ora pro nobis peccatóribus, nunc et in hora mortis nostræ. Amen. }{ Hail Mary, full of grace, the Lord is with thee. Blessed art thou among women, and blessed is the fruit of thy womb, Jesus. Holy Mary, Mother of God, pray for us sinners, now and at the hour of our death. Amen. } \end{latinenglishequalsection} \chapter*{Vespers} \begin{latinenglishsection} \rubric{All make the Sign of the Cross as the Officiant says (All continue together with the entire ``Gloria Patri'' after the response): } \latinenglish{ \gresetinitiallines{1} \gregorioscore{deus_in_adjutorium} }{ O God, come to my assistance. O Lord, make haste to help me. Glory be to the Father, and to the Son, and to the Holy Spirit, as it was in the beginning, is now, and ever shall be, world without end. Amen. Alleluia. } \rubric{From Septuagesima until Easter, \textnormal{Alleluia} is replaced with:} \latinenglish{ \gresetinitiallines{0} \gabcsnippet{ (c3)Lau(h)s ti(h)bi(h) Dó(h)mi(h)ne(h), Re(h)x æ(h)té(h)rné(i) gló(h)ri(h)æ(g). (::) } }{ Praise to thee, O Lord, King of everlasting glory. } \rubric{For `Throughout the Year', see page 5.} \rubric{For Advent, see page [].'} \rubric{For Christmastide, see page [].'} \rubric{For Eastertide, see page [].'} \end{latinenglishsection} \pagebreak \begin{latinenglishsection} % Psalm cix: Dicit Dominus (with antiphon) \heading{\section*{Psalm 109}} \rubric{The Cantor intones the antiphon, bows, and all sit. The officiant's side then chants the first Psalm verse, with the Cantor's side responding with the second verse. The sides then alternate verses. At the Gloria Patri, all stand and bow, and remain standing after its conclusion.} \latinenglish{ \gresetinitiallines{1} \gregorioscore{dum_esset_antiphon} }{ While the king... } \latinenglish{ \gresetinitiallines{0} \gregorioscore{psalm_109_1_vespers} \begin{verses}{1} \item Donec ponam ini\textbf{mí}cos \textbf{tu}os, *\\ scabéllum pedum \textit{tu}\textbf{ó}rum. \item Virgam virtútis tuæ emíttet Dómi\textbf{nus} ex \textbf{Si}on: *\\ domináre in médio inimicórum \textit{tu}\textbf{ó}rum. 
\item Tecum princípium in die virtútis tuæ in splendóri\textbf{bus} sanc\textbf{tó}rum: *\\ ex útero ante lucíferum gé\textit{nu}\textbf{i} te. \item Jurávit Dóminus, et non pœni\textbf{té}bit \textbf{e}um: *\\ Tu es sacérdos in ætérnum secúndum órdinem \textit{Mel}\textbf{chí}sedech. \item Dóminus a \textbf{dex}tris \textbf{tu}is, *\\ confrégit in die iræ su\textit{æ} \textbf{re}ges. \item Judicábit in natiónibus, im\textbf{plé}bit ru\textbf{í}nas: *\\ conquassábit cápita in terra \textit{mul}\textbf{tó}rum. \item De torrénte in \textbf{vi}a \textbf{bi}bet: *\\ proptérea exaltá\textit{bit} \textbf{ca}put. \item Glória \textbf{Pa}tri, et \textbf{Fíli}o, *\\ et Spirítu\textit{i} \textbf{Sanc}to. \item Sicut erat in princípio, et \textbf{nunc}, et \textbf{sem}per, *\\ et in sǽcula sæculó\textit{rum}. \textbf{A}men. \end{verses} \gresetinitiallines{1} \gregorioscore{dum_esset_rex}
}{
1. The Lord said to my Lord: Sit thou at my right hand: 2. Until I make thine enemies: thy footstool. 3. The Lord shall send forth the rod of thy power from out of Sion: rule thou in the midst of thine enemies. 4. Thine shall be the dominion in the day of thy power, amid the brightness of the saints: from the womb, before the day-star, have I begotten thee. 5. The Lord hath sworn, and will not repent: Thou art a priest for ever according to the order of Melchisedech. 6. The Lord upon thy right hand: hath overthrown kings in the day of his wrath. 7. He shall judge among the nations, he shall fulfill destructions: he shall smite in sunder the heads in the land of many. 8. He shall drink of the brook in the way: therefore shall he lift up his head. 9. Glory be to the Father, and to the Son, and to the Holy Spirit. 10. As it was in the beginning, is now, and shall be forever. Amen. While the King was reposing, my spikenard yielded the odour of sweetness.
}
\rubric{The remaining antiphons and psalms are sung in the same manner as the first.}
% Psalm cxii: Laudate, pueri (with antiphon)
\heading{\section*{Psalm 112}}
\latinenglish{ \gresetinitiallines{1} \gregorioscore{læva_ejus_antiphon} }{ His left hand... }
\latinenglish{ \gresetinitiallines{0} \gregorioscore{psalm_112_1_vespers} \begin{verses}{1} \item Sit nomen Dómini \textit{bene}\textbf{díc}tum, *\\ ex hoc nunc, et \textit{usque} \textit{in} \textit{sæ}cu\textbf{lum}. \item A solis ortu usque \textit{ad} \textit{oc}\textbf{cá}sum, *\\ laudábi\textit{le} \textit{nomen} \textbf{Dó}mini. \item Excélsus super omnes \textit{gentes} \textbf{Dó}minus, *\\ et super cælos \textit{glória} \textbf{e}jus. \item Quis sicut Dóminus, Deus noster, qui in \textit{altis} \textbf{há}bitat, *\\ et humília réspicit in cæ\textit{lo} \textit{et} \textit{in} \textbf{ter}ra? \item Súscitans a \textit{terra} \textbf{ín}opem, *\\ et de stércore \textit{érigens} \textbf{páu}perem: \item Ut cóllocet eum \textit{cum} \textit{prin}\textbf{cí}pibus, *\\ cum princípibus \textit{pópuli} \textbf{su}i. \item Qui habitáre facit stéri\textit{lem} \textit{in} \textbf{do}mo, *\\ matrem fili\textit{órum} \textit{læ}\textbf{tán}tem. \item Glória Pa\textit{tri}, \textit{et} \textbf{Fí}lio, *\\ et Spi\textit{rítui} \textbf{Sanc}to. \item Sicut erat in princípio, et \textit{nunc}, \textit{et} \textbf{sem}per, *\\ et in sǽcula sæ\textit{culórum}. \textbf{A}men. \end{verses} \gresetinitiallines{1} \gregorioscore{læva_ejus}
}{
1. Praise the Lord, ye children: praise ye the name of the Lord: 2. Blessed be the name of the Lord: from this time forth, forevermore. 3.
From the rising up of the sun unto the going down of the same: the name of the Lord is worthy to be praised. 4. The Lord is high above all nations: and his glory above the heavens. 5. Who is like unto the Lord our God, who dwelleth on high: and regardeth the things that are lowly in heaven and on earth? 6. Who raiseth up the needy from the earth: and lifteth the poor from off the dunghill. 7. That he may set him with the princes: even with the princes of his people. 8. Who maketh the barren woman to dwell in her house: the joyful mother of children. 9. Glory be to the Father, and to the Son, and to the Holy Spirit. 10. As it was in the beginning, is now, and shall be forever. Amen. His left hand under my head, and his right hand shall embrace me.
}
\end{latinenglishsection}
\pagebreak
\begin{latinenglishsection}
% Psalm cxxi: Lætatus sum in his (with antiphon)
\heading{\section*{Psalm 121}}
\latinenglish{ \gresetinitiallines{1} \gregorioscore{nigra_sum_antiphon} }{ I am black but beautiful... }
\latinenglish{ \gresetinitiallines{0} \gregorioscore{psalm_121_1_vespers} \begin{verses}{1} \item Stantes erant \textbf{pe}des \textbf{no}stri, *\\ in átriis tuis, \textit{Je}\textbf{rú}salem. \item Jerúsalem, quæ ædifi\textbf{cá}tur ut \textbf{cívi}tas: *\\ cujus participátio ejus in \textit{id}\textbf{íp}sum. \item Illuc enim ascendérunt \textbf{tri}bus, tribus \textbf{Dómi}ni: *\\ testimónium Israël ad confiténdum nómi\textit{ni} \textbf{Dó}mini. \item Quia illic sedérunt sedes \textbf{in} ju\textbf{díci}o, *\\ sedes super do\textit{mum} \textbf{Da}vid. \item Rogáte quæ ad pacem \textbf{sunt} Je\textbf{rúsa}lem: *\\ et abundántia diligén\textit{ti}\textbf{bus} te: \item Fiat pax in vir\textbf{tú}te \textbf{tu}a: *\\ et abundántia in túrri\textit{bus} \textbf{tu}is. \item Propter fratres meos, et \textbf{pró}ximos \textbf{me}os, *\\ loquébar pa\textit{cem} \textbf{de} te: \item Propter domum Dómini, \textbf{De}i \textbf{nos}tri, *\\ quæsívi bo\textit{na} \textbf{ti}bi. \item Glória \textbf{Pa}tri, et \textbf{Fíli}o, * \\ et Spirítu\textit{i} \textbf{Sanc}to. \item Sicut erat in princípio, et \textbf{nunc}, et \textbf{sem}per, *\\ et in sǽcula sæculó\textit{rum}. \textbf{A}men. \end{verses} \gresetinitiallines{1} \gregorioscore{nigra_sum}
}{
1. I was glad at the things that were said unto me: We will go into the house of the Lord. 2. Our feet were wont to stand: in thy courts, O Jerusalem. 3. Jerusalem, which is built as a city: that is at unity with itself. 4. For thither did the tribes go up, the tribes of the Lord: the testimony of Israel, to praise the name of the Lord. 5. For there are set the seats of judgement: the seats over the house of David. 6. Pray ye for the things that are for the peace of Jerusalem: and plenteousness be to them that love thee. 7. Let peace be in thy strength: and plenteousness in thy towers. 8. For my brethren and companions' sake: I spake peace concerning thee. 9. Because of the house of the Lord our God: I have sought good things for thee. 10. Glory be to the Father, and to the Son, and to the Holy Spirit. 11. As it was in the beginning, is now, and shall be forever. Amen. I am black, but beautiful, O daughters of Jerusalem: therefore hath the king loved me, and brought me into his chamber.
}
\end{latinenglishsection}
\pagebreak
\begin{latinenglishsection}
% Psalm cxxvi: Nisi Dominus (with antiphon)
\heading{\section*{Psalm 126}}
\latinenglish{ \gresetinitiallines{1} \gregorioscore{jam_hiems_antiphon} }{ Now is the winter past...
}
\latinenglish{ \gresetinitiallines{0} \gregorioscore{psalm_126_1_vespers} \begin{verses}{1} \item Nisi Dóminus custodíerit civi\textbf{tá}tem, *\\ frustra vígilat qui cus\textit{tódit} \textbf{e}am. \item Vanum est vobis ante lucem \textbf{súr}gere: *\\ súrgite postquam sedéritis, qui manducátis pa\textit{nem} \textit{do}\textbf{ló}ris. \item Cum déderit diléctis suis \textbf{som}num: *\\ ecce heréditas Dómini fílii : merces, \textit{fructus} \textbf{ven}tris. \item Sicut sagíttæ in manu pot\textbf{én}tis: *\\ ita fílii \textit{excus}\textbf{só}rum. \item Beátus vir qui implévit desidérium suum ex \textbf{ip}sis: *\\ non confundétur cum loquétur inimícis su\textit{is} \textit{in} \textbf{po}rta. \item Glória Patri, et \textbf{Fí}lio, * \\ et Spirí\textit{tui} \textbf{Sanc}to. \item Sicut erat in princípio, et nunc, et \textbf{sem}per, *\\ et in sǽcula sæcu\textit{lórum}. \textbf{A}men. \end{verses} \gresetinitiallines{1} \gregorioscore{jam_hiems}
}{
1. Unless the Lord build the house: they labour in vain that build it. 2. Unless the Lord keep the city: he watcheth in vain that keepeth it. 3. In vain ye rise before the light: rise not till ye have rested, O ye that eat the bread of sorrow. 4. When he hath given sleep to his beloved: lo, children are a heritage from the Lord, and the fruit of the womb a reward. 5. Like as arrows in the hand of the mighty one: so are the children of those who have been cast out. 6. Blessed is the man whose desire is satisfied with them: he shall not be confounded, when he speaketh with his enemies in the gate. 7. Glory be to the Father, and to the Son, and to the Holy Spirit. 8. As it was in the beginning, is now, and shall be forever. Amen. Now is the winter past, the rain is over and gone: arise, my beloved, and come.
}
\end{latinenglishsection}
\pagebreak
\begin{latinenglishsection}
% Psalm cxlvii: Lauda Jerusalem (with antiphon)
\end{latinenglishsection}
\chapter*{Concluding Prayers}
\begin{latinenglishequalsection}
\rubric{All kneel and pray silently.}
\latinenglishequal{ \begin{versicles} Sacrosánctæ et indivíduæ Trinitáti, Crucifíxi\linebreak Dómini nostri Jesu Christi humanitáti, beatíssimæ et gloriosíssimæ sempérque Vírginis Maríæ fecúndae integritáti, et ómnium Sanctórum universitáti sit sempitérna laus, honor, virtus et glória ab omni creatúra, nobísque remíssio ómnium peccatórum, per infiníta s\'{\ae}cula sæculórum. Amen. \Vbar. Beáta víscera Maríæ Vírginis, quæ portavérunt ætérni Patris Fílium. \Rbar. Et beáta úbera quæ lactavérunt Christum\linebreak Dóminum. \end{versicles}
}{
\begin{versicles} Everlasting praise, honor, power, and glory be given by all creatures to the most holy and undivided Trinity, to the Humanity of our crucified Lord Jesus Christ, to the fruitful purity of the most blessed and most glorious Mary ever Virgin, and to the company of all the Saints; and may we obtain the remission of all our sins through all eternity. Amen. \Vbar. Blessed is the womb of the Virgin Mary, that bore the Son of the eternal Father. \Rbar. And blessed are the paps that gave suck to Christ our Lord. \end{versicles}
}
\latinenglishequal{ Pater Noster, qui es in cælis, sanctificétur nomen tuum. Advéniat regnum tuum. Fiat volúntas tua, sicut in cælo et in terra. Panem nostrum quoti\-diánum da nobis hódie, et dimítte nobis débita nostra, sicut et nos dimíttimus debitóribus nostris. Et ne nos indúcas in tentatiónem: sed líbera nos a malo. Amen.
}{
Our Father, who art in heaven, hallowed be thy name. Thy kingdom come.
Thy will be done, on earth as it is in heaven. Give us this day our daily bread, and forgive us our trespasses, as we forgive those who trespass against us. And lead us not into temptation: but deliver us from evil. Amen. } \latinenglishequal{ Ave María, grátia plena, Dóminus tecum. Benedíc\-ta tu in muliéribus, et benedíctus fructus ventris tui, Jesus. Sancta María, Mater Dei, ora pro nobis peccatóribus, nunc et in hora mortis nostræ. Amen. }{ Hail Mary, full of grace, the Lord is with thee. Blessed art thou among women, and blessed is the fruit of thy womb, Jesus. Holy Mary, Mother of God, pray for us sinners, now and at the hour of our death. Amen. } \end{latinenglishequalsection} \end{document}
{ "alphanum_fraction": 0.7348755397, "avg_line_length": 31.6374367622, "ext": "tex", "hexsha": "d829536aa24af00878e40e63b75bfbed36b3be15", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cb410133e674292cfcde92972a606c469e46f5b8", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "DanBrandt/LittleOfficeWashtenaw", "max_forks_repo_path": "Vespers/Little_Office_Vespers.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cb410133e674292cfcde92972a606c469e46f5b8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "DanBrandt/LittleOfficeWashtenaw", "max_issues_repo_path": "Vespers/Little_Office_Vespers.tex", "max_line_length": 705, "max_stars_count": null, "max_stars_repo_head_hexsha": "cb410133e674292cfcde92972a606c469e46f5b8", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "DanBrandt/LittleOfficeWashtenaw", "max_stars_repo_path": "Vespers/Little_Office_Vespers.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6503, "size": 18761 }
% Siconos is a program dedicated to modeling, simulation and control
% of non smooth dynamical systems.
%
% Copyright 2016 INRIA.
%
% Licensed under the Apache License, Version 2.0 (the "License");
% you may not use this file except in compliance with the License.
% You may obtain a copy of the License at
%
% http://www.apache.org/licenses/LICENSE-2.0
%
% Unless required by applicable law or agreed to in writing, software
% distributed under the License is distributed on an "AS IS" BASIS,
% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
% See the License for the specific language governing permissions and
% limitations under the License.
%
\documentclass[10pt]{article}
\input{./macro.tex}
\usepackage{psfrag}
\usepackage{fancyhdr}
\usepackage{subfigure}
%\renewcommand{\baselinestretch}{1.2}
\textheight 23cm
\textwidth 16cm
\topmargin 0cm
%\evensidemargin 0cm
\oddsidemargin 0cm
\evensidemargin 0cm
\usepackage{layout}
\usepackage{mathpple}
\usepackage[T1]{fontenc}
%\usepackage{array}
\makeatletter
\renewcommand\bibsection{\paragraph{References \@mkboth{\MakeUppercase{\bibname}}{\MakeUppercase{\bibname}}}}
\makeatother
%% header and footer style
\fancyhf{} % clears the headers and footers
\fancyhead[L]{}
\fancyhead[R]{\thepage}
\fancyfoot[C]{}%
\begin{document}
\thispagestyle{empty}
\title{Short tutorial on STL tools and functions used in Siconos}
\author{F. P\'erignon}
\date{For Kernel version 3.0.0 \\ \today}
\maketitle
\pagestyle{fancy}
\section{Introduction}
This section is intended to give Siconos users a minimal knowledge of how to handle the STL functions and objects that are used in Siconos and that are thus necessary to properly build a Non Smooth Dynamical System or a Strategy. \\
For more details on the STL tools, see for example \textit{http://www.sgi.com/tech/stl/table\_of\_contents.html} or \textit{ http://cplus.about.com/od/stl/ }. \\
Three main objects are widely used in Siconos:
\bei
\item map<key,object>: object that maps a ``key'' of any type to an ``object'' of any type.
\item set<object>: a set of objects ...
\item vector<object>: a list of objects, with a specific ordering; data can also be accessed through indices. Moreover, objects handled by a vector are contiguous in memory.
\ei
\section{Iterators and find function}\label{ItAndFind}
Iterators are a generalization of pointers: they are objects that point to other objects. They are used to iterate over a range of objects. For example, if an iterator points to one element in a range, then it is possible to increment it so that it points to the next element. For example, suppose you have a set<DynamicalSystem*>; then you can define an iterator in the following way: \\
set<DynamicalSystem*>::iterator iter ; \\
iter will be used to access data in the set. \\
It is not necessary to give more details about iterators in this paragraph. Mainly, in Siconos users' .cpp input files, they will only be used as the return value of the find function. Just use them as described in the map and set examples in the paragraphs below.
\section{Map handling}
Let A be a map<string,DynamicalSystem*>. \\
Data access:
\bei
\item A[name] is the DynamicalSystem* that corresponds to the string name. Thus, to fill a map in, just use A[name] = B, with B a DynamicalSystem*.
\item A.erase(name) : (name of type string) erases the element whose key is name.
\item A.clear(): removes all elements in the map.
\item A.size() : number of elements in the map.
\item A.find(name), with name a string (the key): returns an iterator that points to the element whose key is name, or to A.end() if there is no such element. See example below.
\ei
Example:\\
\expandafter\ifx\csname indentation\endcsname\relax%
\newlength{\indentation}\fi
\setlength{\indentation}{0.5em}
\begin{flushleft}
\mbox{}\\
{$//$\it{} creates a map of DynamicalSystem$\ast${}\mbox{}\\ }\mbox{}\\
map$<$string, DynamicalSystem$\ast$$>$ A;\mbox{}\\
\mbox{}\\
DynamicalSystem $\ast$ DS = {\bf new} DynamicalSystem($\ldots$);\mbox{}\\
\mbox{}\\
{$//$\it{} Add some DynamicalSystem$\ast$ in the map{}\mbox{}\\ }A[{\tt"FirstDS"}] = DS;\mbox{}\\
A[{\tt"SecondDS"}] = {\bf new} DynamicalSystem($\ldots$);\mbox{}\\
$\ldots$\mbox{}\\
\mbox{}\\
map$<$string,DynamicalSystem$\ast$$>$::iterator iter;\mbox{}\\
\mbox{}\\
{$//$\it{} Find an element in the map, using its key{}\mbox{}\\ }iter = A.find({\tt"FirstDS"});\mbox{}\\
\mbox{}\\
{$//$\it{} Then iter points to the element whose key is FirstDS;{}\mbox{}\\ }{$//$\it{} its value (a DynamicalSystem$\ast$) is iter$\rightarrow$second{}\mbox{}\\ }iter$\rightarrow$second$\rightarrow$display() ; {$//$\it{} display data of DS{}\mbox{}\\ }\mbox{}\\
iter = A.find({\tt"ThirdDS"});\mbox{}\\
\mbox{}\\
{$//$\it{} In that case, since there is no element with key ThirdDS in the map, {}\mbox{}\\ }{$//$\it{} iter is equal to A.end();{}\mbox{}\\ }{$//$\it{} It is then easy to test if a key is present in the map or not.{}\mbox{}\\ }\mbox{}\\
{\bf delete} DS;\mbox{}\\
{\bf delete} A[{\tt"SecondDS"}];\mbox{}\\
A.clear();\mbox{}\\
\hspace*{1\indentation}\mbox{}\\
\end{flushleft}
\section{Set handling}
Let A be a set<DynamicalSystem*>. \\
Data access (DS being a DynamicalSystem*):
\bei
\item A.insert(DS) adds DS into the set
\item A.erase(DS) removes DS from the set
\item A.find(DS) returns an iterator that points to DS, or to A.end() if DS is not in the set. See example below.
\ei
Example:\\
\expandafter\ifx\csname indentation\endcsname\relax%
\newlength{\indentation}\fi
\setlength{\indentation}{0.5em}
\begin{flushleft}
{$//$\it{} creates a set of DynamicalSystem$\ast${}\mbox{}\\ }set$<$DynamicalSystem$\ast$$>$ A;\mbox{}\\
\mbox{}\\
DynamicalSystem$\ast$ DS = {\bf new} DynamicalSystem($\ldots$);\mbox{}\\
DynamicalSystem$\ast$ DS2 = {\bf new} DynamicalSystem($\ldots$);\mbox{}\\
\mbox{}\\
{$//$\it{} add elements into the set{}\mbox{}\\ }A.insert(DS);\mbox{}\\
A.insert(DS2);\mbox{}\\
\mbox{}\\
A.size(); {$//$\it{} is equal to 2. {}\mbox{}\\ }\mbox{}\\
{$//$\it{} find an element:{}\mbox{}\\ }set$<$DynamicalSystem$\ast$$>$::iterator iter;\mbox{}\\
iter = A.find(DS);\mbox{}\\
\mbox{}\\
{$//$\it{} then iter points to DS;{}\mbox{}\\ }($\ast$iter)$\rightarrow$display(); {$//$\it{} display DS data. {}\mbox{}\\ }\mbox{}\\
{$//$\it{} remove an element{}\mbox{}\\ }A.erase(DS2); \mbox{}\\
{$//$\it{} A.size() is then equal to 1.{}\mbox{}\\ }\mbox{}\\
iter = A.find(DS2);\mbox{}\\
{$//$\it{} then iter $=$ A.end(), which means that DS2 is not in the set anymore.{}\mbox{}\\ }\mbox{}\\
{\bf delete} DS;\mbox{}\\
{\bf delete} DS2;\mbox{}\\
\end{flushleft}
\section{Vectors handling}
Let V be a vector<DynamicalSystem*>. \\
Data access:
\bei
\item V[i] is the component at position i in the vector, and so is a DynamicalSystem*.
\item V.size(): number of elements in V.
\item V.push\_back(DS) : adds the DynamicalSystem* DS at the end of V.
\item V.pop\_back() : removes the last element of V.
\item V.clear() : removes all elements.
\ei
Examples: \\
\expandafter\ifx\csname indentation\endcsname\relax%
\newlength{\indentation}\fi
\setlength{\indentation}{0.5em}
\begin{flushleft}
{$//$\it{} Creates a vector of three elements that contains DynamicalSystem$\ast${}\mbox{}\\ }vector$<$DynamicalSystem$\ast$$>$ V(3);\mbox{}\\
V[0] = {\bf new} DynamicalSystem($\ldots$);\mbox{}\\
\mbox{}\\
DynamicalSystem$\ast$ DS = {\bf new} DynamicalSystem($\ldots$);\mbox{}\\
V[1] = DS;\mbox{}\\
\mbox{}\\
V[2] = NULL; \mbox{}\\
\mbox{}\\
{$//$\it{} At this point, V.size() is equal to 3.{}\mbox{}\\ }\mbox{}\\
DynamicalSystem$\ast$ DS2 = {\bf new} DynamicalSystem($\ldots$);\mbox{}\\
\mbox{}\\
V.push\_back(DS2);\mbox{}\\
\mbox{}\\
{$//$\it{} then V.size() is equal to 4.{}\mbox{}\\ }\mbox{}\\
{$//$\it{} DS display: {}\mbox{}\\ }V[1]$\rightarrow$display();\mbox{}\\
\mbox{}\\
$\ldots$\mbox{}\\
\mbox{}\\
{\bf delete} DS2;\mbox{}\\
{\bf delete} DS;\mbox{}\\
{\bf delete} V[0];\mbox{}\\
V.clear();\mbox{}\\
\end{flushleft}
\end{document}
{ "alphanum_fraction": 0.6805449453, "avg_line_length": 35.2197309417, "ext": "tex", "hexsha": "88ff8924cd6e152a58dd1f30a8f486cf0838b813", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2739a23f23d797dbfecec79d409e914e13c45c67", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "siconos/siconos-deb", "max_forks_repo_path": "Docs/User/StlToolsShortTutorial/STLSiconosTutorial.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2739a23f23d797dbfecec79d409e914e13c45c67", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "siconos/siconos-deb", "max_issues_repo_path": "Docs/User/StlToolsShortTutorial/STLSiconosTutorial.tex", "max_line_length": 174, "max_stars_count": null, "max_stars_repo_head_hexsha": "2739a23f23d797dbfecec79d409e914e13c45c67", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "siconos/siconos-deb", "max_stars_repo_path": "Docs/User/StlToolsShortTutorial/STLSiconosTutorial.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2587, "size": 7854 }
\chapter{Using DARTEL \label{Chap:dartelguide}} DARTEL\footnote{DARTEL stands for ``Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra''. It may not use a true Lie Algebra, but the acronym is a nice one.} is a suite of tools for achieving more accurate inter-subject registration of brain images. It consists of several thousand lines of code. Because it would be a shame if this effort was wasted, this guide was written to help encourage its widespread use. Experience at the FIL would suggest that it offers very definite improvements for VBM studies -- both in terms of localisation\footnote{Less smoothing is needed, and there are fewer problems relating to how to interpret the differences.} and increased sensitivity\footnote{More sensitivity could mean that fewer subjects are needed, which should save shed-loads of time and money.}. \section{Using DARTEL for VBM \label{Sec:dartel_vbm}} The following procedures could be specified one at a time, but it is easier to use the batching system. The sequence of jobs (use the \emph{TASKS} pull-down from the \emph{Graphics} window to select \emph{BATCH}) would be: \begin{itemize} \item{{\bf Module List} \begin{itemize} \item{{\bf SPM$\rightarrow$Spatial$\rightarrow$Segment}: To obtain *\_seg\_sn.mat files for ``importing'' the data into a form that DARTEL can use for registering the subject's scans.} \item{{\bf SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Initial Import}: Uses the *\_seg\_sn.mat files to generate roughly (via a rigid-body) aligned grey and white matter images of the subjects.} \item{{\bf SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Run DARTEL (create Template)}: Determine the nonlinear deformations for warping all the grey and white matter images so that they match each other.} \item{{\bf SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Normalise to MNI Space}: Actually generate the smoothed ``modulated'' warped grey and white matter images.} \end{itemize} } \end{itemize} Alternatively, the {\bf New Segment} procedure could be used. Although still at the slightly experimental stages of development, this procedure has been found to be generally more robust than the implementation of ``Unified Segmentation'' from SPM$\rightarrow$Spatial$\rightarrow$Segment (which is the version from the Segment button - the same as that in SPM5). The new segmentation can require quite a lot of memory, so if you have large images (typically greater than about $256\times256\times150$) and trying to run it on a 32 bit computer or have relatively little memory installed, then it may throw up an out of memory error. The new segmentation procedure includes the option to generate DARTEL ``imported'' data, so the {\bf Initial Import} step is skipped. \begin{itemize} \item{{\bf Module List} \begin{itemize} \item{{\bf SPM$\rightarrow$Tools$\rightarrow$New Segment}: To generate the roughly (via a rigid-body) aligned grey and white matter images of the subjects.} \item{{\bf SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Run DARTEL (create Template)}: Determine the nonlinear deformations for warping all the grey and white matter images so that they match each other.} \item{{\bf SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Normalise to MNI Space}: Actually generate the smoothed ``modulated'' warped grey and white matter images.} \end{itemize} } \end{itemize} The segmentation and importing steps of these two alternative processing streams are described next. 
\subsection{Using Spatial$\rightarrow$Segment and DARTEL Tools$\rightarrow$Initial Import} The first step is to classify T1-weighted scans\footnote{Other types of scan may also work, but this would need some empirical exploration.} of a number of subjects into different tissue types via the Segmentation routine in SPM. The \emph{SPM$\rightarrow$Spatial$\rightarrow$Segment} pull-down can be used here: \begin{itemize} \item{{\bf Segment} \begin{itemize} \item{{\bf Data}: Select all the T1-weighted images, one per subject. It is usually a good idea to have roughly aligned them to MNI space first. The \emph{Display} button can be used to reorient the data so that the \emph{mm} coordinate of the AC is within about 3cm from $[0, 0, 0]$, and the orientation is within about $15^o$ of MNI space. The \emph{Check Reg} button can be used to see how well aligned a number of images are. } \item{{\bf Output Files}: It is suggested that \emph{Native Space} grey (and possibly white) matter images are created. These are c1*.img and c2*.img. The Segmentation produces a *\_seg\_sn.mat and a *\_seg\_inv\_sn.mat for each image. It is the *\_seg\_sn.mat files that are needed for the next step. } \item{{\bf Custom}: Default settings can usually be used here. } \end{itemize} } \end{itemize} The resulting *\_seg\_sn.mat files encode various parameters that allow the data to be ``imported'' into a form that can be used by the main DARTEL algorithm. In particular, \emph{Procrustes} aligned maps of grey and white matter can be generated. Select \emph{SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Initial Import}: \begin{itemize} \item{{\bf Initial Import} \begin{itemize} \item{{\bf Parameter Files}: Select all the *\_seg\_sn.mat files generated by the previous step. The T1-weighted scans need not be selected, as the import routine will try to find them. If the image files have not been moved since the segmentation, then their location can be determined by the contents of the *\_seg\_sn.mat files. If they have been moved, then the routine looks for the files in the current directory, or the output directory. } \item{{\bf Output Directory}: Specify where the imported data should be written. } \item{{\bf Bounding box}: This is the bounding box for the imported data. If the values are not finite (eg, if they are $[NaN, NaN, NaN; NaN, NaN, NaN]$) then the bounding box for the tissue probability maps, used as priors for the segmentation, will be assumed. Note that the deformations that DARTEL estimates will wrap around at the boundaries, so it is usually a good idea to ensure that the whole brain is easily enclosed within the bounding box. } \item{{\bf Voxel size}: These specify the resolution of the imported data. $[1.5, 1.5, 1.5]$ are reasonable values here. If the resolution is finer than this, then you may encounter memory problems during the actual DARTEL registration. If you do want to try working at a higher resolution, then consider changing the bounding box (but allow for the strange behaviour at the edges). } \item{{\bf Image option}: No imported version of the image is needed - usually only the grey and white matter tissue classes are used (ie choose \emph{None}). } \item{{\bf Grey Matter}: Yes, you need this. } \item{{\bf White Matter}: Yes, you also need this. } \item{{\bf CSF}: The CSF is not usually segmented very reliably because the segmentation only has tissue probability maps for GM WM an CSF. 
Because there are no maps for bone and other non-brain tissue, it is difficult for the segmentation algorithm to achieve a good CSF segmentation. Because of the poor CSF segmentation, it is not a good idea to use this tissue class for the subsequent DARTEL registration. } \end{itemize} } \end{itemize} \begin{figure} \begin{center} \epsfig{file=dartelguide/imported,width=140mm} \end{center} \caption{ Imported data for two subjects (A and B). Top row: rc1A.nii and rc2A.nii. Bottom row: rc1B.nii and rc2B.nii. \label{Fig:imported}} \end{figure} \subsection{Using Tools$\rightarrow$New Segment} \emph{Note: This subsection will be elaborated on later.} There is a new segmentation option in SPM8, which can be found within Tools$\rightarrow$New Segment. If this option is used, then the ``imported'' tissue class images (usually rc1.nii and rc2.nii) would be generated directly and the initial import step is skipped. It is also suggested that \emph{Native Space} versions of the tissues in which you are interested are also generated. For VBM, these are usually the c1*.nii files, as it is these images that will eventually be warped to MNI space. Both the imported and native tissue class image sets can be specified via the Native Space options of the user interface. \subsection{Using DARTEL Tools$\rightarrow$Run DARTEL (create Template)} The output of the previous step(s) are a series of rigidly aligned tissue class images (grey matter is typically encoded by rc1*.nii and white matter by rc2*.nii -- see Fig \ref{Fig:imported}). The headers of these files encode two affine transform matrices, so the DARTEL tools are still able to relate their orientations to those of the original T1-weighted images. The next step is to estimate the nonlinear deformations that best align them all together. This is achieved by alternating between building a template, and registering the tissue class images with the template, and the whole procedure is very time consuming. Specify \emph{SPM$\rightarrow$Tools$\rightarrow$Dartel Tools$\rightarrow$Run DARTEL (create Template)}. \begin{itemize} \item{{\bf Run DARTEL (create Template)} \begin{itemize} \item{{\bf Images} \begin{itemize} \item{{\bf Images}: Select all the rc1*.nii files generated by the import step. } \item{{\bf Images}: Select all the rc2*.nii files, in the same subject order as the rc1*.nii files. The first rc1*.nii is assumed to correspond with the first rc2*.nii, the second with the second, and so on. } \end{itemize} } \item{{\bf Settings}: Default settings generally work well, although you could try changing them to see what happens. A series of templates are generated called Template\_basename\_0.nii, Template\_basename\_1.nii etc. If you run multiple DARTEL sessions, then it may be a good idea to have a unique template basename for each. } \end{itemize} } \end{itemize} The procedure begins by computing an initial template from all the imported data. If u\_rc1*.nii files exist for the images, then these are treated as starting estimates and used during the creation of the initial template. If any u\_rc1*.nii files exist from previous attempts, then it is usually recommended that they are removed first (this sets all the starting estimates to zero). Template generation incorporates a smoothing procedure, which may take a while (several minutes). Once the original template has been generated, the algorithm will perform the first iteration of the registration on each of the subjects in turn. 
After the first round of registration, a new template is generated (incorporating the smoothing step), and the second round of registration begins. Note that the earlier iterations usually run faster than the later ones, because fewer ``time-steps'' are used to generate the deformations. The whole procedure takes (in the order of) about a week of processing time for 400 subjects. \begin{figure} \begin{center} \epsfig{file=dartelguide/sharpening,width=140mm} \end{center} \caption{ Different stages of template generation. Top row: an intermediate version of the template. Bottom row: the final template data. \label{Fig:sharpening}} \end{figure} The end result is a series of templates (see Fig \ref{Fig:sharpening}), and a series of u\_rc1*.nii files. The first template is based on the average\footnote{They are actually more similar to weighted averages, where the weights are derived from the Jacobian determinants of the deformations. There is a further complication in that a smoothing procedure is built into the averaging.} of the original imported data, where as the last is the average of the DARTEL registered data. The u\_rc1*.nii files are flow fields that parameterise the deformations. Note that all the output usually contains multiple volumes per file. For the u\_rc1*.nii files, only the first volume is visible using the Display or Check Reg tools in SPM. All volumes within the template images can be seen, but this requires the file selection to be changed to give the option of selecting more than just the first volume (in the file selector, the widget that says ``1'' should be changed to ``1:2''). \subsection{Using DARTEL Tools$\rightarrow$Normalise to MNI Space} The next step is to create the Jacobian scaled (``modulated'') warped tissue class images, by selecting \emph{SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Normalise to MNI Space}. The option for spatially normalising to MNI space automatically incorporates an affine transform that maps from the population average (DARTEL Template space) to MNI space, as well as incorporating a spatial smoothing step. \begin{itemize} \item{{\bf Normalise to MNI Space} \begin{itemize} \item{{\bf DARTEL Template}: Specify the last of the series of templates that was created by \emph{Run DARTEL (create Template)}. This is usually called \emph{Template\_6.nii}. Note that the order of the \emph{N} volumes in this template should match the order of the first \emph{N} volumes of the \emph{toolbox/DARTEL/TPM.nii} file.} \item{{\bf Select according to} either \emph{Few Subjects} or \emph{Many Subjects}. For VBM, the \emph{Many Subjects} option would be selected. \begin{itemize} \item{{\bf Flow Fields}: Specify the flow fields (u\_rc1*.nii) generated by the nonlinear registration.} \item{{\bf Images}: You may add several different sets of images. 
\begin{itemize} \item{{\bf Images}: Select the c1*.nii files for each subject, in the same order as the flow fields are selected.} \item{{\bf Images}: This is optional, but warped white matter images can also be generated by selecting the c2*.nii files.} \end{itemize}} \end{itemize}} \item{{\bf Voxel sizes}: Specify the desired voxel sizes for the spatially normalised images (NaN, NaN, NaN gives the same voxel sizes as the DARTEL template).} \item{{\bf Bounding box}: Specify the desired bounding box for the spatially normalised images (NaN, NaN, NaN; NaN NaN NaN gives the same bounding box as the DARTEL template).} \item{{\bf Preserve}: Here you have a choice of \emph{Preserve Concentrations} (ie not Jacobian scaled) or \emph{Preserve Amount} (Jacobian scaled). The \emph{Preserve Amount} would be used for VBM, as it does something similar to Jacobian scaling (modulation).} \item{{\bf Gaussian FWHM}: Enter how much to blur the spatially normalised images, where the values denote the full width at half maximum of a Gaussian convolution kernel, in units of mm. Because the inter-subject registration should be more accurate than when done using other SPM tools, the FWHM can be smaller than would be otherwise used. A value of around 8mm (ie $[8, 8, 8]$) should be about right for VBM studies, although some empirical exploration may be needed. If there are fewer subjects in a study, then it may be advisable to smooth more.} \end{itemize} } \end{itemize} The end result should be a bunch of smwc1*.nii files\footnote{The actual warping of the images is done slightly differently, with the aim that as much of the original signal is preserved as possible. This essentially involves pushing each voxel from its position in the original image, into the appropriate location in the new image - keeping a count of the number of voxels pushed into each new position. The procedure is to scan through the original image, and push each voxel in turn. The alternative (older way) was to scan through the spatially normalised image, filling in values from the original image (pulling the values from the original). The results of the pushing procedure are analogous to Jacobian scaled (``modulated'') data. A minor disadvantage of this approach is that it can introduce aliasing artifacts (think stripy shirt on TV screen) if the original image is at a similar - or lower - resolution to the warped version. Usually, these effects are masked by the smoothing.} (possibly with smwc2*.nii if white matter is also to be studied). \begin{figure} \begin{center} \epsfig{file=dartelguide/VBM,width=140mm} \end{center} \caption{ Pre-processing for VBM. Top row: Imported grey matter (rc1A.nii and rc1B.nii). Centre row: Warped with \emph{Preserve Amount} option and zero smoothing (``modulated''). Bottom row: Warped with \emph{Preserve Amount} option smoothing of 8mm (smwc1A.nii and smwc1B.nii). \label{Fig:VBM}} \end{figure} The final step is to perform the statistical analysis on the preprocessed data (smwc1*.nii files), which should be in MNI space. The next section says a little about how data from a small number of subjects could be warped to MNI space. \section{Spatially normalising functional data to MNI space} Providing it is possible to achieve good alignment between functional data from a particular subject and an anatomical image of the same subject (distortions in the fMRI may prevent accurate alignment), then it may be possible to achieve more accurate spatial normalisation of the fMRI data using DARTEL. 
There are several advantages of having more accurate spatial normalisation, especially in terms of achieving more significant activations and better localisation. The objectives of spatial normalisation are: \begin{itemize} \item{To transform scans of subjects into alignment with each other. DARTEL was developed to achieve better inter-subject alignment of data. } \item{To transform them to a standard anatomical space, so that activations can be reported within a standardised coordinate system. Extra steps are needed to achieve this aim. } \end{itemize} The option for spatially normalising to MNI space automatically incorporates an affine transform that maps from the population average (DARTEL Template space) to MNI space. This transform is estimated by minimising the KL divergence between the final template image generated by DARTEL and tissue probability maps that are released as part of SPM (in the new segmentation toolbox). MNI space is defined according to affine matched images, so an affine transform of the DARTEL template to MNI space would appear to be a reasonable strategy. For GLM analyses, we usually do not wish to work with Jacobian scaled data. For this reason, warping is now combined with smoothing, in a way that may be a bit more sensible than simply warping, followed by smoothing. The end result is essentially the same as that obtained by doing the following with the old way of warping \begin{itemize} \item{Create spatially normalised and ``modulated'' (Jacobian scaled) functional data, and smooth.} \item{Create spatially normalised maps of Jacobian determinants, and smooth by the same amount.} \item{Divide one by the other, adding a small constant term to the denominator to prevent divisions by zero.} \end{itemize} This should mean that signal is averaged in such a way that as little as possible is lost. It also assumes that the procedure does not have any nasty side effects for the GRF assumptions used for FWE corrections. Prior to spatially normalising using DARTEL, the data should be processed as following: \begin{itemize} \item{If possible, for each subject, use \emph{SPM$\rightarrow$Tools$\rightarrow$FieldMap} to derive a distortion field that can be used for correcting the fMRI data. More accurate within-subject alignment between functional and anatomical scans should allow more of the benefits of DARTEL for inter-subject registration to be achieved.} \item{Use either \emph{SPM$\rightarrow$Spatial$\rightarrow$Realign$\rightarrow$Realign: Estimate Reslice} or \emph{SPM$\rightarrow$Spatial$\rightarrow$Realign Unwarp}. If a field map is available, then use the \emph{Realign Unwarp} option. The images need to have been realigned and resliced (or field-map distortion corrected) beforehand - otherwise things are not handled so well. The first reason for this is that there are no options to use different methods of interpolation, so rigid-body transforms (as estimated by Realign but without having resliced the images) may not be well modelled. Similarly, the spatial transforms do not incorporate any masking to reduce artifacts at the edge of the field of view.} \item{For each subject, register the anatomical scan with the functional data (using \emph{SPM $\rightarrow$ Spatial $\rightarrow$ Coreg $\rightarrow$ Coreg: Estimate}). No reslicing of the anatomical image is needed. Use \emph{SPM$\rightarrow$Util$\rightarrow$Check Registration} to assess the accuracy of the alignment. 
If this step is unsuccessful, then some pre-processing of the anatomical scan may be needed in order to skull-strip and bias correct it. Skull stripping can be achieved by segmenting the anatomical scan, and masking a bias corrected version (which can be generated by the segmentation option) by the estimated GM, WM and CSF. This masking can be done using \emph{SPM$\rightarrow$Util$\rightarrow$Image Calculator} (\emph{ImCalc} button), by selecting the bias corrected scan (m*.img), and the tissue class images (c1*.img, c2*.img and c3*.img) and evaluating ``i1.\*((i2+i3+i4)$>$0.5)''. If segmentation is done before coregistration, then the functional data should be moved so that they align with the anatomical data.} \item{Segment the anatomical data and generate ``imported'' grey and white matter images. If \emph{SPM$\rightarrow$Tools$\rightarrow$New Segment} is used, then make sure that ``imported'' grey and white matter images are created. If \emph{SPM$\rightarrow$Spatial$\rightarrow$Segment} is used (the SPM5 segmentation routine, which is the one under the \emph{Segment} button), then an additional \emph{SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Initial Import} step will be needed.} \item{To actually estimate the warps, use \emph{SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Run DARTEL (create Templates)} in order to generate a series of templates and a flow field for each subject.} \end{itemize} In principle (for a random effects model), you could run the first level analysis using the native space data of each subject. All you need are the contrast images, which can be warped and smoothed. Alternatively, you could warp and smooth the resliced fMRI data, and do the statistical analysis on the spatially normalised images. Either way, you would select \emph{SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Normalise to MNI Space}: \begin{itemize} \item{{\bf Normalise to MNI Space} \begin{itemize} \item{{\bf DARTEL Template}: Template\_6.nii,1 is usually the grey matter component of the final template of the series. An affine transform is determined using this image.} \item{{\bf Select according to} either \emph{Few Subjects} or \emph{Many Subjects}. For fMRI analyses, the \emph{Few Subjects} option would be selected, which gives the option of selecting a flow field and a list of images for each subject. \begin{itemize} \item{{\bf Subject} \begin{itemize} \item{{\bf Flow Field}: Specify the flow field (``u\_rc1*.nii'') for this subject.} \item{{\bf Images}: Select the images for this subject that are to be transformed to MNI space.} \end{itemize} } \end{itemize} } \item{{\bf Voxel sizes}: Specify the desired voxel sizes for the spatially normalised images (NaN, NaN, NaN gives the same voxel sizes as the DARTEL template).} \item{{\bf Bounding box}: Specify the desired bounding box for the spatially normalised images (NaN, NaN, NaN; NaN NaN NaN gives the same bounding box as the DARTEL template).} \item{{\bf Preserve}: Here you have a choice of \emph{Preserve Concentrations} (ie not Jacobian scaled) or \emph{Preserve Amount} (Jacobian scaled).
The \emph{Preserve Concentrations} option would normally be used for fMRI data, whereas \emph{Preserve Amount} would be used for VBM.} \item{{\bf Gaussian FWHM}: Enter how much to blur the spatially normalised images, where the values denote the full width at half maximum of a Gaussian convolution kernel, in units of mm.} \end{itemize} } \end{itemize} An alternative approach is now presented, which does not attempt to make optimal use of the available signal. \subsection{An alternative approach for using DARTEL to spatially normalise to MNI Space} During spatial normalisation of a brain image, some regions need to be expanded and other regions need to contract in order to match the template. If some structure is excessively shrunk by DARTEL (because it has the freedom to estimate quite large deformations), then this will lead to a systematic reduction in the amount of BOLD signal being detected from that brain region. For this reason, the normalise to MNI space option would generally be preferred when working with functional data that is to be smoothed. \subsubsection{Affine transform of DARTEL template to MNI space} DARTEL works with images that are of average size. When DARTEL is used to generate an average-shaped template (represented by a series of tissue probability maps) from a group of scans of various individuals, the result is of average size. Brains normalised to MNI space are slightly larger than average. In order to spatially normalise to MNI space, the deformation that maps from MNI space to the space of the group average is required. Because the MNI space was derived by affine registration of a number of subjects to a common coordinate system, in most cases it should be possible to achieve a reasonable match of the template generated by DARTEL using only an affine spatial normalisation. This can be achieved by matching the grey matter component of the template with a grey matter tissue probability map in MNI space. The spatial normalisation routine in SPM can be used to achieve this. \begin{itemize} \item{{\bf Normalise: Estimate} \begin{itemize} \item{{\bf Data} \begin{itemize} \item{{\bf Subject} \begin{itemize} \item{{\bf Source Image}: Template\_6.nii,1 is usually the grey matter component of the final template of the series.} \item{{\bf Source Weighting Image}: $<$None$>$} \end{itemize} } \end{itemize} } \item{{\bf Estimation Options} \begin{itemize} \item{{\bf Template Image}: Should be the apriori/grey.nii file distributed with SPM.} \item{{\bf Template Weighting Image}: $<$None$>$} \item{{\bf Source Image Smoothing}: 8mm (the same amount by which the apriori/grey.nii file has been smoothed).} \item{{\bf Template Image Smoothing}: 0mm (because the data in the apriori folder are already smoothed by 8mm).} \item{{\bf Affine Regularisation}: Usually, you would specify ``ICBM space template''.} \item{{\bf Nonlinear Frequency Cutoff}: Set this to infinity (enter ``Inf'') for affine registration.} \item{{\bf Nonlinear Iterations}: Setting this to zero will also result in affine-only spatial normalisation.} \item{{\bf Nonlinear Regularisation}: Setting this to infinity is another way of doing affine-only spatial normalisation.} \end{itemize} } \end{itemize} } \end{itemize} For some populations of subjects, an affine transform may not be adequate for achieving good registration of the average shape to MNI space. Nonlinear spatial normalisation may be more appropriate for these cases. As ever, determining which procedure is better would involve a degree of empirical exploration.
\subsubsection{Combining deformations} Once you have the spatial transformation that maps from MNI space to the space of the DARTEL template, it is possible to combine this with the DEFORMATIONS estimated by DARTEL. Rather than warping the image data twice (introducing interpolation artifacts each time), the two spatial transforms can be combined by composing them together. The required deformation, for spatially normalising an individual to MNI space, is a mapping from MNI space to the individual image. This is because the spatially normalised images are generated by scanning through the (initially empty) voxels in the spatially normalised image, and figuring out which voxels in the original image to sample from (as opposed to scanning through the original image and putting the values into the right places in the spatially normalised version). The desired mapping is from MNI space to DARTEL template to individual scan. If \emph{A} is the mapping from MNI to template, and \emph{B} is the mapping from template to individual, then this mapping is $B \circ A$, where ``$\circ$'' denotes the composition operation. Spatially normalising via the composed deformations can be achieved through the \emph{Deformations} utility from the \emph{TASKS} pull-down (it is in \emph{Utils}). \begin{itemize} \item{{\bf Deformations} \begin{itemize} \item{{\bf Composition} \begin{itemize} \item{{\bf DARTEL flow} \begin{itemize} \item{{\bf Flow field}: Specify the u\_rc1*.nii flow field for that subject.} \item{{\bf Forward/Backwards}: This should be set to ``Backward'' to indicate a mapping from template to individual.} \item{{\bf Time Steps}: This is the number of time steps used by the final iterations of the DARTEL registration (usually 64).} \end{itemize} } \item{{\bf Imported \_sn.mat} \begin{itemize} \item{{\bf Parameter File}: Select the spatial normalisation parameters that would spatially normalise the Template\_6.nii file.} \item{{\bf Voxel sizes}: These are set to ``NaN'' (not a number) by default, which would take the voxel sizes for the apriori/grey.nii file. Alternatively, you could specify your favourite voxel sizes for spatially normalised images.} \item{{\bf Bounding box}: Again, these are set to non-finite values by default, which results in the same bounding box as the apriori/grey.nii file. To specify your favourite bounding box, enter $[x_{min}, y_{min}, z_{min}; x_{max}, y_{max}, z_{max}]$ (in units of mm, relative to the AC).} \end{itemize} } \end{itemize} } \item{{\bf Save as}: You can save the composed deformations as a file. This would be called y\_*.nii, which contains three volumes that encode the x, y and z components of the mapping. Note that only the first (x) component can be visualised in SPM. These things were not really designed to be visualised as images anyway.} \item{{\bf Apply to}: Specify the images for that subject that you would like spatially normalised. Note that the spatially normalised images are not masked (see the Chapter on Realignment for more information here). If realignment parameters are to be incorporated into the transformation, then this could cause problems at the edges. These can be avoided by reslicing after realignment (which is the default option if you ``Realign Unwarp''). Alternatively, some form of additional masking could be applied to the spatially normalised images, prior to smoothing.} \item{{\bf Interpolation}: Specify the form of interpolation.} \end{itemize} } \end{itemize} The above procedure would be repeated for each subject in the study. 
\section{Warping Images to Existing Templates} If templates have already been created using DARTEL, then it is possible to align other images with such templates. The images would first be imported in order to generate rc1*.nii and rc2*.nii files. The procedure is relatively straightforward, and requires the \emph{SPM$\rightarrow$Tools$\rightarrow$DARTEL Tools$\rightarrow$Run DARTEL (existing Template)} option to be specified. Generally, the procedure would begin by registering with a smoother template, and end with a sharper one, with various intermediate templates in between. \begin{itemize} \item{{\bf Run DARTEL (existing Templates)} \begin{itemize} \item{{\bf Images} \begin{itemize} \item{{\bf Images}: Select the rc1*.nii files.} \item{{\bf Images}: Select the corresponding rc2*.nii files.} \end{itemize} } \item{{\bf Settings}: Most settings would be kept at the default values, except for the specification of the templates. These are specified within each of the \emph{Settings$\rightarrow$Outer Iterations$\rightarrow$Outer Iteration$\rightarrow$Template} fields. If the templates are Template\_*.nii, then enter them in the order of Template\_1.nii, Template\_2.nii, ... Template\_6.nii. } \end{itemize} } \end{itemize} Running this option is rather faster than \emph{Run DARTEL (create Template)}, as templates are not created. The output is in the form of a series of flow fields (u\_rc1*.nii). \section{Warping one individual to match another} Sometimes the aim is to deform an image of one subject to match the shape of another. This can be achieved by running DARTEL so that both images are matched with a common template, and composing the resulting spatial transformations. One option is to align them both with a pre-existing template, but it is also possible to use the \emph{Run DARTEL (create Template)} option with the imported data of only two subjects. Once the flow fields (u\_rc1*.nii files) have been estimated, then the resulting deformations can be composed using \emph{SPM$\rightarrow$Utils$\rightarrow$Deformations}. If the objective is to warp A.nii to align with B.nii, then the procedure is set up by: \begin{itemize} \item{{\bf Deformations} \begin{itemize} \item{{\bf Composition} \begin{itemize} \item{{\bf DARTEL flow} \begin{itemize} \item{{\bf Flow field}: Specify the u\_rc1A\_Template.nii flow field.} \item{{\bf Forward/Backwards}: Backward.} \item{{\bf Time Steps}: Usually 64.} \end{itemize} } \item{{\bf DARTEL flow} \begin{itemize} \item{{\bf Flow Field}: Specify the u\_rc1B\_Template.nii flow field.} \item{{\bf Forward/Backwards}: Forward.} \item{{\bf Time Steps}: Usually 64.} \end{itemize} } \item{{\bf Identity} \begin{itemize} \item{{\bf Image to base Id on}: Specify B.nii in order to have the deformed image(s) written out at this resolution, and with the same orientations etc (ie so there is a voxel-for-voxel alignment, rather than having the images only aligned according to their ``voxel-to-world'' mappings).} \end{itemize} } \end{itemize} } \item{{\bf Save as}: You can save the composed deformations as a file. This would be called y\_*.nii, which contains three volumes that encode the x, y and z components of the mapping.} \item{{\bf Apply to}: Specify A.nii, and any other images for that subject that you would like warped to match B.nii.
Note that these other images must be in alignment with A.nii, as can be checked using \emph{Check Reg}.} \item{{\bf Interpolation}: Specify the form of interpolation.} \end{itemize} } \end{itemize} If the image of one subject has been manually labelled, then this option is useful for transferring the labels onto the images of other subjects. \begin{figure} \begin{center} \epsfig{file=dartelguide/AtoB,width=140mm} \end{center} \caption{ Composition of deformations to warp one individual to match another. Top-left: Original A.nii. Top-right: A.nii warped to match B.nii. Bottom-left: Original B.nii. Bottom-right: B.nii warped to match A.nii. \label{Fig:AtoB}} \end{figure}
{ "alphanum_fraction": 0.7673942701, "avg_line_length": 81.069124424, "ext": "tex", "hexsha": "73141163679f72e4eb2f536e4ab1fb06d5fabd03", "lang": "TeX", "max_forks_count": 9, "max_forks_repo_forks_event_max_datetime": "2021-04-02T05:12:25.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-05T22:13:49.000Z", "max_forks_repo_head_hexsha": "0d817a8478de736cd91946efa2a71c8dae7ec08a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Hexans/spm_linux", "max_forks_repo_path": "spm8/man/dartelguide/dartelguide.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "0d817a8478de736cd91946efa2a71c8dae7ec08a", "max_issues_repo_issues_event_max_datetime": "2020-02-24T20:06:01.000Z", "max_issues_repo_issues_event_min_datetime": "2019-09-27T20:50:48.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Hexans/spm_linux", "max_issues_repo_path": "spm8/man/dartelguide/dartelguide.tex", "max_line_length": 1065, "max_stars_count": 14, "max_stars_repo_head_hexsha": "0d817a8478de736cd91946efa2a71c8dae7ec08a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Hexans/spm_linux", "max_stars_repo_path": "spm8/man/dartelguide/dartelguide.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-29T20:28:03.000Z", "max_stars_repo_stars_event_min_datetime": "2018-02-17T14:01:29.000Z", "num_tokens": 8584, "size": 35184 }
% This is a comment. It doesn't show in the final pdf. %%%%% Preamble %%%%% \documentclass[11pt]{article} % Font Size and Document type (article is the standard type) \usepackage{amsmath} % use the package amsmath, which lets us do mathy things \usepackage{graphicx} % use graphicx, which lets us insert images \usepackage{multicol} % use multicol, which lets us split things into multiple columns \usepackage{biblatex} \addbibresource{bibliography.bib} % title info \title{\textbf{A Simple \LaTeX{} Document}} \author{Your name goes here\\} \date{\today} %%%%% Body of Document %%%%% \begin{document} \maketitle % this command gets the title info from above and puts it all together \section{Formatting} % this is a section header %-------------------------------------------------- \subsection{Basic} % this is a subsection header %-------------------------------------------------- \begin{itemize} % start an unordered list \begin{multicols}{2} % break the list into two columns \item \textit{Italic} \item \textbf{Bold} \item \textbf{\textit{Italic and Bold}} \item \underline{Underlined} \item `Single Quotes' \item``Double Quotes'' \end{multicols} \end{itemize} \subsection{Paragraphs} %-------------------------------------------------- \subsubsection{Alignment} \begin{flushleft} % try changing 'flushleft' to 'center' or 'flushright' Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. \end{flushleft} \subsection{Math} %-------------------------------------------------- To write a bunch of math all on its own, do it like this: \[ax^2 + bx + c = 0\] % '\[' and '\]' mark the start and end of the math figure To write math in the middle of a bunch of words, do it like this: $e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$ See? Easy! \section{Tables} %-------------------------------------------------- \begin{center} \begin{tabular}{|r|c l|} % format the columns \hline % horizontal line a1 & b1 & c1 \\ % '\\' ends the current line a2 & b2 & c2 \\ a3 & b3 & c3 \\ \hline a4 & b4 & c4 \\ \hline \end{tabular} \end{center} \section*{Images and Figures} % use '*' to suppress the section numbering %-------------------------------------------------- \begin{figure}[h] % google what h does \begin{center} \includegraphics[scale=0.1]{beaker} % we shrink the picture so it's not too big \end{center} \caption{Here's a picture of a beaker} \end{figure} \section{Other Stuff} %-------------------------------------------------- \begin{enumerate} % start an ordered list \item This is an ordered list \item Break one line \\ into two. \end{enumerate} %-------------------------------------------------- % this is the simplest way to make a bibliography \begin{thebibliography}{1} \bibitem{Wikibook} The LaTeX wikibook: {\em https://en.wikibooks.org/wiki/LaTeX} \end{thebibliography} %-------------------------------------------------- \end{document} % all documents must come to an end
{ "alphanum_fraction": 0.6286919831, "avg_line_length": 36.4615384615, "ext": "tex", "hexsha": "520c49f2287aca825a75aa5b4035855ca06b3961", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "36dfc6f18038c8e642c98a2c27c476646bf62728", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mesbahamin/latex-workshop", "max_forks_repo_path": "examples/basic-example/basic-example.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "36dfc6f18038c8e642c98a2c27c476646bf62728", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mesbahamin/latex-workshop", "max_issues_repo_path": "examples/basic-example/basic-example.tex", "max_line_length": 445, "max_stars_count": 1, "max_stars_repo_head_hexsha": "36dfc6f18038c8e642c98a2c27c476646bf62728", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mesbahamin/latex-workshop", "max_stars_repo_path": "examples/basic-example/basic-example.tex", "max_stars_repo_stars_event_max_datetime": "2017-10-27T17:44:25.000Z", "max_stars_repo_stars_event_min_datetime": "2017-10-27T17:44:25.000Z", "num_tokens": 867, "size": 3318 }
\documentclass[article]{jss} %% -- LaTeX packages and custom commands --------------------------------------- %% recommended packages \usepackage{thumbpdf,lmodern} %% equations packages \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} %% another package (only for this demo article) \usepackage{framed} %% new custom commands \newcommand{\class}[1]{`\code{#1}'} \newcommand{\fct}[1]{\code{#1()}} %% -- Article metainformation (author, title, ...) ----------------------------- %% - \author{} with primary affiliation %% - \Plainauthor{} without affiliations %% - Separate authors by \And or \AND (in \author) or by comma (in \Plainauthor). %% - \AND starts a new line, \And does not. % \author{Achim Zeileis\\Universit\"at Innsbruck % \And Second Author\\Plus Affiliation} % \Plainauthor{Achim Zeileis, Second Author} \author{Juan-Ramón González\\ISGlobal \And Dolors Pelegrí\\ISGlobal \And Isaac Subirana\\CIBERESP} %\Plainauthor{Achim Zeileis, Second Author} %% - \title{} in title case %% - \Plaintitle{} without LaTeX markup (if any) %% - \Shorttitle{} with LaTeX markup (if any), used as running title \title{A Short Demo Article: Regression Models for Count Data in \proglang{R}} \Plaintitle{A Short Demo Article: Regression Models for Count Data in R} \Shorttitle{A Short Demo Article in \proglang{R}} %% - \Abstract{} almost as usual \Abstract{ .. to be completed .. % This short article illustrates how to write a manuscript for the % \emph{Journal of Statistical Software} (JSS) using its {\LaTeX} style files. % Generally, we ask to follow JSS's style guide and FAQs precisely. Also, % it is recommended to keep the {\LaTeX} code as simple as possible, % i.e., avoid inclusion of packages/commands that are not necessary. % For outlining the typical structure of a JSS article some brief text snippets % are employed that have been inspired by \cite{Zeileis+Kleiber+Jackman:2008}, % discussing count data regression in \proglang{R}. Editorial comments and % instructions are marked by vertical bars. } %% - \Keywords{} with LaTeX markup, at least one required %% - \Plainkeywords{} without LaTeX markup (if necessary) %% - Should be comma-separated and in sentence case. \Keywords{JSS, style guide, comma-separated, not capitalized, \proglang{R}} \Plainkeywords{JSS, style guide, comma-separated, not capitalized, R} %% - \Address{} of at least one author %% - May contain multiple affiliations for each author %% (in extra lines, separated by \emph{and}\\). %% - May contain multiple authors for the same affiliation %% (in the same first line, separated by comma). \Address{ Juan-Ramón González\\ ISGlobal\\ \emph{and}\\ Department of Statistics\\ Faculty of Economics and Statistics\\ Universit\"at Innsbruck\\ Universit\"atsstr.~15\\ 6020 Innsbruck, Austria\\ E-mail: \email{[email protected]}\\ URL: \url{https://eeecon.uibk.ac.at/~zeileis/} } \begin{document} %% -- Introduction ------------------------------------------------------------- %% - In principle "as usual". %% - But should typically have some discussion of both _software_ and _methods_. %% - Use \proglang{}, \pkg{}, and \code{} markup throughout the manuscript. %% - If such markup is in (sub)section titles, a plain text version has to be %% added as well. %% - All software mentioned should be properly \cite-d. %% - All abbreviations should be introduced. %% - Unless the expansions of abbreviations are proper names (like "Journal %% of Statistical Software" above) they should be in sentence case (like %% "generalized linear models" below). 
\section{Introduction} \label{sec:intro}

In many biomedical studies, from genetic and clinical to epidemiological ones, data are collected from recruited individuals, frequently comprising many variables of different types: they can be demographic, such as age, sex or race; morphological, such as weight, height or waist circumference; lipid-related, such as cholesterol, HDL, LDL or triglycerides; or obtained from questionnaires on nutrition, quality of life, physical activity, etc. It may be of interest to integrate these sets of data in the analyses. There exist many statistical techniques to do so, from standard Canonical Correlation Analysis (CCA), which relates two data sets, to Generalized Canonical Correlation Analysis (GCCA), which extends it to more than two sets \cite{Tenenhaus:2011}. In parallel, there are tools to deal with sparse data, when many variables are involved and it is thought that most of them are not associated, see \cite{Tenenhaus:2017}, or with variables that are not normally distributed \citep{MOFA}. These topics will not be covered or discussed in this paper.

The goal of CCA is to find two canonical variables, computed as linear combinations of the variables of each data set, with the highest correlation. When there are more than two data sets, several pairwise correlations between canonical variables can be computed, and therefore different criteria exist to obtain them \cite{Tenenhaus:2017}. Here we focus on the criterion that minimizes the distance between the latent variables and the canonical variables of each data set. This criterion has the advantage of providing common coordinates to represent the individuals, as in a Principal Component Analysis (PCA). Also, weights for each variable of each data set are obtained, as in ordinary CCA, in order to investigate which variables are most important in the analyses.

% \begin{leftbar} % The introduction is in principle ``as usual''. However, it should usually embed % both the implemented \emph{methods} and the \emph{software} into the respective % relevant literature. For the latter both competing and complementary software % should be discussed (within the same software environment and beyond), bringing % out relative (dis)advantages. All software mentioned should be properly % \verb|\cite{}|d. (See also Appendix~\ref{app:bibtex} for more details on % \textsc{Bib}{\TeX}.) % % For writing about software JSS requires authors to use the markup % \verb|\proglang{}| (programming languages and large programmable systems), % \verb|\pkg{}| (software packages), \verb|\code{}| (functions, commands, % arguments, etc.). If there is such markup in (sub)section titles (as above), a % plain text version has to be provided in the {\LaTeX} command as well. Below we % also illustrate how abbrevations should be introduced and citation commands can % be employed. See the {\LaTeX} code for more details. % \end{leftbar} % Modeling count variables is a common task in economics and the social sciences. % The classical Poisson regression model for count data is often of limited use in % these disciplines because empirical count data sets typically exhibit % overdispersion and/or an excess number of zeros. The former issue can be % addressed by extending the plain Poisson regression model in various % directions: e.g., using sandwich covariances or estimating an additional % dispersion parameter (in a so-called quasi-Poisson model). Another more formal % way is to use a negative binomial (NB) regression.
% All of these models belong to % the family of generalized linear models (GLMs). However, although these models % typically can capture overdispersion rather well, they are in many applications % not sufficient for modeling excess zeros. Since \cite{Mullahy:1986} there is % increased interest in zero-augmented models that address this issue by a second % model component capturing zero counts. An overview of count data models in % econometrics, including hurdle and zero-inflated models, is provided in % \cite{Cameron+Trivedi:2013}. % In \proglang{R} \citep{R}, GLMs are provided by the model fitting functions % \fct{glm} in the \pkg{stats} package and \fct{glm.nb} in the \pkg{MASS} package % \citep[][Chapter~7.4]{Venables+Ripley:2002} along with associated methods for % diagnostics and inference. The manuscript that this document is based on % \citep{Zeileis+Kleiber+Jackman:2008} then introduced hurdle and zero-inflated % count models in the functions \fct{hurdle} and \fct{zeroinfl} in the \pkg{pscl} % package \citep{Jackman:2015}. Of course, much more software could be discussed % here, including (but not limited to) generalized additive models for count data % as available in the \proglang{R} packages \pkg{mgcv} \cite{Wood:2006}, % \pkg{gamlss} \citep{Stasinopoulos+Rigby:2007}, or \pkg{VGAM} \citep{Yee:2009}. %% -- Manuscript --------------------------------------------------------------- %% - In principle "as usual" again. %% - When using equations (e.g., {equation}, {eqnarray}, {align}, etc. %% avoid empty lines before and after the equation (which would signal a new %% paragraph. %% - When describing longer chunks of code that are _not_ meant for execution %% (e.g., a function synopsis or list of arguments), the environment {Code} %% is recommended. Alternatively, a plain {verbatim} can also be used. %% (For executed code see the next section.) % \section{Models and software} \label{sec:models}

\section{Methods} \label{sec:methods}

The method described in this paper was first introduced and formulated in \cite{Velden:2006}. It is an extension of canonical correlation analysis to two or more data sets (Generalized Canonical Correlation Analysis, GCCA). In this paper we focus on the strategy proposed in \cite{Velden:2006} to deal with data that are missing for a whole row. The authors derive a closed form to compute the results by inserting a dummy diagonal square matrix indicating which individuals are missing in each data set. This non-iterative procedure speeds up and simplifies the computation. Details are given in \cite{Velden:2006}. Briefly, the method finds latent variables that minimize the mean square error between them and a linear combination of each set of variables. These latent variables are orthogonal by construction and can be used to represent the individuals in a two-dimensional space in order to distinguish underlying groups or outliers. It must be noted that the coordinates of the latent variables are obtained for all individuals, whether or not they have data in all sets ($X_j$). In order to introduce some elements that are discussed throughout this paper, the objective function of the method is \begin{equation} \label{eq:crit} \min_{Y,B_j} \phi = \text{trace} \sum_{j=1}^J \left(Y - X_j B_j\right)^t K_j \left(Y-X_j B_j \right) \end{equation} constrained to $Y^t K Y = \sqrt{J} I_L$, where $I_L$ is the $L$ by $L$ identity matrix.
The elements in equation (\ref{eq:crit}) are:
\begin{itemize}
\item $X_j$, $j=1,\ldots,J$: $n$ by $p_j$ matrix representing the $j$-th set of observed variables. Note that all data sets, $X_j$, have the same number of rows, $n$, which represents the whole sample, i.e. individuals that are present in at least one data set. If a particular individual does not have data in $X_j$, his/her row is filled with an arbitrary value, for instance zero.
\item $K_j$: $n$ by $n$ diagonal matrix whose $i$-th diagonal element is one if the $i$-th individual is not missing in the $j$-th data set, and zero otherwise.
\item $K = \sum_{j=1}^J K_j$.
\item $B_j$: $p_j$ by $L$ matrix containing the coefficients of each variable of each data set.
\item $Y$: $n$ by $L$ matrix with the latent variables in columns.
\end{itemize}
The solution of equation (\ref{eq:crit}) is obtained by computing the eigenvalues and eigenvectors of the expression \begin{equation} \label{eq:sol} K^{-\frac{1}{2}} \left(\sum_{j=1}^J K_j X_j \left(X_j^t K_j X_j \right)^{-1} X_j^t K_j \right) K^{-\frac{1}{2}} Y^{*} = Y^{*} \Lambda \end{equation} where $\Lambda$ is a diagonal matrix containing the eigenvalues $\lambda_l$, $l=1,\ldots,L$, and $Y^{*}$ is an $n$ by $L$ orthonormal matrix with the eigenvectors in columns. Finally, the latent variable matrix, $Y$, is obtained by \begin{equation} \label{eq:latent} Y=\sqrt{J} K^{-\frac{1}{2}} Y^{*} \end{equation} Note that when there are more columns than rows, $\left(X_j^t K_j X_j \right)$ in equation (\ref{eq:sol}) becomes singular and a generalised inverse, such as the Moore--Penrose inverse, must be used.

\subsection{Other common strategies to deal with missing rows/individuals} Other commonly used non-iterative approaches for dealing with missing rows are: \begin{itemize} \item \textbf{``IMPUTE''}: It consists of filling the empty rows with the variable means computed from the individuals for whom the data are available. This imputation is done variable by variable. This approach has the advantage of including all individuals in the analyses, regardless of whether they have information in all data sets or are missing in some of them. On the other hand, it has the drawback of not taking uncertainty into account, since it assigns the same value to all missing individuals. \item \textbf{``COMPLETE''}: With this strategy, only individuals with available information in all data sets are included in the analyses. Unlike the ``IMPUTE'' strategy, it does not impute any value, but the sample size may be substantially reduced. \end{itemize}

@@@ discarded strategies (multiple imputation, ¿¿single imputation taking into account other variables??)

% The basic Poisson regression model for count data is a special case of the GLM % framework \cite{McCullagh+Nelder:1989}. It describes the dependence of a count % response variable $y_i$ ($i = 1, \dots, n$) by assuming a Poisson distribution % $y_i \sim \mathrm{Pois}(\mu_i)$. The dependence of the conditional mean % $\E[y_i \, | \, x_i] = \mu_i$ on the regressors $x_i$ is then specified via a % log link and a linear predictor % % % \begin{equation} \label{eq:mean} % \log(\mu_i) \quad = \quad x_i^\top \beta, % \end{equation} % % % where the regression coefficients $\beta$ are estimated by maximum likelihood % (ML) using the iterative weighted least squares (IWLS) algorithm. % \begin{leftbar} % Note that around the \verb|{equation}| above there should be no spaces (avoided % in the {\LaTeX} code by \verb|%| lines) so that ``normal'' spacing is used and % not a new paragraph started.
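To make the closed-form solution in equations~(\ref{eq:sol}) and~(\ref{eq:latent}) concrete, a minimal (and deliberately non-optimised) \proglang{R} sketch is given below. It assumes that each data set is supplied as an $n \times p_j$ matrix in which missing individuals appear as rows of \code{NA}, and it uses \fct{ginv} from \pkg{MASS} as the generalised inverse; all function and object names are illustrative only.
\begin{Code}
mgcca_sketch <- function(X, L = 2) {
  # X: list of n x p_j matrices; missing individuals are rows of NA
  J <- length(X); n <- nrow(X[[1]])
  K <- matrix(0, n, n); M <- matrix(0, n, n)
  for (j in seq_len(J)) {
    obs <- as.numeric(stats::complete.cases(X[[j]]))  # diagonal of K_j
    Kj  <- diag(obs, n)
    Xj  <- X[[j]]; Xj[is.na(Xj)] <- 0                 # arbitrary fill for missing rows
    M   <- M + Kj %*% Xj %*% MASS::ginv(t(Xj) %*% Kj %*% Xj) %*% t(Xj) %*% Kj
    K   <- K + Kj
  }
  Kih <- diag(1 / sqrt(diag(K)), n)                   # K is diagonal, so this is K^(-1/2)
  e   <- eigen(Kih %*% M %*% Kih, symmetric = TRUE)
  sqrt(J) * Kih %*% e$vectors[, seq_len(L)]           # Y = sqrt(J) K^(-1/2) Y*
}
\end{Code}
The sketch additionally assumes that every individual is observed in at least one data set, so that $K$ is invertible, as in Section~\ref{sec:methods}.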
% \end{leftbar} % \proglang{R} provides a very flexible implementation of the general GLM % framework in the function \fct{glm} \citep{Chambers+Hastie:1992} in the % \pkg{stats} package. Its most important arguments are % \begin{Code} % glm(formula, data, subset, na.action, weights, offset, % family = gaussian, start = NULL, control = glm.control(...), % model = TRUE, y = TRUE, x = FALSE, ...) % \end{Code} % where \code{formula} plus \code{data} is the now standard way of specifying % regression relationships in \proglang{R}/\proglang{S} introduced in % \cite{Chambers+Hastie:1992}. The remaining arguments in the first line % (\code{subset}, \code{na.action}, \code{weights}, and \code{offset}) are also % standard for setting up formula-based regression models in % \proglang{R}/\proglang{S}. The arguments in the second line control aspects % specific to GLMs while the arguments in the last line specify which components % are returned in the fitted model object (of class \class{glm} which inherits % from \class{lm}). For further arguments to \fct{glm} (including alternative % specifications of starting values) see \code{?glm}. For estimating a Poisson % model \code{family = poisson} has to be specified. % \begin{leftbar} % As the synopsis above is a code listing that is not meant to be executed, % one can use either the dedicated \verb|{Code}| environment or a simple % \verb|{verbatim}| environment for this. Again, spaces before and after should be % avoided. % % Finally, there might be a reference to a \verb|{table}| such as % Table~\ref{tab:overview}. Usually, these are placed at the top of the page % (\verb|[t!]|), centered (\verb|\centering|), with a caption below the table, % column headers and captions in sentence style, and if possible avoiding vertical % lines. % \end{leftbar} % \begin{table}[t!] % \centering % \begin{tabular}{lllp{7.4cm}} % \hline % Type & Distribution & Method & Description \\ \hline % GLM & Poisson & ML & Poisson regression: classical GLM, % estimated by maximum likelihood (ML) \\ % & & Quasi & ``Quasi-Poisson regression'': % same mean function, estimated by % quasi-ML (QML) or equivalently % generalized estimating equations (GEE), % inference adjustment via estimated % dispersion parameter \\ % & & Adjusted & ``Adjusted Poisson regression'': % same mean function, estimated by % QML/GEE, inference adjustment via % sandwich covariances\\ % & NB & ML & NB regression: extended GLM, % estimated by ML including additional % shape parameter \\ \hline % Zero-augmented & Poisson & ML & Zero-inflated Poisson (ZIP), % hurdle Poisson \\ % & NB & ML & Zero-inflated NB (ZINB), % hurdle NB \\ \hline % \end{tabular} % \caption{\label{tab:overview} Overview of various count regression models. The % table is usually placed at the top of the page (\texttt{[t!]}), centered % (\texttt{centering}), has a caption below the table, column headers and captions % are in sentence style, and if possible vertical lines should be avoided.} % \end{table} %% -- Illustrations ------------------------------------------------------------ %% - Virtually all JSS manuscripts list source code along with the generated %% output. The style files provide dedicated environments for this. %% - In R, the environments {Sinput} and {Soutput} - as produced by Sweave() or %% or knitr using the render_sweave() hook - are used (without the need to %% load Sweave.sty). %% - Equivalently, {CodeInput} and {CodeOutput} can be used. 
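The two reference strategies described above (``IMPUTE'' and ``COMPLETE'') can be sketched just as compactly in \proglang{R}; again, the code below is only a schematic illustration and the function names are invented for this example.
\begin{Code}
# "IMPUTE": fill the rows of missing individuals with the column means
# computed from the observed individuals (variable by variable)
impute_means <- function(Xj) {
  miss <- !stats::complete.cases(Xj)
  Xj[miss, ] <- matrix(colMeans(Xj[!miss, , drop = FALSE]),
                       nrow = sum(miss), ncol = ncol(Xj), byrow = TRUE)
  Xj
}

# "COMPLETE": keep only the individuals observed in every data set
complete_rows <- function(X) {
  keep <- Reduce(`&`, lapply(X, stats::complete.cases))
  lapply(X, function(Xj) Xj[keep, , drop = FALSE])
}
\end{Code}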
%% - The code input should use "the usual" command prompt in the respective %% software system. %% - For R code, the prompt "R> " should be used with "+ " as the %% continuation prompt. %% - Comments within the code chunks should be avoided - these should be made %% within the regular LaTeX text.
%\section{Illustrations} \label{sec:illustrations}

\section{Real data example}

... to be completed ...

\section{Simulation studies}
\subsection{Simulation methods}
The proposed method (MGCCA) was validated and compared to the other common methods (IMPUTE and COMPLETE) in two simulation studies. The first one assesses how similar the estimated latent variable scores, $Y$, of each method are to those that would be obtained if all individuals were available (``full data''). The mean square distance (MSD) is computed as the mean of the squared Euclidean distances between each individual's $Y$ coordinates obtained with the full data and those obtained with each method. In the second simulation study, data were simulated distinguishing two groups of individuals with different means, and each method is evaluated in terms of its power to detect differences between the groups.

Simulated data were generated similarly to \cite{Velden:2006}. The detailed steps are listed below.

\subsubsection{Simulation study I}
\begin{itemize}
\item \textbf{Step 1:} generate an $n$ by $2$ matrix, $Y$, from a standardized normal distribution, which corresponds to the two latent variables.
\item \textbf{Step 2:} generate two coefficient matrices, $B_1$ and $B_2$, with dimensions $2$ by $p_1$ and $2$ by $p_2$, with entries drawn from a uniform distribution on $[-1, 1]$.
\item \textbf{Step 3:} compute $X_1$ and $X_2$, of dimensions $n$ by $p_1$ and $n$ by $p_2$, respectively, by post-multiplying $Y$ by the coefficient matrices $B_1$ and $B_2$.
\item \textbf{Step 4:} add noise to $X_1$ and $X_2$ by adding normally generated values with zero mean and standard deviation $\sigma$. At this point the full data are obtained.
\item \textbf{Step 5:} randomly select a proportion of rows of $X_1$ and $X_2$ (not the same rows) to be declared as missing individuals.
\end{itemize}
Data were generated under the following scenarios:
\begin{itemize}
\item Fixing the number of individuals, $n$, to 500.
\item Varying the number of variables to 50 and 100. In all scenarios, the same number of variables was considered for both data sets, $X_1$ and $X_2$, i.e. $p=q$.
\item Varying the noise standard deviation, $\sigma$, to 0.125 and 0.250.
\item Varying the proportion of missing individuals to 0.1, 0.2 and 0.3.
\end{itemize}
A total of 12 scenarios were simulated and, for each scenario, 100 data sets were generated. Generalized Canonical Correlation Analysis (GCCA) was performed on each generated data set and two canonical latent variables ($\hat{Y}_1$, $\hat{Y}_2$) were estimated using the full data (``FULL''). Then, the three methods (MGCCA, IMPUTE, COMPLETE) were applied to the data with missing rows. Finally, the Mean Square Distance (MSD) was computed as follows: $$\text{MSD} = \frac{1}{n}\sum_{i=1}^{n}\left[\left(\hat{y}_{\text{FULL}}[i1]-\hat{y}_{\text{METHOD}}[i1]\right)^2+\left(\hat{y}_{\text{FULL}}[i2]-\hat{y}_{\text{METHOD}}[i2]\right)^2\right]$$ where $\hat{y}_{\text{METHOD}}[ij],\quad j=1,2$, are the latent variable coordinates obtained with each method (MGCCA, IMPUTE or COMPLETE) for the $i$-th individual, and $\hat{y}_{\text{FULL}}[ij],\quad j=1,2$, are the latent variable coordinates obtained with the full data.
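A schematic \proglang{R} version of steps 1--5 and of the MSD computation for a single replicate is sketched below; the parameter values correspond to one of the scenarios, the object names are illustrative, and this is not the code used to produce the reported results.
\begin{Code}
set.seed(1)
n <- 500; p1 <- 50; p2 <- 50; sigma <- 0.125; pmiss <- 0.2

Y  <- matrix(rnorm(n * 2), n, 2)                    # step 1: latent variables
B1 <- matrix(runif(2 * p1, -1, 1), 2, p1)           # step 2: coefficients
B2 <- matrix(runif(2 * p2, -1, 1), 2, p2)
X1 <- Y %*% B1 + matrix(rnorm(n * p1, sd = sigma), n, p1)   # steps 3-4
X2 <- Y %*% B2 + matrix(rnorm(n * p2, sd = sigma), n, p2)

X1[sample(n, pmiss * n), ] <- NA                    # step 5: missing individuals
X2[sample(n, pmiss * n), ] <- NA

# MSD between full-data scores and the scores of a given method, assuming
# both sets of scores have been matched in sign and order beforehand
msd <- function(Y_full, Y_method) mean(rowSums((Y_full - Y_method)^2))
\end{Code}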
Note that when computing the MSD for the COMPLETE method, $n$ is the number of individuals with complete data, so the rows of $\hat{y}_{\text{FULL}}$ and $\hat{y}_{\text{COMPLETE}}$ correspond to these individuals.

\subsubsection{Simulation study II}
Another data generation process, similar to the one described above, was performed, but now two groups are distinguished and the methods are assessed in terms of their ability to discriminate between these groups. All steps are the same except step 1, where the first $\frac{n}{2}$ rows (individuals) of the $Y$ matrix are generated under a normal distribution with mean equal to $\frac{\delta}{2}$ and the remaining rows under a normal distribution with mean equal to $-\frac{\delta}{2}$. Once the data were generated, a MANOVA analysis was performed to test for differences in the means of the estimated canonical variable scores between the two groups.

In this simulation study, the number of generated individuals and the number of variables were fixed to $n=500$ and $p=q=50$, respectively. The noise standard deviation was fixed to $\sigma=0.2$, while the varying parameters were the difference in group means, $\delta=\{0, 0.25, 0.5\}$, and the proportion of missing individuals (0.1, 0.2 and 0.3). Finally, 500 data sets were generated and p-values were computed for each of them. Power was computed as the proportion of times the p-value was lower than the significance level, which was set to 5\%.

\subsection{Simulation results}
\subsubsection{Simulation study I}
In simulation study I, the method with the best performance under all scenarios was the IMPUTE method (see Figure \ref{fig:MSD1}), since it provides the lowest MSD and therefore the estimated latent variable coordinates were the most similar to the ones that would be obtained if all data were available (no missing individuals). The worst method by far was the COMPLETE one, which analyses only complete cases, i.e. individuals with information in all data sets. Results are similar regardless of the number of variables (rows) and the noise level (columns), and the larger the proportion of missing individuals (x-axis), the larger the MSD, especially for the COMPLETE method. On the other hand, the IMPUTE and MGCCA methods are more robust when the proportion of missing individuals increases. \begin{figure}[t!] \centering \includegraphics{./simulations/case1/plot1a} \caption{\label{fig:MSD1} Average Mean Square Distance (MSD) over all 100 replicates for each scenario and method, by proportion of missing individuals, stratified by number of variables, $p=q$ (rows), and noise standard deviation $\sigma$ (columns).} \end{figure} Additionally, the consistency of results across generated data sets (replicates) is described using boxplots within each scenario (Figure \ref{fig:MSD2}). It can be seen that the MGCCA method is very consistent, i.e. results are very similar between replicates, compared to the other two methods. Therefore, while on average the IMPUTE method provides better results in terms of MSD, for some data sets its results can be much worse than the ones obtained with MGCCA. \begin{figure}[t!]
\centering \includegraphics{./simulations/case1/plot1b} \caption{\label{fig:MSD2} Boxplots of the Mean Square Distance (MSD) over the 100 replicates for each scenario and method, by proportion of missing individuals, stratified by number of variables, $p=q$ (rows), and noise standard deviation $\sigma$ (columns).} \end{figure}

\subsubsection{Simulation study II}
From the simulation study II results, it can be seen that MGCCA is the method that provides the best power, outperforming the other two methods in all scenarios (see Figure \ref{fig:pow}), especially when the proportion of missing individuals is low (0.1) or moderate (0.2). The COMPLETE method is the least powerful in all scenarios. When the proportion of missing individuals is high (0.3), the three methods perform similarly in terms of power, except COMPLETE, which performs much worse than the other two when the difference between groups is high. Finally, when data are generated under no difference between groups (``Difference=0'' on the x-axis), all three methods yield a power equal to the significance level, showing that there is no inflation of the false positive rate. \begin{figure}[t!] \centering \includegraphics[width=1\textwidth]{./simulations/case2/plot2} \caption{\label{fig:pow} Power of each method depending on the difference between groups, stratified by the proportion of missing individuals.} \end{figure}

% For a simple illustration of basic Poisson and NB count regression the % \code{quine} data from the \pkg{MASS} package is used. This provides the number % of \code{Days} that children were absent from school in Australia in a % particular year, along with several covariates that can be employed as regressors. % The data can be loaded by % % % \begin{CodeChunk} % \begin{CodeInput} % R> data("quine", package = "MASS") % \end{CodeInput} % \end{CodeChunk} % % % and a basic frequency distribution of the response variable is displayed in % Figure~\ref{fig:quine}. % \begin{leftbar} % For code input and output, the style files provide dedicated environments. % Either the ``agnostic'' \verb|{CodeInput}| and \verb|{CodeOutput}| can be used % or, equivalently, the environments \verb|{Sinput}| and \verb|{Soutput}| as % produced by \fct{Sweave} or \pkg{knitr} when using the \code{render_sweave()} % hook. Please make sure that all code is properly spaced, e.g., using % \code{y = a + b * x} and \emph{not} \code{y=a+b*x}. Moreover, code input should % use ``the usual'' command prompt in the respective software system. For % \proglang{R} code, the prompt \code{"R> "} should be used with \code{"+ "} as % the continuation prompt. Generally, comments within the code chunks should be % avoided -- and made in the regular {\LaTeX} text instead. Finally, empty lines % before and after code input/output should be avoided (see above). % \end{leftbar} % \begin{figure}[t!] % \centering % \includegraphics{article-visualization} % \caption{\label{fig:quine} Frequency distribution for number of days absent % from school.} % \end{figure} % As a first model for the \code{quine} data, we fit the basic Poisson regression % model. (Note that JSS prefers when the second line of code is indented by two % spaces.) % % % \begin{CodeChunk} % \begin{CodeInput} % R> m_pois <- glm(Days ~ (Eth + Sex + Age + Lrn)^2, data = quine, % + family = poisson) % \end{CodeInput} % \end{CodeChunk} % % % To account for potential overdispersion we also consider a negative binomial % GLM.
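As an illustration of how power was estimated in Simulation study II, the test for a single replicate can be written in \proglang{R} as follows, where \code{Yhat} stands for the $n \times 2$ matrix of scores returned by one of the three methods (for brevity it is replaced here by latent variables generated directly with the group shift $\delta$); this is a sketch rather than the code used for the reported results.
\begin{Code}
n <- 500; delta <- 0.5
group <- factor(rep(1:2, each = n / 2))

# stand-in for the estimated scores of one method, with the group shift
Yhat <- matrix(rnorm(n * 2), n, 2)
Yhat[group == 1, ] <- Yhat[group == 1, ] + delta / 2
Yhat[group == 2, ] <- Yhat[group == 2, ] - delta / 2

fit  <- manova(Yhat ~ group)
pval <- summary(fit, test = "Pillai")$stats["group", "Pr(>F)"]
# power for a scenario = proportion of replicates with pval < 0.05
\end{Code}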
%% -- Summary/conclusions/discussion -------------------------------------------

\section{Summary and discussion} \label{sec:summary}

... to be completed ...

%% -- Optional special unnumbered sections -------------------------------------

\section*{Computational details}

... specify R version and used packages ...

... consumed time for simulation.

? analyses of real data ?

\section*{Acknowledgments}
%% -- Bibliography -------------------------------------------------------------

\bibliography{refs}

%% -- Appendix (if any) --------------------------------------------------------

\newpage

\begin{appendix}

\section{More technical details} \label{app:technical}

... not sure if necessary to include an appendix...

\end{appendix}

%% -----------------------------------------------------------------------------

\end{document}
{ "alphanum_fraction": 0.7070934305, "avg_line_length": 50.3108882521, "ext": "tex", "hexsha": "16f93192edc86617c2c305e34551e1fe32525971", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "13ded5c048aef3630da3ea9aa637a39c667cd5ac", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "isglobal-brge/paperGCCA", "max_forks_repo_path": "article.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "13ded5c048aef3630da3ea9aa637a39c667cd5ac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "isglobal-brge/paperGCCA", "max_issues_repo_path": "article.tex", "max_line_length": 613, "max_stars_count": null, "max_stars_repo_head_hexsha": "13ded5c048aef3630da3ea9aa637a39c667cd5ac", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "isglobal-brge/paperGCCA", "max_stars_repo_path": "article.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9292, "size": 35117 }
\section{Memory Mapping}\label{s:mapping}

\subsection{Structures and Bookkeeping}\label{ss:mapping_structures}

To keep track of the various mapped regions and pieces of memory, as well as the free virtual addresses, we have to keep a significant amount of state for the memory mapping and paging process.

\begin{itemize}
	\item Free vspace: To prevent address collisions, the free virtual addresses are tracked. This is done using a singly linked list because of its simplicity. This might introduce additional overhead when mapping memory once the process has run for some time and the virtual address space has become fragmented; the lookup scales with $O\klammern{n}$. Currently, freed nodes are simply prepended to the list, which scales with $O\klammern{1}$.
	\item Mappings: To be able to unmap a piece of memory again, the mappings are stored. This is done using a singly linked list. A new mapping is added to the front of the list, which scales with $O\klammern{1}$. Finding the right mapping when unmapping memory scales with $O\klammern{n}$ in the number of mappings in the worst case, when the whole list has to be traversed. However, we expect two sorts of mappings: short-lived ones, which are unmapped soon after they are mapped, and long-lived ones. Our implementation lets the long-lived mappings wander to the back of the list while staying fast for unmapping recently added mappings.
	\item L1 pagetable: A reference to the L1 pagetable is stored.
	\item L2 pagetables: An array of L2 pagetables. Initially, this array is empty. If an L2 pagetable is used for the first time, it is created and its capref is stored for future use.
	\item Spawninfo: When spawning a new domain, we need to copy the caps for the mappings and the L2 pagetables to the new domain. To be able to reuse our mapping code, we keep a reference to the spawninfo, which contains a callback function to be used when mapping for a foreign domain.
\end{itemize}

\subsection{Mapping}

The mapping of a memory frame into the virtual address space of a domain consists of the following steps:

\begin{enumerate}
	\item If the address is not chosen by the user, a free block of virtual addresses is computed from the information stored in the paging state (see \autoref{ss:mapping_structures}).
	\item The L2 pagetable corresponding to the virtual address to be mapped is read from the L1 pagetable. If this pagetable does not yet exist, a new L2 pagetable is created.
	\item Map the minimum of the number of bytes still to be mapped and the number of bytes that fit into the current L2 pagetable.
	\item Store the reference to the L2 pagetable and the mapping information, to be able to unmap the piece of memory again.
	\item Steps 2--4 are repeated until all memory is mapped. This is necessary when the requested size, starting from the virtual address, does not fit into a single L2 pagetable.
\end{enumerate}

\subsection{Unmapping}

Because we store a fair amount of state, unmapping is easy. All parts of the region to unmap are traversed and unmapped. After this is done, the freed virtual addresses are added back to the list of free vspace (see \autoref{ss:mapping_structures}).

\medskip

One problem we encountered while implementing unmapping was that it can be hard to test or demonstrate: due to compiler optimizations (especially instruction reordering), unmapped memory sometimes appeared to remain accessible for a short time after it was unmapped.
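To make the bookkeeping of \autoref{ss:mapping_structures} concrete, the following sketch shows roughly how this state could be laid out in C. The type and field names are illustrative only and do not correspond to the actual definitions in our implementation.

\begin{verbatim}
/* Illustrative sketch only -- not the actual definitions. */
#include <stddef.h>   /* size_t    */
#include <stdint.h>   /* uintptr_t */

struct vspace_node {            /* one free block of virtual addresses   */
    uintptr_t base;             /* start of the free region              */
    size_t size;                /* length of the region in bytes         */
    struct vspace_node *next;   /* singly linked; freed nodes prepended  */
};

struct mapping_node {           /* bookkeeping for one mapped region     */
    uintptr_t vaddr;            /* virtual address of the mapping        */
    size_t size;                /* total mapped size in bytes            */
    struct mapping_node *next;  /* new mappings are added at the head    */
};

struct paging_state_sketch {
    struct vspace_node *free_vspace;   /* free virtual address ranges    */
    struct mapping_node *mappings;     /* currently mapped regions       */
    /* caprefs for the L1 pagetable and the lazily created L2 pagetables,
       as well as the optional spawninfo reference, live here as well.   */
};
\end{verbatim}

Pushing freed vspace nodes and new mappings onto the heads of these lists is what gives the constant-time paths described above, while lookups remain linear scans.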
{ "alphanum_fraction": 0.7587548638, "avg_line_length": 47.3421052632, "ext": "tex", "hexsha": "41819d112f6aeb4cbc47c5a80473d40ed30b8121", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a5c23e6f827b7b6835d001d9f6b5c9776926372b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PhilippSchaad/AOS_Barrelfish", "max_forks_repo_path": "Report/Chapters/ClownFish/Mapping.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a5c23e6f827b7b6835d001d9f6b5c9776926372b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PhilippSchaad/AOS_Barrelfish", "max_issues_repo_path": "Report/Chapters/ClownFish/Mapping.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "a5c23e6f827b7b6835d001d9f6b5c9776926372b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PhilippSchaad/AOS_Barrelfish", "max_stars_repo_path": "Report/Chapters/ClownFish/Mapping.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 860, "size": 3598 }
\chapter{{\tt Interest Point} Module}\label{ch:interestpoint-module}

[{\em Note: Parts of this documentation may no longer be applicable... }]

Interest points are unique point identifiers within an image. There are usually many of them in an image, and what defines them changes based on the algorithm used. For the most part, though, interest points are defined at places where a corner exists, where at least two edges come together. Interest points are helpful in locating the same feature in multiple images.

There are three major parts to using interest points. First, all images are processed with some interest point or corner detector and all points are recorded. Second, each point found in the first step has a unique identifier built for it that properly describes the feature and its texture surroundings. Lastly, captured interest points are compared across images. Features on separate images that have identifiers that seem approximately equal are then connected into a match file.

Uses for interest points include general corner detection, object recognition, and image alignment. The last one, image alignment, is a common use within Vision Workbench. Using fitting functors that will be described later, transform matrices can be solved for that describe the relationship between images, which can later be used to merge them (see {\tt <vw/Image/TransformView.h>}). Also, as a side note, interest points can be used as measurements for a bundle adjustment that would solve for the original placement of the cameras (see {\tt <vw/Camera/BundleAdjust.h>}).

The interest point module includes a complete set of classes and functions for each step of interest point detection. They can be imported into your code by including {\tt <vw/InterestPoint.h>}. The built-in classes {\tt ScaledInterestPointDetector} and {\tt SimpleInterestPointDetector} (defined in {\tt <vw/InterestPoint/Detector.h>}) provide out-of-the-box support for detecting interest points, with or without scale space methods.

The interest point module is designed to be as flexible as possible in that it decouples each step in the process of interest point detection. Different interest measures and thresholding methods, built-in or user-defined, can be used with the {\tt InterestPointDetector} classes. {\tt ScaledInterestPointDetector} and {\tt SimpleInterestPointDetector} can both easily be subclassed to further customize their operation, for example by implementing an alternative method for finding peaks in the interest image. The interest point module also provides tools for generating descriptors to compactly describe the properties of an interest point.

\section{Scale Space Methods}

When detecting interest points, we want them to be invariant to changes in view perspective. The scale space is a standard tool for making a detection algorithm scale invariant \cite{lindeberg98}. The interest point module provides support for scale space detection methods based on the {\tt ImageOctave} class. An octave is a subset of the scale space. It is a set of images (scales) formed by convolving the source image with Gaussian kernels having progressively larger standard deviations; the sigma used for the last scale in the octave is twice that used for the first scale.

Given a source image and a number of scales per octave, {\tt ImageOctave} will construct the first octave of the scale space of the source image. Successive octaves can be constructed with the \verb#build_next()# method.
\begin{verbatim} ImageView<double> source; int scales_per_octave; ImageOctave octave(source, scales_per_octave); // Process first octave... // Then build the second octave octave.build_next(); \end{verbatim} Building the next octave is a destructive operation, as the previously computed octave data is not saved. If you need to retain all of the scaled images generated, e.g. for use in generating descriptors, {\tt ImageOctaveHistory} can be used to store this data. \section{Measuring Interest} The interest point module includes both classes and free functions for computing interest images from source images using the standard Harris \cite{harris88} and LoG (Laplacian of Gaussian) interest measures. They can be imported by including the file {\tt vw/InterestPoint/Interest.h}. The HarrisInterest and LoGInterest classes are intended for use in conjunction with the InterestPointDetector classes (next section). Creating your own interest measure classes for use with the Detector classes is straightforward. Subclass the \verb#InterestBase# abstract base type. In the constructor, set \verb#InterestBase<T>::type# to \verb#IP_MIN#, \verb#IP_MAX# or \verb#IP_MINMAX#, depending on what type of peaks in the generated interest image represent interest points. Then overload the abstract virtual method \verb#compute_interest# with your implementation of the interest measure. \section{The Interest Point Detector Classes} The InterestPointDetector classes in {\tt <vw/InterestPoint/Detector.h>} form the heart of the interest point module. They integrate the various components of the module into an easy all-in-one interface for detecting interest points. The InterestPointDetector class itself is an abstract base class. Two implementations of its interface are supplied, {\tt ScaledInterestPointDetector} and {\tt SimpleInterestPointDetector}. The Scaled version uses scale space methods, while the Simple version does not; otherwise they are identical. When constructing either type of detector, you specify an interest measure class and a thresholding class. Built-in thresholding classes are defined in {\tt <vw/InterestPoint/Threshold.h>}. \begin{verbatim} LoGInterest<float> log; InterestThreshold<float> thresholder(0.0001); ScaledInterestPointDetector<float> detector(&log, &thresholder); std::vector<InterestPoint> points = interest_points(src, &detector); \end{verbatim} \section{Flow of Data} Although designed primarily for flexibility, the interest point module takes care not to sacrifice efficiency by unnecessarily recomputing internal images such as gradients. If you take advantage of the module's flexibility by customizing its framework (for example, by implementing a new interest measure class), you will probably make use of {\tt ImageInterestData}, a struct which holds a source image and several interesting related images, such as gradients and interest. \section{Generating Descriptors} A descriptor of an interest point represents the local image region around the point. It should be distinctive as well as invariant to factors such as illumination and viewpoint. The interest module contains basic functions and classes for generating descriptors in {\tt <vw/InterestPoint/Descriptor.h>}. Generating a descriptor for an interest point requires knowledge of the point's source image. Different descriptor classes may require different source data. The trivial {\tt PatchDescriptor} uses only the source {\tt ImageView} as its source data. 
\begin{verbatim}
SimpleInterestPointDetector<float> detector(&harris, &thresholder);
std::vector<InterestPoint> points = interest_points(source_image, &detector);
PatchDescriptor<float> pd;
generate_descriptors(points, source_image, pd);
\end{verbatim}

Properly generating descriptors for interest points found with {\tt ScaledInterestPointDetector} is more involved, as various blurred versions of the source image may be required to provide local image regions for interest points at different scales.

\begin{verbatim}
ScaledInterestPointDetector<float> detector(&log, &thresholder);
ImageOctaveHistory<ImageInterestData<float> > history;
detector.record_history(&history);
SIFT_Descriptor<float> sd;
generate_descriptors(points, history, sd);
\end{verbatim}

\section{Matching}

Aliens have abducted this section. Are you man enough to save it? Huh, Punk?

\section{RANSAC}

RANdom SAmple Consensus {\em (or RANSAC)} is a method for sifting through messy data to remove outliers. RANSAC starts with the goal of fitting some objective to a mass of data; in the case of interest points, it is usually fitting some transform matrix to represent the movement of points from one image's coordinate frame to another. The algorithm works by randomly selecting a minimal number of matches and fitting an initial transform to this small selected set. It then proceeds to grow the initial set with matches from the original pool whose error stays within an inlier threshold. This process of randomly selecting a minimal set, fitting, and growing is repeated many times. The round that produced the most inliers is kept for a final stage where a better fitting algorithm can be applied to the entire final pool of matches. The transform solved in this last step is considered the best solution that correctly maps the inliers. This shotgun method, though not efficient, makes it possible to cope with a large percentage of outliers. {\em Yet, also be aware that it is entirely possible that, in a worst case scenario, RANSAC might fit itself to an interesting bunch of outliers.}

Vision Workbench's implementation can be found in {\tt <vw/Math/RANSAC.h>}. {\tt RandomSampleConsensus} expects three inputs during its construction. It requires a fitting functor that describes the type of transform matrix used for fitting. It also needs an error metric functor; for the case of interest points, {\tt InterestPointErrorMetric()} should do the job. Finally, an integer describing the inlier threshold is required, which defines the greatest error allowed during fitting. An example of construction is below.

\begin{verbatim}
vw::math::RandomSampleConsensus<math::SimilarityFittingFunctor,
                                math::InterestPointErrorMetric >
    ransac( vw::math::SimilarityFittingFunctor(),
            vw::math::InterestPointErrorMetric(),
            inlier_threshold );
\end{verbatim}

{\tt RandomSampleConsensus} is operated via an overloaded {\tt operator()}. It expects a container of Vector3s. The Interest Point module provides a helpful tool called {\tt iplist\_to\_vectorlist} for converting a {\tt std::vector} of InterestPoints to {\tt Vector3}s. Finally, the overloaded {\tt operator()} returns the final transform matrix that was used to select its inliers; this can be stored for later image transform operations if desired. {\tt RandomSampleConsensus} does not return a new list of inliers; instead, it returns the index locations of the inliers. It is left up to the user to repackage the interest points so that only inliers remain. An example of operations is below.
\begin{verbatim}
std::vector<Vector3> ransac_ip1 = iplist_to_vectorlist(matched_ip1);
std::vector<Vector3> ransac_ip2 = iplist_to_vectorlist(matched_ip2);
Matrix<double> H(ransac(ransac_ip1,ransac_ip2));
std::vector<int> indices = ransac.inlier_indices(H,ransac_ip1,ransac_ip2);
\end{verbatim}

Lastly, below is a listing of the fitting functors that are available in {\tt <vw/Math/Geometry.h>}.

\begin{table}[h]\begin{centering}
\begin{tabular}{|c|l|}
\hline
Functor & Description \\ \hline \hline
\verb#HomographyFittingFunctor()# & 8 DOF. Also known as a projective matrix. \\ \hline
\verb#AffineFittingFunctor()# & 6 DOF. Handles rotation, translation, scaling, and skewing. \\ \hline
\verb#SimilarityFittingFunctor()# & 4 DOF. Handles rotation, translation, and scaling. \\ \hline
\verb#TranslationRotationFittingFunctor()# & 3 DOF. Also known as a Euclidean matrix. \\ \hline
\end{tabular}
\caption{Fitting functors defined in {\tt <vw/Math/Geometry.h>}.}
\label{tbl:fitting-functors}
\end{centering}\end{table}

\section{Pre-built Tools}

To further help introduce the use of interest points, Vision Workbench supplies three utility programs for working with interest points. These utilities can be found with the rest in Vision Workbench's build path; their source code is available in {\tt <vw/tools/>}.

\begin{figure}[h]
\begin{center}
\includegraphics[width=6in]{images/ip_demo_match.jpg}
\end{center}
\caption{Example debug image from {\tt ipmatch}.}
\label{fig:demo}
\end{figure}

The above is an example of a result that can be created with {\tt ipfind} and {\tt ipmatch}. The image shows red lines that were drawn between matched points. Here are the commands used to create that image:

\begin{verbatim}
ipfind left.png right.png
ipmatch left.png right.png -r homography -d
\end{verbatim}

There is also a utility called {\tt ipalign} which can be used to align images using interest points. It works by finding matched points and then fitting an affine or homography matrix to its observations. The solved matrix is then used to transform the secondary images into the perspective of the reference (first) image.

\begin{figure}[h]
\begin{center}
\includegraphics[width=6in]{images/aligned_images.jpg}
\end{center}
\caption{Example images using {\tt ipalign}. The top row shows the original images; the bottom row shows them after alignment.}
\label{fig:align_demo}
\end{figure}

Here are the commands used to create the {\tt ipalign} example:

\begin{verbatim}
ipalign 0.jpg 1.jpg 2.jpg 3.jpg --homography
\end{verbatim}

\begin{thebibliography}{1}

\bibitem{harris88} Harris, Chris, and Mike Stephens, ``A Combined Corner and Edge Detector,'' Proc. 4th Alvey Vision Conf., Manchester, pp. 147--151, 1988.

\bibitem{jakkula10} Jakkula, Vinayak R., ``Efficient Feature Detection Using OBALoG: Optimized Box Approximation of Laplacian of Gaussian,'' Unpublished master's thesis, Kansas State University, 2010.

\bibitem{lindeberg98} Lindeberg, Tony, ``Feature Detection with Automatic Scale Selection,'' Int. J. of Computer Vision, Vol. 30, No. 2, 1998.

\bibitem{lowe04} Lowe, David G., ``Distinctive Image Features from Scale-Invariant Keypoints,'' Int. J. of Computer Vision, 2004.

\end{thebibliography}
{ "alphanum_fraction": 0.7874541927, "avg_line_length": 46.082781457, "ext": "tex", "hexsha": "8b0f593bb5ce3dc5fbfe11c2762d939c18460c30", "lang": "TeX", "max_forks_count": 135, "max_forks_repo_forks_event_max_datetime": "2022-03-18T13:51:40.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-19T00:57:20.000Z", "max_forks_repo_head_hexsha": "b06ba0597cd3864bb44ca52671966ca580c02af1", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "maxerbubba/visionworkbench", "max_forks_repo_path": "docs/workbook/interestpoint_module.tex", "max_issues_count": 39, "max_issues_repo_head_hexsha": "b06ba0597cd3864bb44ca52671966ca580c02af1", "max_issues_repo_issues_event_max_datetime": "2021-03-23T16:11:55.000Z", "max_issues_repo_issues_event_min_datetime": "2015-07-30T22:22:42.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "maxerbubba/visionworkbench", "max_issues_repo_path": "docs/workbook/interestpoint_module.tex", "max_line_length": 200, "max_stars_count": 318, "max_stars_repo_head_hexsha": "b06ba0597cd3864bb44ca52671966ca580c02af1", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "maxerbubba/visionworkbench", "max_stars_repo_path": "docs/workbook/interestpoint_module.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-17T07:12:20.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-02T16:37:34.000Z", "num_tokens": 3242, "size": 13917 }
% SPDX-FileCopyrightText: © 2021 Martin Michlmayr <[email protected]>
% SPDX-License-Identifier: CC-BY-4.0

\setchapterimage[9.5cm]{images/mentor}

\chapter{Mentorship}
\labch{mentorship}

Several projects and organizations offer formalized mentorship programs to make it easier for those new to open source to get involved. Some organizations also pay stipends to mentees.

The Linux Foundation offers a \href{https://www.linuxfoundation.org/en/about/diversity-inclusivity/mentorship/}{mentorship program} to help developers ``to learn, experiment, and contribute effectively to open source communities''. Projects that mentees can work on include the Linux kernel, Hyperledger, and GraphQL. The organization also provides \href{https://lfx.linuxfoundation.org/tools/mentorship}{tooling} through which projects can run mentorship programs.

X.Org's \href{https://www.x.org/wiki/XorgEVoC/}{Endless Vacation of Code (EVoC)} is another program that pairs those interested in working on a project with a mentor.

Several organizations offer mentorship programs that are open to many open source projects, including \href{https://summerofcode.withgoogle.com/}{Google Summer of Code} and \href{https://www.outreachy.org/}{Outreachy}. Being part of an organization may allow a project to participate in such outreach programs.

\begin{kaobox}[frametitle=Outreachy: increasing diversity in open source]
\href{https://www.outreachy.org/}{Outreachy} intends to increase diversity in open source by offering remote internships to ``anyone who faces under-representation, systemic bias, or discrimination in the technology industry of their country'':

\begin{quote}
Interns work with experienced mentors from open source communities. Outreachy internship projects may include programming, user experience, documentation, illustration, graphical design, data science, project marketing, user advocacy, or community event planning.
\end{quote}
\end{kaobox}

Mentorship and outreach programs are a good way for projects to attract new contributors and grow their community. Foundations can help by taking care of the legal paperwork and distributing funds.
{ "alphanum_fraction": 0.8043680297, "avg_line_length": 67.25, "ext": "tex", "hexsha": "f6ba41ca1d2484707545ce9930ccd23f6c5da4a5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1c7370b86f9ea5133f6a077d9b7b0105729f21ac", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "tbm/foss-foundations-primer", "max_forks_repo_path": "chapters/community/mentorship.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1c7370b86f9ea5133f6a077d9b7b0105729f21ac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "tbm/foss-foundations-primer", "max_issues_repo_path": "chapters/community/mentorship.tex", "max_line_length": 466, "max_stars_count": 3, "max_stars_repo_head_hexsha": "1c7370b86f9ea5133f6a077d9b7b0105729f21ac", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "tbm/foss-foundations-primer", "max_stars_repo_path": "chapters/community/mentorship.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-06T12:32:59.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-29T20:30:34.000Z", "num_tokens": 487, "size": 2152 }
\documentclass[10pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{lmodern}
\usepackage{hyperref}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\author{Marten Jäger}
\title{Manual}
\begin{document}
\maketitle

For several regions of the new genome release GRCh38, alternative loci exist. The reference genome chromosomes with their corresponding accession numbers are stored in \\
\url{ftp://ftp.ncbi.nlm.nih.gov/genomes/H_sapiens/Assembled_chromosomes/chr_accessions_GRCh38.p2}

For example, it looks like this:
\begin{verbatim}
#Chromosome RefSeq Accession.version RefSeq gi GenBank Accession.version GenBank gi
1 NC_000001.11 568815597 CM000663.2 568336023
2 NC_000002.12 568815596 CM000664.2 568336022
...
MT NC_012920.1 251831106 J01415.2 113200490
\end{verbatim}

An overview of the alternative loci can be found in \\
\url{ftp://ftp.ncbi.nlm.nih.gov/genomes/H_sapiens/Assembled_chromosomes/alts_accessions_GRCh38.p2}
\begin{verbatim}
#Chromosome RefSeq Accession.version RefSeq gi GenBank Accession.version GenBank gi
1 NW_011332688.1 732663890 KN538361.1 729255073
1 NW_009646195.1 698040070 KN196473.1 693592224
1 NW_009646194.1 698040071 KN196472.1 693592225
1 NW_009646196.1 698040069 KN196474.1 693592223
1 NW_011332687.1 732663891 KN538360.1 729255091
2 NW_011332690.1 732663888 KN538363.1 729255051
...
\end{verbatim}

Here the Chromosome column corresponds to the Chromosome column in the \textit{chr\_accessions\_GRCh38.p2} file, and the \textit{GenBank Accession.version} column corresponds, in modified form, to the chromosome reference in the VCF files.

The regions are defined in a file at \\
\url{ftp://ftp.ncbi.nlm.nih.gov/genomes/H\_sapiens/chr\_context\_for\_alt\_loci/GRCh38.p2/genomic\_regions\_definitions.txt}
\begin{verbatim}
#region_name chromosome start stop
REGION108 NC_000001.11 2448811 2791270
PRAME_REGION_1 NC_000001.11 13075113 13312803
REGION200 NC_000001.11 17157487 17460319
REGION189 NC_000001.11 26512383 26678582
REGION109 NC_000001.11 30352191 30456601
FOXO6 NC_000001.11 41250328 41436604
REGION190 NC_000001.11 112909422 113029606
CEN1 NC_000001.11 122026460 125184587
1Q21 NC_000001.11 144488706 144674781
...
\end{verbatim}

Which alt loci belong to which region can now be derived from the files in the subfolders of \\
\url{ftp://ftp.ncbi.nlm.nih.gov/genomes/H_sapiens/chr_context_for_alt_loci/GRCh38.p2/ALT_REF_LOCI_X/alt_scaffolds/alt_scaffold_placement.txt}
where \textit{X} is in $[1..35]$.

This means we can build up a tree-like structure for the alt loci:
\begin{verbatim}
genome
|- chr1
|- chr2
:  |-Region2.1
:  |-Region2.2
:  :  |-alt_loci2.2.1
:  :  |-alt_loci2.2.2
:  :  |-alt_loci2.2.3
:  :  :
\end{verbatim}

\subsection*{Regions}
\begin{itemize}
	\item 178 regions with alternative loci
	\item 1--35 alt. loci per region
	\item cumulative length: 173,055,655 bp
	\item assuming the genome has a size of 3,088,269,832 bp (top-level chromosomes only), the alt loci cover about $5.6\%$ of the whole genome
\end{itemize}

\subsection*{alt. loci}
\begin{itemize}
	\item 261 alt. loci
	\item there are varying numbers of alt. loci covering a region. An alt. locus does not have to cover the whole region (e.g., 1Q21, ADAM5, APOBEC, \ldots{} -- mostly single alt. locus $\leftrightarrow$ region relations) but can also cover only a part of the region (e.g., ABR, CYP2D6, \ldots). Some regions are not even covered completely by alt. loci but are defined by the most 5' and 3' alt. loci ends in a specific genomic range (e.g.,
KRTAP\_REGION\_1, OLFACTORY\_REGION\_1, PRADER\_WILLI, \ldots)
\end{itemize}

\subsection*{seeds}
There exist alignment information files for each alt. locus on the NCBI FTP server. These alignments are stored in GTF format (single row) and

\subsection*{alignment}
\begin{itemize}
	\item
\end{itemize}

\section*{MANUAL}

\subsection*{Usage}

\paragraph*{download}
\begin{verbatim}
java -jar hg38altlociselector-cli-0.0.1-SNAPSHOT.jar download GRCh38
\end{verbatim}
Using the \texttt{download} command, all necessary files are downloaded. This includes the genome reference (from BWA-kit) and the region and alt loci definition files (from NCBI, see above).

\paragraph*{create-fa}
\begin{verbatim}
java -jar hg38altlociselector.jar create-fa -o data/
\end{verbatim}
Creates fasta files for the regions. The alt loci can be extended and adapted to the strand of the reference. Using the \texttt{-o} flag, the output folder can be specified.

\paragraph*{create-seed}
\begin{verbatim}
java -jar hg38altlociselector.jar create-seed -o data/
\end{verbatim}
Creates seed files used by the Seqan alignment tool in the specified output folder.

\paragraph*{align}
\begin{verbatim}
java -jar hg38altlociselector.jar align -d data/ -s
\end{verbatim}
Creates the fasta and seed files and stores them in the \textit{temporary} folder, then calls the Seqan aligner, which creates the VCF files.\\
With the \texttt{-d} flag you can specify the data directory, and with \texttt{-s} you will generate single files for each

\paragraph*{annotate}
\begin{verbatim}
java -jar hg38altlociselector.jar annotate -v file.vcf.gz -a alt_loci.vcf.gz
\end{verbatim}
The \texttt{annotate} command will annotate an existing VCF file using knowledge about the alternative loci. It will take all known alternative loci for a specific region and, according to the overlap with the SNVs between the reference toplevel allele and the alternative loci alleles, decide which allele is the most probable. Those variants that are false positives (FP) according to the selected allele will be marked in the \textit{FILTER} column.
\begin{verbatim}
##FILTER=<ID=altloci,Description="This is a FP variant according to the most probable alt loci.">
\end{verbatim}

There are several checks to see whether the variant distribution indicates a specific common allele.
\begin{enumerate}
	\item Imagine we have two sets of variants: $\mathcal{A}$, the SNVs defining the difference between the toplevel chromosome allele and the alt locus allele, and $\mathcal{B}$, the variants found in the specific sample $\mathcal{S}$ for the same chromosomal region $\Re$.\\
	The
\end{enumerate}

\end{document}
{ "alphanum_fraction": 0.7649471525, "avg_line_length": 37.2882352941, "ext": "tex", "hexsha": "d6d38d57b9b13a9f7ee7fe6ba0421fdb8a7db049", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-01-18T22:52:52.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-15T13:52:59.000Z", "max_forks_repo_head_hexsha": "688a923f297338f0053c94d620a8601b1c4d9203", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "charite/asdpex", "max_forks_repo_path": "doc/manual.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "688a923f297338f0053c94d620a8601b1c4d9203", "max_issues_repo_issues_event_max_datetime": "2019-12-18T14:43:02.000Z", "max_issues_repo_issues_event_min_datetime": "2017-01-04T08:53:09.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "charite/asdpex", "max_issues_repo_path": "doc/manual.tex", "max_line_length": 467, "max_stars_count": 9, "max_stars_repo_head_hexsha": "688a923f297338f0053c94d620a8601b1c4d9203", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "charite/asdpex", "max_stars_repo_path": "doc/manual.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-02T12:10:54.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-14T19:04:08.000Z", "num_tokens": 1941, "size": 6339 }
% use paper, or submit % use 11 pt (preferred), 12 pt, or 10 pt only \documentclass[letterpaper, preprint, paper,11pt]{AAS} % for preprint proceedings %\documentclass[letterpaper, paper,11pt]{AAS} % for final proceedings (20-page limit) %\documentclass[letterpaper, paper,12pt]{AAS} % for final proceedings (20-page limit) %\documentclass[letterpaper, paper,10pt]{AAS} % for final proceedings (20-page limit) %\documentclass[letterpaper, submit]{AAS} % to submit to JAS \usepackage{bm} \usepackage{amsmath} \usepackage{subfigure} %\usepackage[notref,notcite]{showkeys} % use this to temporarily show labels \usepackage[colorlinks=true, pdfstartview=FitV, linkcolor=black, citecolor= black, urlcolor= black]{hyperref} \usepackage{overcite} \usepackage{footnpag} % make footnote symbols restart on each page \PaperNumber{XX-XXX} \begin{document} \title{MANUSCRIPT TITLE (UP TO 6 INCHES IN WIDTH AND CENTERED, 14 POINT BOLD FONT, MAJUSCULE)} \author{John L. Doe\thanks{Title, department, affiliation, postal address.}, Jane Roe\thanks{Title, department, affiliation, postal address.}, \ and J.Q. Public\thanks{Title, department, affiliation, postal address.} } \maketitle{} \begin{abstract} The abstract should briefly state the purpose of the manuscript, the problem to be addressed, the approach taken, and the nature of results or conclusions that can be expected. It should stand independently and tell enough about the manuscript to permit the reader to decide whether the subject is of specific interest. The abstract shall be typed single space, justified, centered, and with a column width of 4.5 inches. The abstract is not preceded by a heading of ``Abstract'' and its length may not extend beyond the first page. \end{abstract} \section{Introduction} The American Astronautical Society (AAS) publishes bound sets of printed conference proceedings for personal, institutional, and library usage. The availability of hardcopy enhances the longevity of your work and elevates the importance of your conference contribution to professionals through archival publication. To preserve the consistency and quality of the proceedings, all authors adhere to the latest version of AAS conference proceedings format. This document is intended to serve as a visual and instructional guide, and as a \LaTeX\ document template, for the AAS conference proceedings format. This template provides the basic font sizes, styles, and margins required by the publisher's formatting instructions. There are also styles for centered equations, figure and table captions, section and sub-section headings, footnote text, \emph{etc}. This template provides samples of their usage. To use this document as a template, simply copy and change its contents with your own information while maintaining the required predefined style, rather than starting anew. Since this is not a tutorial on how to use \LaTeX\, refer to \LaTeX\ manuals for more information. Your manuscript should include a paper number, a title, an author listing, an abstract, an introductory section, one or more sections containing the main body of the manuscript, a concluding summary section, and a reference or bibliography section. You may also include a section on notation, an acknowledgements section, and appendices, as illustrated in the sequel. You should \emph{not} include a leading cover sheet. Author affiliation shall appear on the first page, added as a footnote to the last name of each the author. 
If a distributional release statement or copyright notice is required by your sponsor, this is added as a footnote to the title of the manuscript, appearing on the first page only. Page numbers should be centered halfway between the lower margin and the bottom edge of the page (\emph{i.e.}, approximately 0.75 inches from the bottom). Copy should be single space with double space between paragraphs, with the first line of each paragraph indented 0.2 inches. The recommended sans-serif font for paper number, title, and author listing is \emph{Arial}, or, \emph{Helvetica}. The title font and paper-number font should be the same: 14-point sans-serif, centered, and bold. The author-listing font should be 12-point sans-serif, centered, and bold. The recommended serif font for body text, headings, \emph{etc}., is \emph{Times} or \emph{Times New Roman} at 10-12 point, 11 point preferred. The captions for figures and tables are bold 10-point serif font. The endnote reference text and footnote text is 9-point serif font. The right-hand margin of body text should be justified; if not, it should be fairly even nevertheless. All text and text background shall remain uncolored (black on white). These conventions should be automatically implemented in this \LaTeX\ template when the predefined styles of this template are used. The body text of this template is based on the preferred font size of 11 points. To change this to 12-point size, increase the font size at the top of the \LaTeX\ template by uncommenting the appropriate {\tt documentclass[]\{\}} line. For very long manuscripts, a 10-point font may be used to keep the manuscript within the publisher's limit of twenty (20) physical pages. \section{This is a Sample of a General Section Heading} Numbering of section headings and paragraphs should be avoided. Major section headings are majuscule, bold, flush (aligned) left, and use the same style san-serif font as the body text. Widow and orphan lines should also be avoided; more than one line of a paragraph should appear at the end or beginning of a page, not one line by itself. A heading should not appear at the bottom of a page without at least two lines of text. Equations, figures, and tables must be sequentially numbered with no repeated numbers or gaps. Excessive white space --- such as large gaps before, between, and after text and figures --- should be minimal and eliminated where possible. \subsection{This Is a Sample of a Secondary (Sub-Section) Heading} Secondary, or sub-section, headings are title case (miniscule lettering with the first letter of major words majuscule), flush left, and bold. Secondary headings use the same serif font style as the body text and, like section headings, should not be numbered. Tertiary headings should be avoided, but if necessary, they are run-in, italic, and end in a period, as illustrated with the next six (6) paragraphs. \begin{equation} \label{eq:ab} a = b^{2} \end{equation} \subsubsection{Equations.} Equations are centered with the equation number flush to the right. In the text, these equations should be referenced by name as Eq.~\eqref{eq:ab} or Equation~\eqref{eq:ab} (\emph{e.g}., not eq. 1, (1), or \emph{Equation 1}). 
To improve readability, scalar variable names such as $a$ and $b^{2}$ are usually italicized when appearing in text and equations.\footnote{A section on mathematical notation is provided in the sequel.} \subsubsection{Abbreviations.} When abbreviations for units of measure are used, lower case without periods is preferred in most instances; \emph{e.g}. ft, yd, sec, ft/sec, \emph{etc}., but in. for inch. \begin{figure}[htb] \centering\includegraphics[width=3.5in]{Figures/test} \caption{Illustration Caption Goes Here} \label{fig:xxx} \end{figure} \subsubsection{Figures.} Illustrations are referenced by name and without formatting embellishments, such as Figure~\ref{fig:xxx}, Figure 2, \emph{etc}., or, Figures 3 and 4 (\emph{e.g}., not figure (1), Fig. 1, \underline{Figure 1}, \emph{Figure 1}, \emph{etc}.). Each illustration should have a caption unless it is a mere sketch. Single-phrase captions are usually in title case; they are bold 10-point serif font and centered below the figure as shown in Figure~\ref{fig:xxx}. An explanatory caption of several sentences is permissible. Ideally, every illustration should be legibly sized -- usually about one-half or one-quarter page -- and appear in the text just before it is called out or mentioned. Alternatively, it is also permissible to place all figures together at the end of the text as a separate appendix; however, these two conventions should not be mixed. All figures and callouts should remain clearly legible after reduction. All illustrations appear as black and white in the final printing, although colors are retained in the electronic (CD-ROM) version. \subsubsection{Graphic Formats.} The highest quality formats are Encapsulated PostScript (EPS) and PDF vector-graphic formats. These formats are recommended for all illustrations, unless they create document files that are excessively large. Specifically, you should change the graphic format or compress the image resolution whenever an illustrated page takes more than two seconds to render onscreen, or, whenever the total manuscript file size starts to approach 5 Mb. Photographs, illustrations that use heavy toner or ink (such as bar graphs), and figures without text callouts, may be suitably displayed with picture formats such as BMP, GIF, JPEG, PNG, TIFF, \emph{etc}. Line drawings, plots, and callouts on illustrations, should not use picture formats that do not provide sharp reproduction. All graphical content must be embedded when creating a PDF document, especially any fonts used within the illustration. Note that the Windows Metafile Format (WMF) is sometimes problematic and should be avoided. \subsubsection{References and Citations.} The citation of bibliographical endnote references is indicated in the text by superscripted Arabic numerals, preferably at the end of a sentence.\cite{doe2005, style1959} If this citation causes confusion in mathematics, or if a superscript is inappropriate for other reasons, this may be alternately expressed as (Reference~\citenum{doe2005}) or (see References~\citenum{doe2005} and \citenum{style1959}), (\emph{e.g}., not [1], Ref. (1), \emph{etc}.). While there is no singly prescribed format for every bibliographical endnote, references should be consistent in form. Citations should be sufficient to allow the reader to precisely find the information being cited, and should include specific pages, editions, and printing numbers where necessary. URL citations are discouraged, especially when an archival source for the same information is available. 
If a URL citation is required, it should appear completely and as a footnote instead of a bibliographical reference.\footnote{\url{http://www.univelt.com/FAQ.html\#SUBMISSION}} The citation of private communication is especially discouraged, but if required it should be cited as a footnote and include the date, professional affiliation, and location of the person cited.\footnote{Gangster, Maurice (1999), personal correspondence of March 21st. Sr. Consultant, Space Cowboy Associates, Inc., Colorado Springs, CO.} \begin{table}[htbp] \fontsize{10}{10}\selectfont \caption{A Caption Goes Here} \label{tab:label} \centering \begin{tabular}{c | r | r } % Column formatting, \hline Animal & Description & Price (\$)\\ \hline Gnat & per gram & 13.65 \\ & each & 0.01 \\ Gnu & stuffed & 92.50 \\ Emu & stuffed & 33.33 \\ Armadillo & frozen & 8.99 \\ \hline \end{tabular} \end{table} \emph{Tables.} Tables are referred to by name in the text as Table~\ref{tab:label}, or, Tables 2 and 3 (\emph{e.g}., not table 1, Tbl. 1, or \emph{Table 1}). The title is centered above the table, as shown in Table 1. The font size inside tables should be no larger than the body text, but may be adjusted down to 9-point if necessary (10-point serif font is considered nominal). Note that table units are in parentheses. Only the minimum number of table lines needed for clarity is desired. Ideally, every table should appear within the text just before it is called out, but, it is also permissible to place all tables together at the end of the text as a separate appendix. If so, these two conventions should not be mixed. Equations, figures, and tables must be sequentially numbered with no repeated numbers or gaps. Each figure and table shall be called out in the text; gratuitous figures and tables that are not called out should be eliminated. Intermediate equations may be numbered without being called out. \section{Manuscript Submission} The Portable Document Format (PDF) is the preferred format for electronic submissions.\footnote{By contributing your manuscript for proceedings publication, you necessarily extend any copyrights to the AAS and its designated publisher, to allow the AAS to publish your manuscript content in all the forms that it wishes.} The page size should be 8.5 inches by 11 inches exactly. You should use ``press-quality'' or ``high-quality'' software settings to create your PDF file; these settings tend to keep the PDF file true to the original manuscript layout, and automatically embed the correct fonts, \emph{etc}. Otherwise, settings such as ``Embed All Fonts'', \emph{etc}., should be selected as available. The use of internal hyperlinks within the electronic file is not encouraged because hyperlinks may not be supported in the final version of the electronic proceedings. \subsection{Journal Submission} If you wish to submit this manuscript to the \emph{Journal of Astronautical Sciences}, it must be re-formatted into a double-spaced format. This can be done easily with this template. At the top of the document, there are two (2) types document class statements ({\tt paper} and {\tt submit}). The first type is the one to use for a conference paper. The second type , which is commented out, can be used to reformat the paper for the JAS journal submission. \section{Conclusion} Some AAS meetings are co-sponsored with the American Institute of Aeronautics and Astronautics (AIAA). 
When your paper number starts with ``AAS'', or when the conference is described as a joint ``AAS/AIAA'' meeting with the AAS listed first, this AAS conference proceedings format shall be used. Your final manuscript should be camera-ready as submitted --- free from technical, typographical, and formatting errors. Manuscripts not suitable for publication are omitted from the final proceedings. \section{Acknowledgment} Any acknowledgments by the author may appear here. The acknowledgments section is optional. \section{Notation} \begin{tabular}{r l} $a$ & a real number \\ $b$ & the square root of $a$ \\ \end{tabular} \\ If extensive use of mathematical symbols requires a table of notation, that table may appear here. Where the first mathematical symbol is introduced, a footnote should direct the attention of the reader to this table.\footnote{The footnote symbols are a standard sequence: $\ast$, $\dagger$, $\ddag$, \emph{etc}. This sequence of footnote symbols should restart with each new page.} The notation table should be simple and reasonably consistent with the standards of modern technical journals, as illustrated above. The notation table does not need its own caption like an ordinary table, since the section heading serves this purpose. The notation section is optional. \appendix \section*{Appendix: Title here} Each appendix is its own section with its own section heading. The title of each appendix section is preceded by ``APPENDIX: '' as illustrated above, or ``APPENDIX A: '', ``APPENDIX B: '', \emph{etc}., when multiple appendixes are necessary. Appendices are optional and normally go after references; however, appendices may go ahead of the references section whenever the word processor forces superscripted endnotes to the very end of the document. The contents of each appendix must be called out at least once in the body of the manuscript. \subsection*{Miscellaneous Physical Dimensions} The page size shall be the American standard of 8.5 inches by 11 inches (216 mm x 279 mm). Margins are as follows: Top -- 0.75 inch (19 mm); Bottom -- 1.5 inches (38 mm); Left -- 1.25 inches (32 mm); Right -- 1.25 inch (32 mm). The title of the manuscript starts one inch (25.4 mm) below the top margin. Column width is 6 inches (152.5 mm) and column length is 8.75 inches (222.5 mm). The abstract is 4.5 inches (114 mm) in width, centered, justified, 10 point normal (serif) font. \bibliographystyle{AAS_publication} % Number the references. \bibliography{references} % Use references.bib to resolve the labels. \end{document}
{ "alphanum_fraction": 0.7754010695, "avg_line_length": 90.4175824176, "ext": "tex", "hexsha": "3a4375f8cdfb61db183858d8e26a1b7d7ba588af", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7739daab7f39e0daa6bb640f255c781c3c9388cb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fsbr/startracker-thesis", "max_forks_repo_path": "AAStemplatev2_0_6.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7739daab7f39e0daa6bb640f255c781c3c9388cb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fsbr/startracker-thesis", "max_issues_repo_path": "AAStemplatev2_0_6.tex", "max_line_length": 1381, "max_stars_count": 1, "max_stars_repo_head_hexsha": "55bd84e8d4b4bffa8f7526bd5b94ddef80911f99", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "aerosara/thesis", "max_stars_repo_path": "ASC 2015/AAS Paper Format Instructions and Templates for LaTeX Users/AAStemplatev2_0_6.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-19T23:08:31.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-19T23:08:31.000Z", "num_tokens": 3818, "size": 16456 }
\setvariables[article][shortauthor={Jégousso, O'Dell}, date={January, 2020}, issue={4}, DOI={https://doi.org/10.7916/archipelagos-6fz4-kb03}] \setupinteraction[title={Thinking Digital: Archives and Glissant Studies in the Twenty-First Century},author={Jeanne Jégousso, Emily O'Dell}, date={January, 2020}, subtitle={Thinking Digital}] \environment env_journal \starttext \startchapter[title={Thinking Digital: Archives and Glissant Studies in the Twenty-First Century} , marking={Thinking Digital} , bookmark={Thinking Digital: Archives and Glissant Studies in the Twenty-First Century}] \startlines {\bf Jeanne Jégousso Emily O'Dell } \stoplines {\startnarrower\it The {\em Library of Glissant Studies}, an open-access website, centralizes information by and on the work of Caribbean writer Édouard Glissant (Martinique, 1928--France, 2011). Endorsed by literary executors Sylvie and Mathieu Glissant, this multilingual, multimedia website provides bibliographic information, reproduces rare sources, connects numerous institutions, and includes the works of both renowned and emerging scholars in order to stimulate further research. The library chronicles all forms of academic work devoted to Glissant's writings, such as doctoral dissertations, MA theses, articles, books, and conference presentations. In this article, Jeanne Jégousso and Emily O'Dell address the origin, the evolution, and the challenges of the digital platform that was launched in March 2018. \stopnarrower} \blank[2*line] \blackrule[width=\textwidth,height=.01pt] \blank[2*line] Martinican author Édouard Glissant (1928--2011) revolutionized contemporary literary thought while providing new ways to theorize and understand transnational exchanges thanks to his key notions of Relation, developed in his 1990 {\em Poétique de la Relation} ({\em Poetics of Relation}), and the {\em Tout-monde} (Whole-World), which was also the title of his 1993 novel. Drawing from the diverse philosophical, poetic, and oral traditions of Europe, Africa, and the Americas, Glissant questioned and reconsidered the very notion of national boundaries and cultural legacies in order to create a dialogue among civilizations across time and geographic borders. Toward the end of his life, Glissant became particularly preoccupied with the continuation of the work that the Canadian scholar Alain Baudot had completed in his {\em Bibliographie annotée d'Édouard Glissant} ({\em Annotated Bibliography of Édouard Glissant}), which contains more than thirteen hundred references and sixty illustrations pertaining to Glissant's work and its criticism. However, because Baudot's bibliography had been out of print since 1993, Glissant expressed the need for a new project that would \quotation{poursuivre le travail d'Alain Baudot} (\quotation{continue Alain Baudot's work}) and include the books, articles, conference talks, interviews, newspaper clippings, and poems that Glissant had written in the 1990s and 2000s.\footnote{Édouard Glissant, discussion with Raphaël Lauro, December 2010, Paris. Unless otherwise noted, all translations are ours.} The resulting project is the {\em Library of Glissant Studies}, a digital platform created by Raphaël Lauro and Jeanne Jégousso dedicated to inspiring new research and facilitating collaboration between scholars. 
In this article, we address the inspiration behind and purpose of the {\em Library of Glissant Studies}, as well as the hermeneutic questions that arose during the first stages of this project and the research philosophy that led to the design and functionality of the online platform. The \useURL[url1][https://www.glissantstudies.com][][{\em Library of Glissant Studies}]\from[url1] ({\em LoGS}) was designed as a digital bibliography dedicated to making textual references by and about Édouard Glissant accessible and widely available to readers, students, and scholars on every continent.\footnote{See the {\em Library of Glissant Studies}, \useURL[url2][https://www.glissantstudies.com]\from[url2].} The objective of the project is to preserve and share information on Glissant's work and thought in an open-access website and also to facilitate scholarly collaboration by, on the one hand, including materials in numerous languages from geographically disparate areas and by, on the other, creating research groups in several institutions. It is important to emphasize that although the majority of Glissant's work was produced in French, the {\em Library of Glissant Studies} was not designed or developed exclusively for French speakers and French materials. Instead, the team pays special attention to including all texts pertaining to Glissant or his work in any language, which demonstrates Glissant's theory formulated in his fifth essay of poetics, {\em La Cohée du Lamentin}, that there is a \quotation{multirelation où toutes les langues du monde . . . trament ensemble des chemins qui sont autant d'échos} (\quotation{a multirelation where all the languages of the world . . . hatch together paths that are as many echoes}).\footnote{Édouard Glissant, {\em La Cohée du Lamentin} (Paris: Gallimard, 2005), 137.} In other words, the implementation of this philosophy in the construction of the digital bibliography supposes that the aggregation of languages will facilitate the emergence of new research trends that will eventually resonate and influence one another. 
In addition, Glissant writes in his 1981 essay {\em Le discours antillais} that plurilingualism is one of the paths to Relation, which consists of ongoing exchanges between various elements as well as cultural {\em metissage}, and that \quotation{multilinguisme \quote{disperse} le texte écrit dans une diversité concrète dont il faut dès maintenant explorer les accès inconnus} (\quotation{plurilingualism \quote{diffuses} the written text in a concrete diversity of which we must immediately explore the unknown paths}).\footnote{Édouard Glissant, {\em Le discours antillais} (Paris: Gallimard, 1997), 616.} Therefore, in keeping with Glissant's conception of plurilingualism, the {\em Library of Glissant Studies} strives to facilitate exchanges by including multiple languages from numerous countries and to encourage students, scholars, and readers to explore \quotation{unknown paths} in Glissant's literary production while developing an aesthetic and structure reflecting his notions of Relation and \quotation{archipelic-thinking.} In the 2005 essay {\em La Cohée du Lamentin}, Glissant distinguished the \quotation{pensée continentale, qui dévoile en diasporas les splendeurs absolues de l'Un{[}, et la{]} pensée archipélique, où se concentre l'infinie variation de la Diversité} (\quotation{continental-thinking, which reveals in diasporas the absolute splendors of the One{[}, and the{]} archipelic-thinking, where the infinite variation of Diversity is concentrated}).\footnote{Glissant, {\em La Cohée du Lamentin}, 231.} By placing these two types of thinking in opposition, Glissant favors the diverse multiplicity (archipelic-thinking) over the repetition of the same (continental-thinking). By bringing together languages, materials, and scholars from various disciplines and geographical areas, we designed our project to be faithful to this emphasis on plurality and diversity. Although Glissant was influenced by his Caribbean heritage, his thought and writings are not anchored to one particular place but rather illustrate a worldly and intercontinental way of thinking. He left his native island of Martinique at the age of eighteen to study in Paris, he lived in Louisiana and in New York City, and he traveled extensively to several islands in the Caribbean and all over the world. In fact, Glissant repeatedly rejects the notion of {\em identité-racine} (rooted-identity) to favor an {\em identité-rhizome} (rhizomatic-identity), which embraces a vision of the world that focuses on diversity and plurality instead of unicity and purity. Thus the {\em Library of Glissant Studies} was designed with an esthetics of the Whole-World in mind, and chronology seemed to be a more essential element to emphasize than geography because it gives a clearer picture of the evolution of Glissant's key notions and of their reception at particular moments in time. Therefore, the project will soon include an evolutive map that showcases the dispersion of the author's ideas throughout time by showing dots on a map that correspond to the number of works by and about the author created in a particular year. Glissant's role in redefining Caribbean studies and Atlantic studies and their roles as models for global studies is central to many recent studies, such as, for example, Kristin Van Haesendonck and Theo D'Haen's 2014 edited volume {\em Caribbeing: Comparing Caribbean Literatures and Cultures} and John E. Drabinski and Marisa Parham's 2015 volume of essays {\em Theorizing Glissant: Sites and Citations}. 
Therefore, this project also responds to the growing interest surrounding Glissant's literary production. For instance, according to the French database {\em Fichier central des thèses}, the number of completed doctoral theses in France mentioning Glissant's theories and works has increased from only 11 in 2009 to 25 in 2017. Similarly, ProQuest.com lists 681 theses devoted to Glissant produced in North America from 2008 to 2017. It is clear that Glissant's texts and notions are continuously and increasingly being studied and debated. Another indicator of the growing interest in Glissant studies is that these works are no longer relegated to French and francophone literary studies and are now the subject of study in other academic fields and disciplines (e.g., comparative literature, postcolonial theory, cultural criticism, linguistics, anthropology, philosophy, history, etc.), as demonstrated by Van Haesendonck and D'Haen's and Drabinski and Parham's volumes mentioned above.\footnote{Kristin Van Haesendonck and Theo D'Haen, {\em Caribbeing: Comparing Caribbean Literatures and Cultures} (Amsterdam: Rodopi, 2014); and John E. Drabinski and Marisa Parham, {\em Theorizing Glissant: Sites and Citations} (London: Rowman and Littlefield, 2015). Other examples include Celia Britton, {\em Édouard Glissant and Postcolonial Theory: Strategies of Language and Resistance} (Charlottesville: University Press of Virginia, 1999); Christina Kullberg, \quotation{Crossroads Poetics: Glissant and Ethnography,} {\em Callaloo} 36, no. 4 (2013): 968--82; Alexandre Leupin, {\em Édouard Glissant, philosophe: Héraclite et Hegel dans le Tout-monde} (Paris: Hermann, 2016); and Michael Wiedorn, {\em Think like an Archipelago: Paradox in the Work of Édouard Glissant} (New York: State University of New York Press, 2018).} As perspectives on Glissant's work evolve, it remains difficult to access and stay current with the numerous publications, events, and documents created around the globe. This is also true of Glissant's original works, which are inaccessible to the majority of researchers because many of his writings have never been republished since their initial appearance in currently unavailable literary journals, newspapers, and catalogues of art exhibits. It was essential to gather these materials to make them available for research and to further our understanding of Glissant's work. {\em LoGS} now offers unique documents, such as poems published in the early 1950s, rare interviews published in various newspapers, and exclusive pictures and manuscripts of the author sent in by various contributors. Thanks to this collection, readers and scholars are able to have a better understanding of the movement and evolution of Glissant's writing process. For the project's directors, the accumulation of documents creates an {\em archéologie relationnelle} (relational archaeology) of Glissant's writing.\footnote{This notion and its consequences were presented in a symposium at the University of Cambridge. See Raphaël Lauro, \quotation{Édouard Glissant, archiviste de lui-même: L'exemple du {\em Discours antillais}} (paper presented at Édouard Glissant: Le cri et la parole, Cambridge, June 2019).} In fact, it allows the user to compare various texts from the same time period, which reveals the ways Glissant developed one theme throughout various genres and from different perspectives.
For example, during the 1970s Glissant focused his literary production on the creation of a written language resulting from an assemblage of both French and Creole. This theory appears in 1975 in three forms: a conference presentation at the University of Milwaukee--Madison in April (later modified and included in Glissant's 1981 essay {\em The Caribbean Discourse}), a radio interview on 10 July, and a novel titled {\em Malemort}. Before the {\em Library of Glissant Studies}, these documents were scattered across different physical and digital archives. In addition, some of them, such as Glissant's presentation, were entirely unknown to the general public and a majority of scholars. This illustrates how our digital project very concretely lends itself to the creation of a relational archaeology, which places texts from different genres and time periods in relation. As a result, the 1975 conference presentation can be used to clarify sections of the {\em Caribbean Discourse} published in 1981, therefore opening new textual interpretations and permitting a better understanding of Glissant's texts and evolutions of thought. In addition, the digital archives allow us to perceive a \quotation{technique de l'entassement, de la reprise, de la répétition} (\quotation{technique of stacking, reiteration, repetition}),\footnote{Ibid.} which results in the formulation of new research questions, including, Why did the author stop writing poetry for nineteen years? How did a particular notion evolve between the first draft of an article or conference talk and the final publication? It is a significant advancement in the field of Glissant studies to be able to \quotation{see} the {\em ressassement} at work by placing literary journals from the 1950s and essays from the late 1960s in conversation. {\em Ressassement}, which means repetition with minor changes and is often symbolized by a spiral, plays an integral role in Glissant's writing as he explained to Alexandre Leupin in {\em Les entretiens de Baton Rouge}: \quotation{La répétition et le ressassement m'aident ainsi à fouiller} (\quotation{Repetition and {\em ressassement} help me to search}).\footnote{Alexandre Leupin, {\em Les entretiens de Baton Rouge} (Paris: Gallimard, 2008), 58.} For instance, a poem included in Glissant's 1969 essay {\em Poetic Intention} was actually published nine years previously in {\em La Voix des Poètes}, a literary journal directed by Simone Chevalier.\footnote{Édouard Glissant, {\em L'intention poétique} (Paris: Gallimard, 1997), 26.} Therefore, a digital bibliography allows us to create a complete overview of Glissant's work, to formulate new questions, and to examine new dimensions of the author's thought. Another objective of the project was to solve the problems created when scholars and students in the United States are unfamiliar with or do not have access to the developments in Glissant studies in Japan and when scholars in France are not aware of the works being produced in Canada, and so on. Although platforms such as JSTOR, WorldCat, Project MUSE, Érudit, and EBSCOHost make it possible to access some of these critical and literary productions, they are incomplete and often limited to particular languages or geographical regions. They also do not offer the interactive features necessary to allow Glissant scholars to collaborate with one another. 
By gathering all of Glissant's criticism in one place, it is possible to see which issues have been discussed at length, such as the notion of Relation, creolization, and {\em antillanité}, and which ones require further study. For example, the journal {\em Acoma} created by Glissant and divided into five issues has been analyzed only in pieces in two scholarly articles.\footnote{See {\em Acoma: Revue de littérature, de sciences humaines et politiques trimestrielle, 1--5, 1971--1973} (Perpignan: Presses Universitaires de Perpignan, 2005).} In keeping the bibliography current, we hope to see new topics of research emerge in the future and to encourage Glissantian criticism not to repeat itself by keeping people apprised of what types of work are being produced, and to keep the field of Glissant studies lively and innovative. From a practical standpoint, the {\em Library of Glissant Studies} consists of an interface organized into two primary bibliographic categories: texts and works written by Édouard Glissant and texts and works written about Édouard Glissant. The items in these sections are organized in chronological order to emphasize how particular essays and novels were shaped by previous publications in literary journals and prefaces and also to easily compare the author's work, its reception, and its criticisms. In addition, each user will be able to search by language, year, type of document, or keyword, and each of these subcategories includes the language and country of publication. In the near future, when looking for a specific topic, researchers will be able to discover additional materials through network visualization. Under each of the categories, bibliographical references will be indicated using the eighth edition of the {\em Modern Language Association Handbook} and accompanied by a PDF document or digital scan of materials in the public domain or by a link to the text in question if it is still under copyright (via Cairn, JSTOR, or other reference sites). The website also has a section in which users can submit documents to be considered for the project's platform. All the submitted documents are reviewed by our executive board to ensure the accuracy and authenticity of the information before posting them on our website. One of the most common challenges faced in academic endeavors is finding monetary support, and, because of a lack of funding, {\em LoGS} relies on a free web platform and the voluntary work of dozens of dedicated individuals. During the first year of the project, Raphaël Lauro, who had archived Glissant's manuscripts for the National Library of France, and Jeanne Jégousso, who recovered and archived documentation for the Center for French and Francophone Studies at Louisiana State University, worked together with the support of Édouard Glissant's widow, Sylvie Glissant, to archive the author's lesser-known works. During this period, Lauro and Jégousso indexed most of the scholarly work produced in France and in the United States, designed and built the website, and contacted established researchers and graduate students to create a network of Glissant scholars. This interdisciplinary team of students, scholars, and independent researchers is responsible for fostering greater accessibility to past and present work and to creating a discussion with a wide and diverse audience. Today, {\em LoGS} is structured around seven research and editorial groups, divided into geographical areas and led by one or several scholars. 
Each group is in charge of relaying the new publications, events, archives, and other types of materials to the project's e-mail address (glissantstudies{[}at{]}gmail.com). The information is then processed and added to the website. Each team leader has involved his or her institution in supporting the project and in contributing to the efforts of securing grants. The teams are located in Japan (Takayuki Nakamura, Waseda University), France (Sylvie Glissant and Loïc Céry, Institut du Tout-monde), Québec (Raphaël Lauro, Université de Montréal), the United States (Jeanne Jégousso, Hollins University, and Charly Verstraet, University of Alabama at Birmingham), Martinique (Axel Arthéron and Dominique Aurélia, Université des Antilles), Cuba (Camila Valdés, Casa de las Americas), and Italy (Elena Pessini, University of Parma, and Giuseppe Sofo, Università Ca' Foscari Venezia). In addition to the editorial board, the {\em Library of Glissant Studies} relies on the involvement of an advisory board and a team of contributors. The advisory board consists of renowned scholars in the field of Glissant studies, and their endorsement is a testament to the rigor of the project. The contributors are students, scholars, and critical thinkers who have sent at least five new references to the project. Their efforts are recognized by adding their picture and biography to our website. This is to say that the {\em Library of Glissant Studies} is first and foremost a collective project, where people share their references, meet one another during colloquia focusing on Glissant's work, and engage with each other on our social media platforms. Over the past few months, our team has provided documentation and support to curators for exhibits in Paris and Miami, created awareness of upcoming publications and translations (e.g., an upcoming Japanese version of the {\em Caribbean Discourse}, the first Spanish translation of Glissant's essay {\em The Philosophy of Relation}, and an updated English translation of Glissant's poems), and provided pedagogical support and materials to help professors find the ideal documents for their students. During this process, we had to think about how to build a digital bibliography and how to examine our digital practices in order to create and design a unique tool that is faithful not only to Glissant's work and philosophy but also to our wish to further research and collaboration while being as inclusive as possible. The {\em Library of Glissant Studies} was created to facilitate \quotation{la consultation des sources sans véritablement se prononcer, en apparence, sur les méthodes d'analyse qui doivent être employées} (\quotation{the consultation of sources without determining the methods of analysis that must be used}), to quote Pierre Mounier in his 2018 essay {\em Les humanités numériques: Une histoire critique}.\footnote{Pierre Mounier, {\em Les humanités numériques: Une histoire critique} (Paris: Éditions de la Maison des sciences de l'homme, 2018), 48.} Following this principle, the {\em Library of Glissant Studies} was designed as a collection without any interpretation to guide or influence the user, thereby leaving the interpretation and analysis of the documents and archives to the users. Concerning the project's sustainability, in the future we plan to consolidate our digital platform by gaining more team members and universities and by adding a digital initiatives and metadata librarian to our team.
This will ensure that the project is built to last, thus preserving all the materials and exclusive documents we are currently providing to the public. We have also recently undergone the process of copyrighting exclusive documents (inscribed books, archives, personalized poems written by Glissant, etc.) and watermarking photos in order to convince people to securely share their personal archives. This also serves to remind users of the source of the material and to pay tribute to contributors who took the time to send their personal documents to our team.
\placefigure{Works by Édouard Glissant: Inscribed Books}{\externalfigure[images/odell-jegousso/LibraryofGlissantStudies_Image1.png]}
Since doing so, we have seen an increase in the number of contributions and hope to continue to see these numbers rise. We are currently working to create individual archival fonds, which will collect in one place the documents related to Glissant that belong to a single individual. This new initiative has been successful and has encouraged several scholars and several of Glissant's former colleagues to share their materials with the project.
\placefigure{Works by Édouard Glissant: Manuscripts}{\externalfigure[images/odell-jegousso/LibraryofGlissantStudies_Image2.png]}
The {\em Library of Glissant Studies} has received an increasing amount of support over the past few months. It has been added to the {\em Caribbean Literary Heritage} digital collection, it has been included in several university libraries in Florida and Louisiana, and numerous newspaper articles about the project have been published in France, Japan, and the United States. However, there are still several ongoing challenges that we strive to address. For example, our team has thus far not been successful in accessing data in Eastern Europe, Eastern Asia, or several African countries, so we are working to establish and develop new partnerships with individuals in these localities in order to index more materials, which would make these works of literary criticism available to a greater number of users. We also hope that by presenting the project in as many forums as possible we will be able to receive more feedback to improve our search options, further develop our interface, and add forums for people to ask questions, exchange ideas, and discuss pedagogical strategies.
\thinrule
\page
\subsection{Jeanne Jégousso}
Jeanne Jégousso is the cofounder and codirector of the {\em Library of Glissant Studies}. She is an assistant professor of French at Hollins University (VA), where she teaches francophone literatures and culture of the Caribbean and the Indian Ocean. She has published articles in the journal {\em Nouvelles Études Francophones} and in the books {\em La Louisiane et les Antilles, une nouvelle région du monde} (Presses Universitaires des Antilles, 2019) and {\em Édouard Glissant: L'éclat et l'obscur} (Presses Universitaires des Antilles, 2019). She is also a coeditor of the collection {\em Teaching, Reading, and Theorizing Caribbean Texts} (Lexington, forthcoming). She is a member of the board of the Centre international d'études Édouard Glissant, and she is currently working on her first book titled \quotation{La poétique du dépassement dans les littératures contemporaines des Antilles et de l'Océan Indien.}
\subsection{Emily O'Dell}
Emily O'Dell received her PhD from Louisiana State University and is currently an English lecturer at Georgia College.
Her articles have been featured in {\em Postcolonial Interventions}, the {\em Louisiana Folklife Journal}, the {\em Louisiana Folklore Miscellany}, and {\em Atlantic Studies}, as well as in the collected volumes {\em La Louisiane et les Antilles, une nouvelle région du monde} (Presses Universitaires des Antilles, 2019) and {\em Utopia and Dystopia in the Age of Trump: Images from Literature and Visual Arts} (Rowman and Littlefield, 2019). She is also a coeditor of the collection {\em Teaching, Reading, and Theorizing Caribbean Texts} (Lexington, forthcoming).
\stopchapter
\stoptext
{ "alphanum_fraction": 0.8104862768, "avg_line_length": 331.0617283951, "ext": "tex", "hexsha": "309dcdf8296600927713923b185c6b3020a3159c", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2019-06-10T07:45:40.000Z", "max_forks_repo_forks_event_min_datetime": "2016-11-02T14:31:16.000Z", "max_forks_repo_head_hexsha": "640d45b1ab1068651d0b7f1b7a5882a000d8924d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "archipelagosjournal/archipelagos-public", "max_forks_repo_path": "utility/log/_issue04/odell-thinking-digital.tex", "max_issues_count": 37, "max_issues_repo_head_hexsha": "640d45b1ab1068651d0b7f1b7a5882a000d8924d", "max_issues_repo_issues_event_max_datetime": "2019-07-04T12:37:18.000Z", "max_issues_repo_issues_event_min_datetime": "2016-04-02T18:43:38.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "archipelagosjournal/archipelagos-public", "max_issues_repo_path": "utility/log/_issue04/odell-thinking-digital.tex", "max_line_length": 2229, "max_stars_count": 8, "max_stars_repo_head_hexsha": "640d45b1ab1068651d0b7f1b7a5882a000d8924d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "archipelagosjournal/archipelagos-public", "max_stars_repo_path": "utility/log/_issue04/odell-thinking-digital.tex", "max_stars_repo_stars_event_max_datetime": "2019-02-10T21:01:59.000Z", "max_stars_repo_stars_event_min_datetime": "2016-07-19T20:03:23.000Z", "num_tokens": 5892, "size": 26816 }
\chapter{NaCL}
In principle, there is no difference between building PixelLight for \ac{NaCL} on a \ac{MS} Windows host and on a Linux host, so the following description is not specific to \ac{MS} Windows. If you have never done \ac{NaCL} development before, you may want to have a look at \url{https://developers.google.com/native-client/devguide/tutorial}. Under Linux, if you are using environment variables as described within the appendix, don't forget that you have to start CMake-GUI from a terminal. Otherwise, CMake-GUI will not know anything about your environment variables.
\paragraph{Static Library}
For \ac{NaCL}, everything has to be built as a static library. The final application resides within a single \emph{nexe} module for each hardware platform.
\paragraph{Under Construction}
The \ac{NaCL} port is currently under construction.
\section{Prerequisites}
\begin{itemize}
\item{Installed \ac{NaCL} \ac{SDK}}
\end{itemize}
\paragraph{Path to the \ac{NaCL} \ac{SDK} - Windows}
Add a new environment variable \emph{NACL\_SDK} and set it to \emph{<NaCL directory>/pepper\_18} (example: \emph{C:/nacl\_sdk/pepper\_18}).
\paragraph{Path to the \ac{NaCL} \ac{SDK} - Linux}
This example assumes that the data has been extracted directly within the home (\emph{\textasciitilde}) directory. Open the hidden "\textasciitilde /.bashrc" file and add:
\begin{lstlisting}[language=sh]
# Important NaCL SDK path
export NACL_SDK=~/nacl_sdk/pepper_18
\end{lstlisting}
\begin{itemize}
\item{Open a new terminal so the changes from the step above have an effect}
\end{itemize}
\paragraph{make (for Windows)}
\begin{itemize}
\item{\emph{Make for Windows}: Make: GNU make utility to maintain groups of programs}
\item{Directly used by the CMake scripts under \ac{MS} Windows when using the \ac{NaCL} toolchain}
\item{\emph{cmake/UsedTools/make/make.exe} was downloaded from \url{http://gnuwin32.sourceforge.net/packages/make.htm}}
\end{itemize}
This tool can't be set within a CMake file automatically; there are several options:
\begin{itemize}
\item{Add \emph{\textless PixelLight root path\textgreater /cmake/UsedTools/make} to the \ac{MS} Windows \emph{PATH} environment variable *recommended*}
\item{Use a MinGW installer from e.g. \url{http://www.tdragon.net/recentgcc/} which can set the \emph{PATH} environment variable *overkill because only the 171 KiB \emph{make} is required*}
\item{Use CMake from inside a command prompt by typing for example (\emph{DCMAKE\_TOOLCHAIN\_FILE} is only required when using a toolchain) \\ *not really comfortable when working with it on a regular basis*
\begin{lstlisting}[language=sh]
cmake.exe -G"Unix Makefiles" -DCMAKE_MAKE_PROGRAM="<PixelLight root path>/cmake/UsedTools/make/make.exe" -DCMAKE_TOOLCHAIN_FILE="<PixelLight root path>/cmake/Toolchains/Toolchain-nacl.cmake"
\end{lstlisting}
}
\end{itemize}
\section{Create Makefiles and Build}
\label{NaCL:CreateMakefilesAndBuild}
Here's how to compile PixelLight by using the CMake-\ac{GUI}:
\begin{itemize}
\item{Ensure "make" (GNU make utility to maintain groups of programs) can be found by CMake (add for instance "\textless PixelLight root path\textgreater /cmake/UsedTools/make" to the \ac{MS} Windows \emph{PATH} environment variable)}
\item{Start "CMake (cmake-gui)"}
\item{"Where is the source code"-field: e.g. "C:/PixelLight"}
\item{"Where to build the binaries"-field: e.g.
"C:/PixelLight/CMakeOutput"} \item{Press the "Configure"-button} \item{Choose the generator "Unix Makefiles" and select the radio box "Specify toolchain file for cross-compiling"} \item{Press the "Next"-button} \item{"Specify the Toolchain file": e.g. "C:/PixelLight/cmake/Toolchains/Toolchain-nacl.cmake"} \item{Press the "Generate"-button} \end{itemize} The CMake part is done, you can close "CMake (cmake-gui)" now. All required external packages are downloaded automatically, see chapter~\ref{Chapter:ExternalDependencies}. \begin{itemize} \item{Open a command prompt and change into e.g. "C:/PixelLight/CMakeOutput" (\ac{MS} Windows: by typing "cd /D C:/PixelLight/CMakeOutput" -> "/D" is only required when changing into another partition)} \item{Type "make" (example: "make -j 4 -k" will use four \ac{CPU} cores and will keep on going when there are errors)} \item{(You now should have the ready to be used \ac{NaCL} static library files)} \end{itemize}
{ "alphanum_fraction": 0.7629885057, "avg_line_length": 53.7037037037, "ext": "tex", "hexsha": "de4ab17d91f5bf109234f52e5f2251aef4bb3e90", "lang": "TeX", "max_forks_count": 40, "max_forks_repo_forks_event_max_datetime": "2021-03-06T09:01:48.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-25T18:24:34.000Z", "max_forks_repo_head_hexsha": "d7666f5b49020334cbb5debbee11030f34cced56", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "naetherm/PixelLight", "max_forks_repo_path": "Docs/PixelLightBuild/NaCL.tex", "max_issues_count": 27, "max_issues_repo_head_hexsha": "43a661e762034054b47766d7e38d94baf22d2038", "max_issues_repo_issues_event_max_datetime": "2020-02-02T11:11:28.000Z", "max_issues_repo_issues_event_min_datetime": "2019-06-18T06:46:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PixelLightFoundation/pixellight", "max_issues_repo_path": "Docs/PixelLightBuild/NaCL.tex", "max_line_length": 387, "max_stars_count": 83, "max_stars_repo_head_hexsha": "43a661e762034054b47766d7e38d94baf22d2038", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ktotheoz/pixellight", "max_stars_repo_path": "Docs/PixelLightBuild/NaCL.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-20T17:07:00.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-08T15:06:14.000Z", "num_tokens": 1233, "size": 4350 }
\chapter{Summary of Architectural Considerations}
\label{chp:sum}
Integrating retroaction in event-sourced systems introduces a set of capabilities which are not feasible in traditional event-sourced architectures. A comparison of our conceptual considerations to existing systems is difficult, since the capabilities which we propose for event-sourced systems cannot be found in this combination in the event-sourced field, nor in related work. To the best of our knowledge, there exists no work which examines retroactive computing in event-sourced systems. Thus, \emph{it is not possible to evaluate our conceptual proposals and ideas against a comparable system in the event sourcing domain}. If we turn to related work, there are some domains which have similar ideas. But as illustrated in Chapter \ref{chp:related-work}, most of these domains utilize the recorded history of a program in a different manner than we do: decoupled from the application (debuggers), limited to passive retrospection (history-aware languages and algorithms), or on a meta level (VCSes). Oftentimes the recorded history of an application is only utilized for post hoc analysis in separate tools, after the execution has finished.
%
In Chapter \ref{chp:related-work}, we described related works and efforts to utilize retroaction in data structures and aspect-oriented programming languages. These approaches have parallels to our work, but do not consider problems of temporal and causal inconsistencies, necessary restrictions, or side effects. The problem of how to handle side effects in replays is mentioned in the related work concerning retroactive aspects as well, though there is no clear solution, whereas we have provided an in-depth examination of side effects in replays and suggested ways of recording and controlling them. If side effects are outsourced into separate, individual commands, they can be partially reused or reinvoked.
%
As we view it, these systems do not consider the challenges which arise from modifying and interacting with the application's history in a single environment. Consistency issues, causality violations, branching, and the control of side effects in replays are some of the challenges which we have attempted to solve in the two previous chapters. In our approach, applications can examine their state history retrospectively and modify this history as a means to explore alternative states. Event sourcing with CQRS is a perfect match for this, since it inherently captures state changes as commands and events. In this first part of the thesis, we identified the major challenges of retroaction in event-sourced systems. We discussed temporal and causal inconsistencies, potential solutions, and necessary restrictions. Furthermore, we provided an extensive overview of the limitations and constraints of retroaction in event-sourced systems. Next, we described two appropriate architectural modifications to event-sourced systems following a CQRS style of architecture.
%
We demonstrated the applicability of both architectures by implementing them as prototypes. For the unified architecture, we described an appropriate programming model and its implementation as a prototype.
%
In Chapter \ref{chp:concept}, we illustrated that the usage of retroactive computing is heavily dependent on the application domain and its domain-specific constraints. Some domains cannot take advantage of its full potential due to strict constraints caused by side effects, real-world coupling, or hidden causalities.
Other domains, on the other hand, benefit heavily from retroactive aspects, as retroaction allows for an entirely new perspective on application state.
%
As with the other retroactive constraints, how much can be made of retroaction and how high the informative value of retroactive modifications can be depends heavily on the domain model. If retroaction is taken into account from the start when building a system, the informative value of retroactive changes can be maximized.
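%
As a minimal, self-contained illustration of the replay mechanism summarized in this part (the names and the domain in this sketch are invented and do not correspond to the programming model of our prototypes), the following fragment rebuilds state by folding an event log and then replays a retroactively modified copy of that log in order to explore an alternative state:
\begin{verbatim}
# Minimal sketch of event-sourced state reconstruction and retroactive
# replay. Names (Deposited, Withdrawn, apply, replay) are illustrative only.

def apply(balance, event):
    """Apply a single event to the current state (an account balance)."""
    kind, amount = event
    if kind == "Deposited":
        return balance + amount
    if kind == "Withdrawn":
        return balance - amount
    raise ValueError("unknown event: " + kind)

def replay(events, initial=0):
    """Rebuild state by folding the full event history, oldest first."""
    state = initial
    for event in events:
        state = apply(state, event)
    return state

log = [("Deposited", 100), ("Withdrawn", 30), ("Deposited", 10)]
print(replay(log))       # current state: 80

# Retroactive exploration: copy the history, change one past event,
# and replay the branch without touching the original log.
branch = list(log)
branch[1] = ("Withdrawn", 50)
print(replay(branch))    # alternative state: 60
\end{verbatim}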
{ "alphanum_fraction": 0.8173549656, "avg_line_length": 61.6363636364, "ext": "tex", "hexsha": "518cf01390e23e88d59093c31f534c21e970c729", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4b7405aaa6d8db306854acf70565727e2e3f3d2f", "max_forks_repo_licenses": [ "Unlicense", "MIT" ], "max_forks_repo_name": "cmichi/masterthesis", "max_forks_repo_path": "thesis/chapters/pt1-results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4b7405aaa6d8db306854acf70565727e2e3f3d2f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense", "MIT" ], "max_issues_repo_name": "cmichi/masterthesis", "max_issues_repo_path": "thesis/chapters/pt1-results.tex", "max_line_length": 85, "max_stars_count": 3, "max_stars_repo_head_hexsha": "4b7405aaa6d8db306854acf70565727e2e3f3d2f", "max_stars_repo_licenses": [ "Unlicense", "MIT" ], "max_stars_repo_name": "cmichi/masterthesis", "max_stars_repo_path": "thesis/chapters/pt1-results.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-06T22:44:55.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-10T05:35:51.000Z", "num_tokens": 807, "size": 4068 }
%Suggested order of slides
% slides-linsvm-hard-margin
% slides-linsvm-hard-margin-dual
% slides-linsvm-soft-margin
% slides-linsvm-erm
% slides-linsvm-optimization

\subsection{Linear Hard Margin SVM}
\includepdf[pages=-]{../slides-pdf/slides-linsvm-hard-margin.pdf}

\subsection{Hard-Margin SVM Dual}
\includepdf[pages=-]{../slides-pdf/slides-linsvm-hard-margin-dual.pdf}

\subsection{Soft-Margin SVM}
\includepdf[pages=-]{../slides-pdf/slides-linsvm-soft-margin.pdf}

\subsection{SVMs and Empirical Risk Minimization}
\includepdf[pages=-]{../slides-pdf/slides-linsvm-erm.pdf}

\subsection{Support Vector Machine Training}
\includepdf[pages=-]{../slides-pdf/slides-linsvm-optimization.pdf}
{ "alphanum_fraction": 0.7705627706, "avg_line_length": 30.1304347826, "ext": "tex", "hexsha": "98c69fe09d821aab8b6ed0ef1f2e4751ba33e445", "lang": "TeX", "max_forks_count": 56, "max_forks_repo_forks_event_max_datetime": "2021-09-08T05:53:20.000Z", "max_forks_repo_forks_event_min_datetime": "2019-02-27T16:25:44.000Z", "max_forks_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "jukaje/lecture_i2ml", "max_forks_repo_path": "slides/linear-svm/chapter-order.tex", "max_issues_count": 323, "max_issues_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_issues_repo_issues_event_max_datetime": "2021-10-07T08:11:41.000Z", "max_issues_repo_issues_event_min_datetime": "2019-02-27T08:02:59.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "jukaje/lecture_i2ml", "max_issues_repo_path": "slides/linear-svm/chapter-order.tex", "max_line_length": 70, "max_stars_count": 93, "max_stars_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "jukaje/lecture_i2ml", "max_stars_repo_path": "slides/linear-svm/chapter-order.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-24T12:05:52.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-27T17:20:30.000Z", "num_tokens": 195, "size": 693 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[square,numbers]{natbib}
\bibliographystyle{abbrvnat}
\usepackage{xcolor}
\pagestyle{headings}
\usepackage{hyperref}
\usepackage{array}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
\usepackage{longtable}
\usepackage{romannum}
\usepackage{multirow}
\usepackage{graphicx} %package to manage images
\graphicspath{ {images/} }

% Here are our name and address.
\title{\textbf{Malware Research in PDF Files\\{\small A multidisciplinary approach in the identification of malicious PDF files.}}}
\author{Shir Bentabou \qquad Alexey Titov\\ \\ {\normalsize Advisors: Ph.D. Amit Dvir and Ph.D. Ran Dubin}\\ {\small{\textit{Ariel University, Department of Computer Science, 40700 Ariel, Israel}}}}
\date{}

%----
\begin{document}
\renewcommand{\thepage}{\arabic{page}}% Arabic page numbers
\pagecolor{yellow!20}

% title page
\begin{titlepage}
\begin{center}
\vspace*{1cm}
\Huge
\textbf{Malware Research in PDF Files}

\vspace{0.5cm}
\LARGE
A multidisciplinary approach in the identification of malicious PDF files.

\vspace{1.5cm}
\textbf{Shir Bentabou \qquad Alexey Titov} \\
Advisors: Ph.D. Amit Dvir and Ph.D. Ran Dubin

\vfill
\vspace{0.8cm}
\includegraphics[width=0.4\textwidth]{university}

\Large
Department of Computer Science\\
Ariel University\\
Israel\\
04.09.2019
\end{center}
\end{titlepage}

% title
\maketitle

% Here is the abstract.
\begin{abstract}
Cyber is a prefix used in a growing number of terms that describe new actions that are being made possible by the use of computers and networks. The main terms among these are cyber crime, cyber attack, and cyber warfare, all of which can be carried out by malware.\newline
\indent Malware, or malicious software, is any software intentionally designed to invade, damage, or disable computers, mobile devices, servers, clients, or computer networks. Malware does the damage after it is implanted or introduced in some way into a target's computer. Nowadays there are many distribution strategies for malware, and many programs are used as platforms. Some of these programs are in the user's everyday use, and seem pretty innocent. In our project we will focus on PDF files as a platform for malware distribution.\newline
\indent PDF, Portable Document Format, has been used worldwide for over 20 years and has become one of the leading standards for the dissemination of textual documents. A typical user uses this format due to its flexibility and functionality, but it also attracts hackers who exploit various types of vulnerabilities available in this format, causing PDF to be one of the leading vectors of malicious code distribution.
\indent Users normally open PDF files because they have confidence in this format, and thus allow malware to run due to vulnerabilities found in the readers. Therefore, many threat analysis platforms are trying to identify the main functions that characterize the behavior of malicious PDF files by analyzing their contents, in order to learn how to automatically recognize old and new attacks.
\indent The goal of our work is to test and analyze how the combination of three different approaches and the use of machine learning methods can lead to effective recognition of malware in PDF documents.
\end{abstract}

%----
\newpage
\tableofcontents
\newpage

\section{Introduction}
\indent \textbf{Cyber} is a prefix used in a growing number of terms that describe new actions that are being made possible by the use of computers and networks. As modern computers and networks evolved, a cat-and-mouse game developed alongside them, and it continues to this day. This cat-and-mouse game is cyber warfare, driven by the challenges of information security that result from the various cyber threats that exist.
\indent The first known cyber-attack was the Morris worm, created when a student at Cornell University wanted to know how many devices existed on the internet. He wrote a program that passed between computers, and this program asked each device it reached to send a signal to a control server that counted the signals sent to it. Nowadays the growth in the number of cyber-attacks is unimaginable; according to data published by Check Point Software Technologies, there were 23,208,628 attacks on February 23rd, 2019 alone.
\indent The growth in cyber-attacks is not only numeric, but also in the different kinds of attacks that exist. There are many different types of threats, and usually they consist of one or more kinds of attacks from the following list:
\begin{table}[htb]
\centering
\begin{tabular}[c]{|c|c|c|c|}
\hline
Advanced Persistent & \multirow{ 2}{*}{DDoS} & Intellectual Property & Rogue \\
Threats & & Theft & Software\\
\hline
\multirow{ 2}{*}{Phishing} & Wiper & \multirow{ 2}{*}{Spyware/Malware} & Unpatched \\
& Attacks & & Software \\
\hline
Trojans & Money Theft & MITM & \\
\hline
\multirow{ 2}{*}{Botnets} & Data & Drive-By Downloads & \\
& Manipulation & & \\
\hline
\multirow{ 2}{*}{Ransomware} & Data & \multirow{ 2}{*}{Malvertising} & \\
& Destruction & & \\
\hline
\end{tabular}
\caption{List of different kinds of cyber-attacks.}
\end{table}
\indent Phishing is one of the most popular distribution methods for malware. It is the fraudulent attempt to obtain sensitive information from a target by posing as a trustworthy entity in electronic communication. Phishing is typically carried out by e-mail spoofing, and often directs users to enter personal information on a fake website or sends them fake but credible documents, often PDFs.
\indent Threats such as these come from various sources. The profile of the attackers does not match one certain type and depends on the source's interests and available technology. The most common sources of cyber threats are: nation states or national governments, terrorists, industrial spies, organized crime groups, hacktivists and hackers, business competitors, and disgruntled insiders.
\indent \textbf{Malware}, or malicious software, is any software designed to serve any kind of cyber-attack. Software is considered malware based on the intent of the creator rather than its actual features. Different kinds of malware serve different purposes, and the malware does its damage after it is implanted or introduced in some way into a target's computer.
\indent The first recorded malware was named Elk Cloner, created by 15-year-old high school student Rich Skrenta as a prank, and affected Apple \Romannum{2} systems in 1982. This virus was disseminated by infected floppy disks and spread to all the disks that were attached to the system by attaching itself to the OS.
\indent What started as a teenage prank in 1982 has evolved into the wide range of varied malicious software in use today.
The main types of malware that exist today are:
\begin{itemize}
\item Trojan Horse – This type of malware infects a computer and usually runs in the background, sometimes for long periods of time. It gains unauthorized access to the affected computer and gathers information about the user and the machine it is installed on. The information gathered by the trojan is then sent to the attacker, normally to a server that stores the data for the attacker.
\item Virus - A virus is software usually hidden within another program that can produce copies of itself and insert them into other programs or files, and usually performs a harmful action.
\item Worm - Similar to viruses, worms self-replicate in order to spread to other computers over a network, usually causing harm by destroying data and files.
\item Spyware - Malware that secretly observes the computer user's activities without permission and reports them to the software's author.
\item Exploits - Malware that takes advantage of bugs and vulnerabilities in a system in order to allow the exploit’s creator to take control.
\item Ransomware - Malware that locks you out of your device and/or encrypts your files, then forces you to pay a ransom to get them back.
\end{itemize}
\indent And other forms… \newline \newline
\indent Writing malware of the kinds we have seen above is not, on its own, enough to carry out a cyber-attack. The malware must reach the target's system and operate on it in order to carry out an attack, and for that there are many distribution methods, some of which are explained below:
\begin{itemize}
\item Social Engineering - Socially engineered attacks exploit weaknesses of humans rather than weaknesses of software. Users are manipulated into running malicious binaries believing they are safe.
\item E-Mail – E-Mail attacks can exploit vulnerabilities in the e-mail software or in the libraries that the e-mail software uses. Moreover, viruses and trojans are often disguised as innocent e-mail attachments in phishing e-mails.
\item Network Intrusion - Network intrusion attacks are initiated by the attacker. The attacker finds some vulnerability in the network and takes advantage of it to infect some system in the network.
\item Links - Links typically lead to malicious sites that download malware to the victim's device when they load the page.
\item Infected Storage Devices - Storage devices can be used as a distribution method when they are infected with malware; when plugged into a device, they can transfer their contents to the device and infect it.
\item Drive-by Downloads – The unintended download of malware from the internet, either without the user knowing, or with their authorization but without them understanding the consequences.
\end{itemize}
\indent All the above methods aim to distribute malware to systems, normally without the users being aware that the process has happened at all. To achieve this, the platforms used in these methods are well-known file types and programs that users handle daily. A very popular file type for distributing malware is PDF, Portable Document Format. In our project we will focus on PDF files as a platform for malware distribution.
\indent \textbf{PDF}, Portable Document Format, is a file format that has been used worldwide for over 20 years and has become one of the leading standards for the dissemination of textual documents.
Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images, and other information needed to display it. PDF files are composed of a set of sections:
\begin{figure}[htb]
\centering
\begin{tabular}[c]{|c|}
\hline
Header\\
\hline
\\
Body\\
\\
\hline
‘xref’ Table\\
\hline
Trailer\\
\hline
\end{tabular}
\caption{The structure of a PDF file.}
\end{figure}
\begin{enumerate}
\item \textbf{PDF Header} - This is the first line of a PDF file, and it specifies the version of the PDF specification which the document uses (e.g. ``\%PDF-1.7'').
\item \textbf{PDF Body} - The body of the PDF document is composed of objects that typically include text streams, fonts, images, multimedia elements, etc. The body section is used to hold all the document's data that is shown to the user. Notice that streams are interesting to us from a security perspective because they can store a large amount of data and can thus store executable code that runs after some event.
\newpage
\item \textbf{Cross-Reference Table} – Also called the 'xref' table, this table contains the references to all the objects in the document. The purpose of a cross-reference table is that it allows random access to objects in the document, so we don't need to read the whole PDF document to locate an object. Each object is represented by one entry in the table, which is always 20 bytes long. If the document changes, the table is updated automatically.
\item \textbf{Trailer} – This section specifies how the application that reads the PDF document should find the cross-reference table and other special objects in the document. The trailer section also contains the EOF indicator.
\end{enumerate}
\indent Following is a simple example of a PDF file \cite{1}. This example gives us a notion of how PDF files look before they are parsed. The contents of every PDF file can be seen this way by opening it with any kind of text editor. In more complex files, it is possible to see different kinds of objects in the body section. Every object resides between the obj – endobj keywords and contains different kinds of data. For example, streams can contain large amounts of data (even executable code) and are normally compressed. Because of that, they are not readable without using some tool to decompress the data.
\begin{figure}[h]
\centering
\includegraphics[width=0.90\textwidth]{HWexample} \\
\caption{PDF file ‘Hello World’ example \cite{1}.}
\label{fig:HW}
\end{figure}
\indent PDF has been used widely around the world since its creation, and still is, because of two main advantages that it provides compared to other file formats: (1) \textbf{PDF files are compatible across multiple platforms} - A PDF reader presents a document independently of the hardware, operating system, and application software used to create the original PDF file. The PDF format was designed to create transferable documents that can be shared across multiple computer platforms. (2) \textbf{The software for viewing PDF files is free} - Most PDF readers, including Adobe Reader, are free to the public. This ensures that anyone you send the file to will be able to see the full version of your document.
\indent A typical user uses this format due to its flexibility and functionality, but it also attracts hackers for the same reasons: it enables cross-platform attacks, it is widely used (including by specific attack targets), and many readers are free.
Moreover, there are various types of vulnerabilities available in this format, causing PDF to be one of the leading vectors of malicious code distribution. The vulnerabilities available in PDF derive from PDF's support of various types of data in addition to text, such as JavaScript, Flash, media files, interactive forms, or links to external files and URLs. Moreover, a lot of the PDF's content (JS, URLs, etc.) may be invisible to the user opening it.
\indent In addition to that, PDF files are believed to be less suspicious than executable files. It is a common security practice for an IT administrator to define a policy that blocks executable files from staff e-mail attachments or web downloads, but it is rare to block PDF documents in such a manner. Users normally open PDF files because they have confidence in this format, and thus allow malware to run due to vulnerabilities found in the readers. Therefore, many threat analysis platforms are trying to identify the main functions that characterize the identity and behavior of malicious PDF files by analyzing their contents, in order to learn how to automatically recognize old and new attacks.
\section{Related Work}
\indent In this project we will focus on phishing carried out through PDF files, that is, making the user download a file believing it is safe, or directing the user to an unsafe website without their knowledge. This can be done easily with PDF because PDF files allow hyperlinks to be embedded in them for easy use, and also enable the use of JavaScript and PDF object streams in the body part of the file; this way code can be executed without the user knowing, or with their knowledge but without the awareness that it could be unsafe. URLs are a main method for this kind of phishing, as the user can click or hover over a hyperlink in a PDF file and be directed to a malicious website or to download a malicious file. Moreover, because of the JavaScript and streams that can be part of the PDF file, this can happen without the user knowing, in the background \cite{Bonan2018ML} \cite{JSSrndic2011Laskov}.
\indent In general, there are many tools available in the software market for the classification of files and URLs as malicious or benign. Anti-virus products normally work in a static way, producing signatures for the identification of malicious files/URLs. In addition to that, dictionaries are kept on remote servers. Signatures are stored in these dictionaries in order to identify malicious files/URLs, and are frequently updated. Most of the AVs also use blacklists; these lists contain URLs, IPs, and more, and every time the AVs identify communication with an address on a blacklist, they block it. Apart from the static tools, there are dynamic analysis tools using sandbox environments that examine the behavior of a file/URL in an isolated environment \cite{patil2018malicious}.
\indent In previous works, we have seen projects that were meant to identify malicious PDF files, and in the surveys we have read \cite{BGU2014survey} \cite{Baldoni2018survey}, there have been various attempts to identify malicious files by using a range of features of the PDF file (notice that features are also tags used in the objects of the file). These features were combined to create a feature vector for each file and provided as input to machine learning algorithms (of various kinds) that classified the files as benign or malicious \cite{torres2018malicious} \cite{Bonan2018ML}.
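\indent To make the feature-vector approach concrete, the following minimal sketch (our own illustration under simplifying assumptions, not the exact pipeline of any of the cited works) trains a Random Forest classifier on per-file tag counts using the scikit-learn library; the feature names and the toy data are hypothetical placeholders rather than a real labelled corpus.
\begin{verbatim}
# Minimal sketch: classify PDFs from tag-count feature vectors.
# The feature names and toy data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Each row is one PDF, each column the count of one structural tag.
FEATURES = ["/JS", "/JavaScript", "/Launch", "/EmbeddedFile",
            "/URI", "/AA", "/OpenAction"]

rng = np.random.default_rng(0)
# Toy data: benign files rarely contain the risky tags,
# malicious ones contain them more often.
benign = rng.poisson(lam=0.2, size=(200, len(FEATURES)))
malicious = rng.poisson(lam=2.0, size=(60, len(FEATURES)))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 60)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["benign", "malicious"]))
\end{verbatim}
\indent In practice, the labels would come from a curated dataset of benign and malicious PDF files and the feature columns from a static parser, but the training and evaluation steps would remain essentially the same.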
\indent Most of these surveys based their work on Didier Stevens's \cite{1} and Otsubo's \cite{OtsuboChecker} research and tools. The objective of these studies was to create algorithms that identify malicious files with low FP (False Positive) and FN (False Negative) rates and classify files efficiently, in as little time and with as few resources as possible.
\indent In Torres and De Los Santos's research \cite{torres2018malicious}, they combined four different PDF analysis tools, different sets of features (based on the 21 features Didier Stevens chose in his research \cite{1}), and three machine learning algorithms (Support Vector Machine, Random Forest, Multilayer Perceptron) to find the most efficient way to classify a PDF as malicious or benign using machine learning. The results they achieved are shown in the table below. In their research, the MLP algorithm showed the best results.
\begin{table}[htb]
\centering
\begin{tabular}[c]{|c|c|c|c|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy} & \textbf{Recall} & \textbf{F1-Score} & \textbf{ROC-AUC}\\
\hline
\textbf{SVM} & 0.50 & 0 & 0 & 0.70\\
\hline
\textbf{RF} & 0.92 & 0.94 & 0.92 & 0.98\\
\hline
\textbf{MLP} & 0.96 & 0.967 & 0.96 & 0.98\\
\hline
\end{tabular}
\caption{Test results of Torres and De Los Santos \cite{torres2018malicious}.}
\end{table}
\indent In another aspect of our work, we read articles and studies about URL analysis and how to classify a URL as malicious or benign. From what we have read, not only static analysis tools and blacklists were used: most of them also used a lexicographic approach to improve the success rate of URL classification.
\clearpage
\newpage
\indent In D. R. Patil and J. B. Patil’s research \cite{patil2018malicious}, they provided an effective hybrid methodology for classifying URLs. They used supervised decision tree learning classification models and performed the experiments on a balanced dataset. The experimental results show that, by including new features, the decision tree learning classifiers worked well on the dataset, achieving 98\%-99\% detection accuracy with very low FP and FN rates. Moreover, using the majority voting technique, the experiments achieved 99.29\% detection accuracy with very low FP and FN rates, which is better than the existing anti-virus and anti-malware solutions for URLs.
\indent In our project we will create an ensemble machine that will focus on three different areas of a PDF file. These three areas are: 1) the image of the first page of the PDF file, 2) the text of the first page of the PDF file, and 3) the features of the PDF file. Although the third area has already been researched quite thoroughly, the first two areas have not yet been researched, and therefore no specific studies about them have been found. This means we will enter a new field of research, hoping to make progress towards an efficient and effective way to identify these kinds of attacks.
\begin{figure}[h]
\centering
\includegraphics[width=1.00\textwidth]{machines} \\
\caption{Our project schema.}
\label{fig:machines}
\end{figure}
\section[PDF Based Attack Techniques]{PDF Based Attack Techniques \cite{BGU2014malicious}}
\indent It is known that there are many types of attacks that are based on PDF files. This is a subject that has been researched thoroughly in the past decade. Using a PDF file as an attack vector can be very simple.
As mentioned above, AVs do not always have the best solution for dealing with malicious PDF files, since they base their signatures on a hash of the file (such as MD5); even a small change in the file changes the hash, and the vulnerabilities can continue to be exploited. \indent There are many ways in which PDF files can be used as an attack vector. PDF readers have many exploitable vulnerabilities, and many characteristics of the PDF format make it a convenient attack vector. In this section we present the existing approaches to using PDF files to conduct an attack \cite{BGU2014malicious}. Social engineering plays a large role in most PDF based attacks: the user-visible content of the PDF file may exist only for social engineering purposes, while the content that is not visible to the user can be extremely malicious. \indent \underline{JavaScript code attacks}: PDF files can contain JavaScript code for legitimate purposes, for example multimedia content and form validation. The main indicator of JavaScript code embedded in a PDF file is the presence of the ‘/JS’ tag \cite{1} \cite{JSSrndic2011Laskov} \cite{Bonan2018ML} \cite{JAST2018}. Normally, the goal of malicious JavaScript in a PDF file is to exploit a vulnerability in the PDF reader in order to execute the embedded malicious JavaScript code. Downloading an executable file can also be carried out using JavaScript. Alternatively, JavaScript code can open a malicious website that performs a variety of malicious operations. \indent JavaScript code obfuscation is used legitimately to prevent reverse engineering for copyright reasons. However, it can also be used by attackers to conceal malicious JavaScript code, to prevent it from being recognized by signature-based or lexical analysis tools \cite{JSSrndic2011Laskov}, and to reduce readability for a human security analyst. Data in a PDF's streams can also be compressed, which is another way to hide malicious JavaScript code. \indent \underline{Embedded file attack}: A PDF file can contain other file types, including HTML, JavaScript, executables, Microsoft Office files, and even additional PDF files. An attacker can use this functionality to embed a malicious file inside a benign file. This way, the attacker can utilize the vulnerabilities of other file types in order to perform malicious activity. The embedded file can be opened when the PDF file is opened, using embedded JavaScript code or other techniques such as PDF tags (e.g. ‘/Launch’). Usually, embedded malicious files are obfuscated in order to avoid detection. Adobe Reader versions 9.3.3 and above restrict the file formats that can be opened, using a blacklist based on file extension. \indent \underline{Form submission and URL / URI attacks}: Adobe Reader supports submitting a PDF form from a client to a specific server using the ‘/SubmitForm’ command. Adobe then generates a file from the PDF in order to send the data to a specified URL, and if the URL belongs to a remote web server, that server can respond. An attack can be performed by a simple request to a malicious website that opens automatically in the web browser, where the malicious website can exploit a vulnerability in the user's browser. Security mechanisms such as the protected mode of Adobe Reader can be disabled easily. Moreover, a URI address can be used to refer to any file type located remotely (both executable and non-executable files).
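\indent To make these tag-based indicators concrete, the following minimal sketch (our own illustration, not one of the cited tools) counts a few of the suspicious names discussed above directly in the raw bytes of a PDF file. Real scanners such as PDFiD also handle obfuscated names and compressed streams, which this sketch ignores.

\begin{verbatim}
# Indicator tags discussed in this section (non-exhaustive, illustration only).
SUSPICIOUS_TAGS = [b"/JS", b"/JavaScript", b"/AA", b"/OpenAction",
                   b"/Launch", b"/EmbeddedFile", b"/RichMedia", b"/SubmitForm"]

def count_suspicious_tags(path):
    """Return a dict mapping each indicator tag to its number of occurrences
    in the raw (uncompressed) bytes of the file."""
    with open(path, "rb") as f:
        data = f.read()
    return {tag.decode(): data.count(tag) for tag in SUSPICIOUS_TAGS}

if __name__ == "__main__":
    print(count_suspicious_tags("sample.pdf"))
\end{verbatim}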
\section{Our Work} \indent The attacks mentioned in the previous section can be visible to the user and require some action from the user, or be completely invisible and happen in the background without the user knowing. In our project we focus on three ways of identifying malicious attacks in PDF files: \begin{enumerate} \item \underline{Preview of the files} – Anti-viruses work by creating hashes (such as MD5) for the malicious files they find. For every malicious file they detect, they create a hash and store it in their databases, so that if they encounter the file again they can block it or warn about it. The problem with this approach is that if a single attribute of the file changes, the hash changes too, so a file only needs to be changed slightly in order to evade AV detection. PDF files can be opened for an initial preview, without opening the file itself; the initial preview shows the first page of the PDF file. Note that ‘previewing’ the file is not proven to be safe.\newline Since AVs check files by a hash that can easily be changed, we want to create a detection method based on the content of the file. That is, we want to extract previews of the PDF file's first page (in the form of images) and detect malicious files by their content. Our aim is to create an efficient image similarity engine that detects files by their image. \item \underline{Text detection} – Text detection, similar to the image detection explained above, can also help detect malicious files by their content. Normally, malicious PDF files contain rather innocent-looking content in order to appear credible and convince the user to open them. Therefore, we can learn about the characteristics of malicious PDF files from the text inside them, in much the same way as spam filters for e-mails work. \item \underline{PDF tags and features} – These are the structural tags and features of a PDF file that can give us a lot of information about the file. Normally they are invisible to the ordinary user who views the PDF file in a reader. These tags describe all of the PDF's content and can give us information about the JavaScript code inside the file, links and URLs, obfuscated code, the structure of the PDF file, and more. \end{enumerate} \indent All the samples used in our work were received from our advisor, Dr. Ran Dubin, CEO \& Co-Founder of SNDBOX, an artificial intelligence malware research platform. The dataset contained 9,577 samples overall, made up of 9,158 benign samples and 419 malicious samples. \section{Existing Tools} \indent During our work we found various existing tools that helped our research; many of them were found while searching for solutions to specific problems we encountered along the way. In this section we describe all the tools we used. \begin{enumerate} \item \underline{PDFiD} – This tool was developed by Didier Stevens \cite{1} and is meant to help differentiate between malicious and benign PDF files. It is a simple string scanner written in Python: it scans a PDF file for specific tags, checks whether they are included in the file, and returns the expressions found together with the number of times each appears in the PDF file. \item \underline{JAST} – This tool was developed as part of the research by Fass et al. \cite{JAST2018}, in order to detect malicious JavaScript instances.
This solution combines the extraction of features from the code's abstract syntax tree with a random forest classifier. It is based on a frequency analysis of specific patterns which are predictive of either benign or malicious samples. The analysis made by this tool is entirely static, and it yields a high detection accuracy of almost 99.5\%. The tool also supplies a simple classification of JS code: whether a (text) file contains JS code at all, and if it does, whether that code is obfuscated. \item \underline{AnalyzePDF} – This tool analyzes PDF files by looking at their characteristics in order to add some intelligence about the file's nature, i.e. whether it is malicious or benign. It has a module that calculates the entropy of a PDF file: the overall entropy, the entropy within the streams of the PDF file, and the entropy outside the PDF file's streams \cite{AnalyzePDF2014}. \item \underline{PeePDF} - PeePDF is a Python tool made to explore PDF files in order to find out whether a file can be harmful or not. It aims to provide the researcher with all the components necessary for PDF analysis. The tool can extract all the objects in a PDF that contain suspicious elements (such as JS code), and it supports (compressed) object streams and encrypted files. It can easily extract all the JavaScript from a PDF file \cite{Peepdf2016}. \end{enumerate} \section[First phase: Preparations]{First phase: \\ Preparations} \indent As mentioned before, the overall idea of the project is to create three machines, each of which is a classifier on its own. Each machine classifies a PDF file as malicious or benign based on some kind of information about the file. The first machine classifies the file based on the image of its first page, the second machine based on the text of its first page, and the third machine based on the features and metadata of the file. The fourth machine is an ensemble machine, and it classifies a PDF file based on the results of the three machines described above. \indent In order to create the machines, we first needed to make some preparations, that is, to write the code that provides the machines with the information they need from a PDF file. The machines use this information in order to classify the files. \indent As explained above, in this phase we extracted the information needed by our machines from the PDF files. First we defined what information was needed for every machine, and then we wrote code that extracts each type of information from a PDF file. \begin{table}[htb] \centering \begin{tabular}{|p{3.5cm}|p{3.5cm}|p{3.5cm}|} \hline \centering{\textbf{First Machine: Image Classifier}} & \centering{\textbf{Second Machine: Text Classifier}} & \centering{\textbf{Third Machine: Feature Classifier}} \tabularnewline \hline \raggedright{Extract preview of PDF file} & \raggedright{Extract text from PDF file} & \raggedright{Extract telemetry} \tabularnewline \hline \raggedright{} & \raggedright{Extract text from image} & \raggedright{Extract URLs} \tabularnewline \hline \raggedright{} & \raggedright{} & \raggedright{Extract JavaScript} \tabularnewline \hline \end{tabular} \end{table} \indent In order to create a machine that classifies the PDF file by image, we have to extract an image from the PDF file. At this stage of the project we decided to extract the preview of the first page of the file.
For that, there are Python libraries made specifically for processing PDF files, which eased our work. We used ‘pdf2image’ to convert a PDF page to a ‘PIL’ image object, and ‘PIL’ to add image processing capabilities to our code. \indent In order to create a machine that classifies the PDF file by text, we have to extract text from the PDF file. At this stage of the project we decided to extract the text from the first page of the file. To do that we used ‘PDFMiner’ and ‘pyPDF2’, tools that extract information from PDF files, including text. That said, if the first page (or only page) of the PDF file holds an image, we have to extract the text from that image, another case we dealt with. For that we used ‘pytesseract’, an optical character recognition (OCR) tool for Python that recognizes and reads text embedded in images. Furthermore, we used ‘cv2’ in order to return an image of a specific page in the file. \indent In order to create a machine that classifies the PDF file by its features, we have to extract all the features we are interested in from the PDF file. Many suitable tools already exist, and we used them to simplify our work: ‘PDFiD’ to research and choose the features we want, ‘PeePDF’ to extract the JS in the file and information about it, and ‘pyPDF2’ to extract URLs from the PDF file. \section[Second phase: Image Based Classification Machine]{Second phase: \\ Image Based Classification Machine} \indent In this phase, we wanted to create an image-based classification machine. This machine should classify a PDF file by the preview of the file alone. The idea of classifying a PDF file by its image has not appeared in any research we have read, making our work, in some sense, a first. \subsection{Creating the vector} \indent When we started researching the classification of a PDF file by image and saw that no research had been done on it, we looked at much simpler examples of image classification. According to an article by Adrian Rosebrock \cite{HistogramImage}, the best strategy for working with a picture is to use a histogram rather than the raw pixels. The histogram made much more sense in our case, since treating the picture pixel by pixel did not give us information we could work with. \indent While working on this machine, following a tip from our advisor, we tried to find meaningful characteristics that would indicate whether a PDF is malicious or not. While searching, we found a characteristic shared by many files and chose to use the blurriness of the image: we have seen many malicious samples containing blurred images together with a message telling the user to download some update in order to see the document. To calculate the blurriness of the image, we used the Laplacian method with an existing Python library (‘cv2’). This method returns a numeric value; if that value is under 150, the image is considered blurry. Figure \ref{fig:blur} shows an example of a blurry PDF document and a non-blurry PDF document together with the numeric values returned by the Laplacian calculation (7.02 as opposed to 4837.3).
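\indent As a rough illustration of this approach, the following sketch (assuming ‘pdf2image’ with a working poppler installation, OpenCV and NumPy; function and variable names are ours) renders one page and combines an 8x8x8 colour histogram with the Laplacian blurriness value into a single vector, along the lines described here and formalized below.

\begin{verbatim}
import numpy as np
import cv2
from pdf2image import convert_from_path

def image_vector(pdf_path, page=1):
    """512 normalized RGB histogram bins plus one blurriness value."""
    # Render only the requested page of the PDF as a PIL image.
    pil_page = convert_from_path(pdf_path, first_page=page, last_page=page)[0]
    bgr = cv2.cvtColor(np.array(pil_page), cv2.COLOR_RGB2BGR)

    # 8 bins per channel -> 8 * 8 * 8 = 512 histogram values.
    hist = cv2.calcHist([bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()

    # Variance of the Laplacian as a blurriness measure
    # (low values, e.g. below ~150, suggest a blurred page).
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.Laplacian(gray, cv2.CV_64F).var()

    return np.append(hist, blur)
\end{verbatim}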
\begin{figure}[h] \centering \includegraphics[width=1.00\textwidth]{blur} \\ \caption{\textit{Left side}: Blurry PDF document, \textit{Right side}: Non-blurry PDF document} \label{fig:blur} \end{figure} \indent At this stage, we had all the information needed to construct the image vector that the machine receives as its input. We set the overall size of the image vector to 513. The first 512 indexes hold the picture's histogram, in other words the colors in the image (RGB – 8*8*8 = 512 bins). The last index of the vector holds the numeric value returned by the Laplacian blurriness calculation. \subsection{Implementing the machine} \indent When we reached the phase of implementing the image-based classification machine, our advisors recommended using K-Means-Clustering (KMC), a popular method for cluster analysis. K-Means-Clustering partitions its observations into k clusters, assigning each observation to the cluster with the nearest mean. \indent We decided to implement the machine with KNN as well, in order to have something to compare KMC with, and to show that what the advisors recommended was indeed the better solution. In later phases of the project we will see that KMC is indeed the better choice. \indent We chose KNN because it is a simple machine learning algorithm that classifies each sample into one of two groups, benign or malicious. We wanted to check whether a simple ‘yes-no’ classifier would give better results than the clustering algorithm we were recommended to implement. The logic behind preferring clustering is that, based on the image alone, there are very few cases in which a ‘yes-no’ classifier can determine whether the file is malicious. A clustering algorithm, on the other hand, places a sample in the same category as other similar samples, and is therefore a better ‘judge’ of a sample. \subsubsection{KNN Implementation} \indent We start by explaining the KNN implementation. For this machine we divided the dataset as follows: 80\% of the samples were used for training and 20\% for testing. At first, we had 253 malicious samples in our dataset; we completed the dataset by adding 253 benign samples, so that it consisted of half malicious and half benign samples, 506 samples in total. We ran the machine with default parameters using 1, 3, 5 and 7 neighbors as the chosen K, in order to find the best result. \begin{table}[htb] \centering \begin{tabular}[c]{|c|c|} \hline \textbf{K} & \textbf{Accuracy}\\ \hline 1 & 90.55\%\\ \hline 3 & 89.76\%\\ \hline 5 & \textbf{91.34\%}\\ \hline 7 & 89.76\%\\ \hline \end{tabular} \caption{Accuracy for the different values of K in the KNN implementation.} \end{table} \indent As can be seen above, the best result was obtained with K=5, with 91.34\% success in classifying the test samples. \indent As the project advanced, we received more malicious samples and built a dataset of 838 samples, half malicious and half benign. We ran the KNN image-based machine again on the whole dataset and reached 87.5\% accuracy with K=5 neighbors. \subsubsection{KMC Implementation} \indent In order to reach the best results with KMC, we wanted to check what would be the best way of defining the number of clusters the algorithm should use; a minimal sketch of both approaches is shown below, and the choice of the number of clusters is discussed right after it.
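\indent A minimal sketch of the two approaches, assuming scikit-learn, where X is the matrix of 513-value image vectors from the previous subsection and y holds the benign/malicious labels (names are ours):

\begin{verbatim}
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

def knn_baseline(X, y, k=5):
    """KNN baseline with the 80/20 train/test split used above."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

def kmeans_inertia(X, max_k=25):
    """Inertia for k = 1..max_k; the 'elbow' of this curve is one way
    to pick the number of clusters, as discussed next."""
    return [KMeans(n_clusters=k, random_state=0).fit(X).inertia_
            for k in range(1, max_k + 1)]
\end{verbatim}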
To choose the number of clusters, we used the ‘Elbow Method’, a method designed to help find the appropriate number of clusters in a dataset, i.e. the point at which adding another cluster no longer improves the results. We used two scaling techniques when calculating the ideal number of clusters, ‘MinMaxScaler’ and ‘StandardScaler’, in order to verify that we were reaching the best number of clusters for our machine. The ideal number of clusters depends on the size of the dataset and on the samples themselves: with 506 samples the ideal number of clusters was 6, and with 838 samples it was 19 (as seen in Figure \ref{fig:KMC}). Within the clusters we saw malicious and benign files grouped together, in line with the basic logic, explained above, of clustering the files' images by similarity. \begin{figure}[h] \centering \includegraphics[width=1.00\textwidth]{KMC} \caption{Graph of the different clusters in the KMC implementation.} \label{fig:KMC} \end{figure} \indent As the results above show, basing our decision on the preview of the PDF file alone is clearly not enough. Even the most basic hacker can easily dodge an image-based classification, and there are plenty of attacks that do not need to change anything in the PDF's visible content. Examples can be seen in the following paper \cite{davide2019malicious}. \section[Third phase: Text Based Classification Machine]{Third phase: \\ Text Based Classification Machine} \indent In this phase, we wanted to create a text-based classification machine. This machine should classify a PDF file by the text it contains alone. The idea of classifying a PDF file by its text has not been implemented in the previous research we have read, although the general idea of checking a file by its contents does exist. \indent The first step of this phase, carried out as part of the first phase, was to extract the text from the first page of the PDF file. In some cases we noticed that an excessive number of characters, or no characters at all, were read from a single page of the PDF file. The reason is that a page may consist of a stream representing an image or other graphical content, so reading the stream as a whole does not give us the actual content of the file. In these cases we decided to extract the text from the image of the first page instead, as explained in the first phase. \subsection{Creating the vector} \indent The next step is defining the vector that this machine receives as the input for every sample. We have seen many articles about creating a text vector, and normally this is done using word embeddings. Word embedding is a collective name for a set of language modeling and feature learning techniques in natural language processing, in which words or phrases from the vocabulary are mapped to vectors of real numbers. We decided to try three different word embedding techniques in our project and compare them. \indent The word embedding techniques we used were ‘doc2vec’, ‘word2vec’, and ‘TF-IDF’. ‘Word2vec’ can be given pre-trained dictionaries; after some research we decided to use ‘GoogleNews-vectors-negative300-SLIM.bin.gz’, which contains only words, in order to meet the memory limitations we had. We fixed the size of the vector to 300.
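\indent As an illustration of how a page's text might be turned into such a fixed-size vector with ‘word2vec’ (a sketch only; the exact preprocessing in the project may differ, and the model path is the slim GoogleNews file mentioned above):

\begin{verbatim}
import numpy as np
from gensim.models import KeyedVectors

# Pre-trained 300-dimensional vectors (path assumed as above).
KV = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300-SLIM.bin.gz", binary=True)

def doc_vector(text, kv=KV, dim=300):
    """Average the word2vec vectors of the known words in the extracted text."""
    vectors = [kv[w] for w in text.split() if w in kv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
\end{verbatim}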
\begin{table}[htb] \centering \begin{tabular}{|p{3.5cm}|p{3.5cm}|p{3.5cm}|} \hline \centering{Doc2Vec} & \centering{Word2Vec} & \centering{TF-IDF}\tabularnewline \hline \raggedright{Default vector made by the library.} & \raggedright{Default vector with the Google News vocabulary.} & \raggedright{Vector built from word counts, with default parameters.}\tabularnewline \hline \end{tabular} \caption{Summary of the three word-embedding vectors.} \end{table} \subsection{Implementing the machine} \indent When implementing the text-based machine, we followed our advisor's advice and chose logistic regression as our classification algorithm. We ran logistic regression on all three word embedding techniques. Specifically, for ‘doc2vec’ we used two different models, Distributed Bag of Words (DBOW) and Distributed Memory with Averaging (DMA), as well as the combination of both in one model; the best ‘doc2vec’ results came from the combination of the two models. \subsection{Results} \indent The following table presents the results of the logistic regression algorithm on all three word embedding models. The dataset was split 80\% for training and 20\% for testing. \begin{table}[htb] \centering \begin{tabular}[c]{|c|c|} \hline \centering{\textbf{Word Embedding Model}} & \centering{\textbf{Accuracy}} \tabularnewline \hline \centering{Doc2Vec (DBOW + DMA)} & \centering{83.3 \%}\tabularnewline \hline \centering{Word2Vec} & \centering{\textbf{98.4 \%}}\tabularnewline \hline \centering{TF-IDF} & \centering{\textbf{98.4 \%}}\tabularnewline \hline \end{tabular} \caption{Results of logistic regression with the different word embedding models.} \end{table} \indent As can be seen above, ‘word2vec’ and ‘TF-IDF’ gave us the best (and identical) results. We decided to run additional machine learning algorithms on them in order to try to reach better results. We chose the following algorithms: Random Forest Classifier, K-Nearest-Neighbors, Support Vector Machine, Multilayer Perceptron, and Naïve-Bayes. We ran these algorithms on a dataset of 506 samples (half malicious and half benign), with the following results. \begin{table}[htb] \centering \begin{tabular}{|p{2.5cm}|p{2.5cm}|p{2.5cm}|p{2.5cm}|} \hline \centering{\textbf{Word2Vec}} & \centering{\textbf{Accuracy}} & \centering{\textbf{TF-IDF}} & \centering{\textbf{Accuracy}} \tabularnewline \hline \centering{RF Classifier} & \centering{96.8 \%} & \centering{RF Classifier} & \centering{96.8 \%}\tabularnewline \hline \centering{KNN} & \centering{96 \%} & \centering{KNN} & \centering{63.4 \%}\tabularnewline \hline \centering{SVM} & \centering{95.2 \%} & \centering{SVM} & \centering{\textbf{99.2 \%}}\tabularnewline \hline \centering{MLP} & \centering{97.6 \%} & \centering{MLP} & \centering{97.6 \%}\tabularnewline \hline \centering{NB} & \centering{88.8 \%} & \centering{NB} & \centering{96.8 \%}\tabularnewline \hline \end{tabular} \caption{Results of the different machine learning algorithms on ‘word2vec’ and ‘TF-IDF’.} \end{table} \indent It is important to emphasize that with ‘TF-IDF’, every machine learning algorithm except Naïve-Bayes recognized all the malicious samples. As can be seen above, except for the KNN algorithm we received the best results with ‘TF-IDF’, and therefore decided to continue with this model.
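\indent A minimal sketch of the ‘TF-IDF’ pipeline we settled on, assuming scikit-learn with default vectorizer parameters (texts holds the extracted page texts and y the labels; other classifiers such as SVM or MLP can be swapped in for the logistic regression):

\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def tfidf_text_machine(texts, y):
    """TF-IDF features followed by logistic regression, 80/20 split as above."""
    X_tr, X_te, y_tr, y_te = train_test_split(texts, y, test_size=0.2,
                                              random_state=0)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_tr)
    return model, model.score(X_te, y_te)
\end{verbatim}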
We ran the machine again on 838 samples (half malicious and half benign) and compared the results. The dataset was split the same way, 80\% for training and 20\% for testing. \begin{table}[htb] \centering \begin{tabular}{|p{3.0cm}|p{3.0cm}|p{3.0cm}|} \hline \centering{\textbf{Algorithm}} & \centering{\textbf{506 Samples}} & \centering{\textbf{838 Samples}} \tabularnewline \hline \centering{Logistic Regression} & \centering{98.41 \%} & \centering{\textbf{97.46 \%}}\tabularnewline \hline \centering{KNN} & \centering{63.49 \%} & \centering{93.03 \%}\tabularnewline \hline \centering{MLP} & \centering{97.61 \%} & \centering{\textbf{97.46 \%}}\tabularnewline \hline \centering{NB} & \centering{96.82 \%} & \centering{93.03 \%}\tabularnewline \hline \centering{RF Classifier} & \centering{96.82 \%} & \centering{95.56 \%}\tabularnewline \hline \centering{SVM} & \centering{\textbf{99.20 \%}} & \centering{96.83 \%}\tabularnewline \hline \end{tabular} \caption{Results of ‘TF-IDF’ with all the algorithms we tried, on the two datasets.} \end{table} \indent The best results up to this stage came from combining MLP or LR with ‘TF-IDF’, so we decided to implement the machine with LR and ‘TF-IDF’. That said, basing our decision on the text of the PDF file alone is clearly not enough. Even the most basic hacker can easily dodge a text-based classification, and there are plenty of attacks that do not need to change anything in the PDF's visible content. Examples can be seen in the following paper \cite{davide2019malicious}. \section[Fourth phase: Features Based Classification Machine]{Fourth phase: \\ Features Based Classification Machine \\(JS, URL, objects and streams)} \indent In the fourth phase, we created a machine that classifies PDF samples as malicious or benign based on their features. This type of machine is the most popular in the literature, and many of the studies we have read \cite{1} \cite{torres2018malicious} \cite{Bonan2018ML} \cite{JSSrndic2011Laskov} \cite{Hamon2013malicious} focus on this approach. Most studies that focused on a file's features did not describe the features they used in detail, so we will give some background about the features we have chosen. Our features come from four different areas: PDF tags, JavaScript objects \& streams, URLs, and entropy. \subsection{Creating the vector} \indent The vector we created held 32 values; each field of the vector contains a numeric value for one of the features we chose. The 32 features are made up of 12 PDF tags, 10 features that characterize the URLs in the file, 7 features that characterize the JS, objects and streams in the file, and 3 features regarding the entropy of the file. \subsubsection{PDF Tags} \indent To extract the PDF tags we were interested in, there are, as mentioned in the project proposal chapter, many existing tools that have been used in previous studies and were available to us. We chose to work with the ‘PDFiD’ tool by Didier Stevens \cite{1}, which counts the number of appearances of specific tags in the PDF file. The tags we chose are: \begin{enumerate} \item \textbf{Obj} - This tag opens an object in the PDF. \item \textbf{Endobj} - This tag closes an object in the PDF. \item \textbf{Stream} - This tag opens a stream in the PDF. \item \textbf{Endstream} - This tag closes a stream in the PDF. \item \textbf{/ObjStm} - Counts the number of object streams.
An object stream is a stream object that can contain other objects, and can therefore be used to obfuscate objects (by using different filters). \item \textbf{/JS}, \textbf{/JavaScript} - These tags indicate that the PDF document contains JavaScript. Almost all malicious PDF documents found in the wild (by Didier Stevens) contain JavaScript (to exploit a JavaScript vulnerability and/or to execute a heap spray). JavaScript can also be found in PDFs without malicious intent. \item \textbf{/AA}, \textbf{/OpenAction} - These tags indicate an automatic action to be performed when the file is viewed. All malicious PDF documents with JavaScript seen in the wild (by Didier Stevens) had an automatic action to launch the JavaScript without user interaction. The combination of an automatic action and JavaScript makes a PDF document very suspicious. \item \textbf{/RichMedia} - This tag can imply the presence of Flash in the file. \item \textbf{/Launch} - This tag counts launch actions in the file. \item \textbf{/AcroForm} - This tag is present if a document contains form fields, for instance when the XML Forms Architecture is used. \end{enumerate} \indent The first four features were chosen specifically in order to find malformations in the PDF file format; many of the studies we have read explicitly link malformations in the files to malicious intent. \subsubsection{URLs} \indent In order to extract the URL features we were interested in, we wrote a simple Python parser. Thanks to the well-defined format of URLs and the specific features we were after, this was not too hard to do. The features we chose are: \begin{enumerate} \item Overall number of URLs in the PDF file. \item Number of distinct URLs in the PDF file. \item Number of URLs containing the expression "File://". \item Longest length of the string after the second slash in the URL. \item Number of URLs that contain another URL in them. \item Number of URLs that contain encoded characters in the hostname. \newline(Example: http://www.\%63\%6c\%69\%66\%74.com) \item Number of URLs that contain IPs. \item Number of URLs that contain suspicious expressions (such as: download, php, target, loader, login, =, ?, \&, +). \item Number of URLs that contain unusual ports after the colon (':'). \end{enumerate} \subsubsection{JavaScript, Objects \& Streams} \indent In order to extract the JS from the files we used the existing tool ‘PeePDF’. We used it to extract all the JavaScript code found in each sample, and then searched the extracted code for the features we were interested in. The JavaScript was extracted from the objects and streams of the files. \indent Another tool we used is ‘JAST’. This tool classifies whether a string contains JavaScript or not, and can recognize obfuscated JavaScript code; this helped us extract an important feature about the JavaScript in the PDF file. \indent The features we chose are: \begin{enumerate} \item Number of objects with JavaScript in them in the PDF file. \item Number of lines of JavaScript code in the PDF file. \item Kind of JavaScript in the file: no JS, regular JS, obfuscated JS. \item Four features for special expressions: \begin{enumerate} \item Number of 'eval' expressions found. \item Number of backslash characters found. \item Number of '/u0' expressions found. \item Number of '/x' expressions found.
\end{enumerate} \end{enumerate} \subsubsection{Entropy} \indent Entropy is a measure of the lack of order, or unpredictability, of the content of a PDF file. To extract this feature from our samples we used the existing tool ‘AnalyzePDF’, which calculates the entropy of the files. This feature was suggested to us by our advisors. We decided to measure the following: \begin{enumerate} \item The overall entropy of the PDF file's content. \item The entropy inside the PDF file's streams. \item The entropy outside the PDF file's streams. \end{enumerate} \indent All the features we chose were based on our own decisions about what seemed interesting and important, on tips from our advisors, and on previous studies and articles. All the tools mentioned in this stage are introduced in the 'existing tools' section of this work. \subsection{Implementing the machine} \indent At first, we ran a few different machine learning algorithms on our 32-value vector to find the algorithm that gives the best results. As with the second machine, we chose the following algorithms: Logistic Regression, K-Nearest-Neighbors, Multilayer Perceptron, Random Forest Classifier and Support Vector Machine. We ran the machine twice, on 506 samples and on 838 samples; both datasets consisted of half malicious and half benign samples, and both were split the same way: 80\% for training and 20\% for testing. \subsection{Results} \begin{table}[htb] \centering \begin{tabular}{|p{3.0cm}|p{3.0cm}|p{3.0cm}|} \hline \centering{\textbf{Algorithm}} & \centering{\textbf{506 Samples}} & \centering{\textbf{838 Samples}} \tabularnewline \hline \centering{Logistic Regression} & \centering{84.25 \%} & \centering{82.73 \%}\tabularnewline \hline \centering{KNN} & \centering{59.84 \%} & \centering{70.23 \%}\tabularnewline \hline \centering{MLP} & \centering{57.48 \%} & \centering{61.90 \%}\tabularnewline \hline \centering{RF Classifier} & \centering{\textbf{92.12 \%}} & \centering{\textbf{94.64 \%}}\tabularnewline \hline \centering{SVM} & \centering{48.81 \%} & \centering{57.73 \%}\tabularnewline \hline \end{tabular} \caption{Results of the feature-based machine.} \end{table} \indent Note that these results are lower than those of most studies we have read that based their machine on the features of the PDF file \cite{torres2018malicious}. After receiving these disappointing results, we came up with ways to improve this machine, but did not implement them; the ideas are listed in the 'future work' section. A machine based on features alone has, like the previous machines, also proved insufficient against evasion attacks \cite{Bonan2018ML}. \section[Fifth phase: Creating the ensemble classifier]{Fifth phase: \\ Creating the ensemble classifier} \indent The aim of this final phase was to create the fourth machine, an ensemble machine that provides the final classification of a PDF file, based on the classifications made by the three previous machines. \subsection{Creating the vector} \label{sec:vector5phase} \indent To implement this, we decided there were a few combinations worth trying and comparing before settling on the best strategy for this machine. These strategies concern the vector that the ensemble machine receives as its input.
The vector of this machine could be assembled in quite a few different ways: \renewcommand{\labelitemi}{$\textendash$} \begin{itemize} \item The vector could contain only 3 fields, with the decisions of the three previous machines. \item The vector could contain all the vectors from the previous stages (513 fields for the image, 300 fields for the text, 32 fields for the features = 845 fields overall). \item The vector could contain part of the vectors from previous stages and part of the classifications of previous stages, in different combinations. \end{itemize} \subsection{Implementing the machine} \indent Furthermore, depending on the vector chosen, we had to apply some machine learning algorithm for this final machine to give its final classification. We chose six different algorithms and ran them on the different vectors: AdaBoost Classifier, AdaBoost Regressor, XGB Classifier, XGB Regressor, Random Forest Classifier, Random Forest Regressor. \subsection{Improving Vector} \indent While running the fourth machine, we decided to examine the feature importance of the vectors. As explained in the previous phase, we chose 32 features for the third machine of this project, and we wanted to see whether all of these features were actually used and added value to our vector and machine. When we ran the fourth machine on top of all the previous machines, we checked, for each feature, whether it was used by the different algorithms of the fourth machine on our samples (838 samples, half malicious and half benign). We discovered that four of the features were not used at all, and therefore removed them from the feature vector of the fourth machine. The removed features were: the '/AcroForm' PDF tag, the number of '/x' expressions, the number of '/u0' expressions, and the number of URLs that contain encoded characters in the hostname. \indent Moreover, we performed the same feature importance examination for all the vectors of the previous machines, with the following results: \begin{table}[htb] \centering \begin{tabular}{|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|} \hline \centering{} & \centering{\textbf{AdaBoost Classifier}} & \centering{\textbf{AdaBoost Regressor}} & \centering{\textbf{XGB Classifier}} & \centering{\textbf{XGB Regressor}} & \centering{\textbf{RF Classifier}} & \centering{\textbf{RF Regressor}}\tabularnewline \hline \centering{\textbf{Image Histogram (1-512)}} & \centering{V} & \centering{V} & \centering{V} & \centering{V} & \centering{V} & \centering{V}\tabularnewline \hline \centering{\textbf{Image Blur (513)}} & \centering{V} & \centering{X} & \centering{V} & \centering{V} & \centering{V} & \centering{V}\tabularnewline \hline \centering{\textbf{Text (514-813)}} & \centering{31/300} & \centering{138/300} & \centering{29/300} & \centering{73/300} & \centering{300/300} & \centering{271/300}\tabularnewline \hline \centering{\textbf{Features (814-845)}} & \centering{8/32} & \centering{16/32} & \centering{7/32} & \centering{10/32} & \centering{26/32} & \centering{21/32}\tabularnewline \hline \end{tabular} \caption{Usage of the fields of the full vector by the algorithms of the fourth machine.} \end{table} \indent As the table above shows, only the AdaBoost regressor in the fourth machine does not use the blur of the image. Moreover, only the RF classifier and RF regressor use 90\%-100\% of the text vector, and these two algorithms also use the largest share of the feature vector.
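\indent To illustrate the ‘vector of machine results’ option and the feature-importance check used here (a sketch only, assuming scikit-learn; the same \texttt{feature\_importances\_} attribute is also exposed by the AdaBoost and XGBoost estimators):

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# p_image, p_text, p_feat: per-sample outputs of the three previous machines.
def machine_results_vector(p_image, p_text, p_feat):
    """Stack the three machine results into a 3-field vector per sample."""
    return np.column_stack([p_image, p_text, p_feat])

def fit_and_rank(X, y):
    """Fit one of the ensemble algorithms and report per-field importance."""
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    return clf, clf.feature_importances_
\end{verbatim}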
\subsection{Initial Results} \indent In this stage of the project we chose all sorts of possible vectors and all six machine learning algorithms mentioned above, and ran the ensemble machine in order to compare the results and find the best combination. We have tried the following vectors: \renewcommand{\labelitemi}{$\textendash$} \begin{itemize} \item Overall vector (vector that contains all previous stages vectors) with 32 features, and 28 features. \item Vector of machine results combining KNN and KMC for the first machine, and 32 and 28 features in the third machine. \item Combined vector, partly made of previous stages vectors, and partly by previous machine results. \end{itemize} \indent In the table below, the results of all types of vectors we tried on all six machine learning algorithms for the fourth machine are presented. The best result can be seen in the last row of the table. \begin{table}[htb] \centering \begin{longtable}{|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|} \hline \centering{} & \centering{\textbf{AdaBoost Classifier}} & \centering{\textbf{AdaBoost Regressor}} & \centering{\textbf{XGB Classifier}} & \centering{\textbf{XGB Regressor}} & \centering{\textbf{RF Classifier}} & \centering{\textbf{RF Regressor}}\tabularnewline \hline \centering{\textbf{Overall vector} \\ (32 features)} & \centering{97.62 \%} & \centering{97.02 \%} & \centering{97.02 \%} & \centering{95.24 \%} & \centering{95.38 \%} & \centering{95.24 \%}\tabularnewline \hline \centering{\textbf{Overall vector} \\ (28 features)} & \centering{97.62 \%} & \centering{97.02 \%} & \centering{97.02 \%} & \centering{95.24 \%} & \centering{96.43 \%} & \centering{95.24 \%}\tabularnewline \hline \centering{\textbf{Vector of machine results} \\ (KNN + SVM + 32 features)} & \centering{92.26 \%} & \centering{90.48 \%} & \centering{94.05 \%} & \centering{94.05 \%} & \centering{92.26 \%} & \centering{94.05 \%}\tabularnewline \hline \multicolumn{7}{r}{\textit{Continued on next page}} \\ \end{longtable} \end{table} \clearpage \newpage \begin{table}[htb] \centering \begin{longtable}{|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|} \hline \centering{\textbf{Vector of machine results} \\ (KNN + SVM + 28 features)} & \centering{92.26 \%} & \centering{91.07 \%} & \centering{92.86 \%} & \centering{94.64 \%} & \centering{94.64 \%} & \centering{94.64 \%}\tabularnewline \hline \centering{\textbf{Vector of machine results} \\ (KMC + SVM + 32 features)} & \centering{92.26 \%} & \centering{93.45 \%} & \centering{93.45 \%} & \centering{93.45 \%} & \centering{94.05 \%} & \centering{93.45 \%}\tabularnewline \hline \centering{\textbf{Vector of machine results} \\ (KMC + SVM + 28 features)} & \centering{92.26 \%} & \centering{93.45 \%} & \centering{93.45 \%} & \centering{93.45 \%} & \centering{94.64 \%} & \centering{95.24 \%}\tabularnewline \hline \centering{\textbf{Combined Vector} \\ (KMC + TEXT-VECTOR + 28 features)} & \centering{98.21 \%} & \centering{97.02 \%} & \centering{97.02 \%} & \centering{95.83 \%} & \centering{95.24 \%} & \centering{96.43 \%}\tabularnewline \hline \centering{\textbf{Combined Vector} \\ (KMC + SVM + 28 features)} & \centering{95.83 \%} & \centering{97.02 \%} & \centering{97.02 \%} & \centering{97.62 \%} & \centering{97.62 \%} & \centering{97.02 \%}\tabularnewline \hline \centering{\textbf{Combined Vector} \\ (IMAGE-VECTOR + SVM + 28 features)} & \centering{\textbf{98.81 \%}} & \centering{97.62 \%} & \centering{\textbf{98.81 \%}} & \centering{98.21 \%} & \centering{98.21 \%} 
& \centering{98.21 \%}\tabularnewline \hline \end{longtable} \caption{Results of the fourth machine on the different combinations of vectors and algorithms.} \end{table} \clearpage \newpage \subsection{Feature Importance} \indent In order to understand the relevance of each machine in the project, we ran the fourth machine on the vector of machine results and retrieved the feature importance reported by every algorithm. We ran the machine with the following configuration: KMC for the first machine (image), SVM for the second machine (text) and RF for the third machine (PDF features). The results are shown in the table below: \begin{table}[htb] \centering \begin{tabular}{|M{2.5cm}|M{2.5cm}|M{2.5cm}|M{2.5cm}|} \hline \centering{} & \centering{\textbf{Image}} & \centering{\textbf{Text}} & \centering{\textbf{Features}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{0.93} & \centering{0.04} & \centering{0.03}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{0.18419997} & \centering{0.60848697} & \centering{0.20731306}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{0.01018084} & \centering{0.92208797} & \centering{0.067731306}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{0.01556311} & \centering{0.94997084} & \centering{0.03446609}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{0.14533861} & \centering{0.53994799} & \centering{0.31471341}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{0.06494484} & \centering{0.90513809} & \centering{0.02991707}\tabularnewline \hline \end{tabular} \caption{Feature importance of all algorithms in the fourth machine on the vector of machine results (first machine – KMC, second machine – SVM, third machine – RF).} \label{tab:First} \end{table} \indent From the results in Table \ref{tab:First}, it can be seen clearly that in most cases the text field of the vector of machine results is the most significant one in terms of feature importance. That result surprised us, as it was not what we expected: the text and image machines should be less significant, since they judge only the visible content of the PDF file. As mentioned in the chapters on the image and text machines, visible content in the PDF does not necessarily mean that there is no further content in the file, such as the JS code and everything else discussed in phase four. \indent In order to improve the results of the fourth machine, we needed to go back and rethink our strategy for all three previous machines. When we first built the three machines, we ran them on a dataset of 506 samples; when we ran them again on 838 samples, we saw different results (shown in the chapters on the image, text and feature machines). For example, with 506 samples the best result for the first machine (image) was obtained with KNN, but with 838 samples the best result was obtained with KMC. Likewise, with 506 samples the best result for the second machine (text) was obtained with SVM, but with 838 samples the best results came from LR. \indent In addition, during our project we had a meeting with another researcher who focuses on evasion attacks.
From his point of view, the first and second machines of our project were very ineffective against an attacker who is acquainted with the machines (i.e. who knows that the first page is always taken for image and text extraction). Therefore, we added to these machines the option of choosing a random page of the file from which to extract the image and text. The random selection of the page is done twice, once for the text and once for the image, meaning that two different pages may be chosen from the same file (unless the file is only one page long); a short sketch of this page selection is shown further below, before the detailed result tables. \section{Results} \indent In this chapter we present and explain our results, namely the final results of the ensemble machine. The vector chosen from all the options presented in section~\ref{sec:vector5phase} is the vector of machine results, with 3 features, each representing the result of one machine (image, text, PDF features). The chosen algorithms for the first three machines are: KMC for the image classification machine, LR for the text classification machine, and RF for the PDF features classification machine. \begin{table}[htb] \centering \begin{tabular}{|M{3.0cm}|M{3.0cm}|M{3.0cm}|} \hline \centering{\textbf{Result of image machine}} & \centering{\textbf{Result of text machine}} & \centering{\textbf{Result of PDF features machine}}\tabularnewline \hline \end{tabular} \caption{Vector of machine results.} \end{table} \indent The results are reported on the following two datasets, in two variations each, depending on the way the text and image pages are selected: \begin{table}[htb] \centering \begin{tabular}{|M{2.5cm}|M{2.5cm}|M{2.5cm}|M{2.5cm}|} \hline \centering{\textbf{Dataset Index}} & \centering{\textbf{Number of samples}} & \centering{\textbf{Dataset content}} & \centering{\textbf{Page Selection}}\tabularnewline \hline \centering{\textbf{1}} & \centering{838} & \centering{50\% malicious, 50\% benign} & \centering{First page}\tabularnewline \hline \centering{\textbf{2}} & \centering{838} & \centering{50\% malicious, 50\% benign} & \centering{Random}\tabularnewline \hline \centering{\textbf{3}} & \centering{9,577} & \centering{95.625\% benign, 4.375\% malicious} & \centering{First page}\tabularnewline \hline \centering{\textbf{4}} & \centering{9,577} & \centering{95.625\% benign, 4.375\% malicious} & \centering{Random}\tabularnewline \hline \end{tabular} \caption{The four dataset groups referred to in the final results.} \end{table} \indent For all four groups presented above, the two datasets were divided in the following way: \renewcommand{\labelitemi}{$\textendash$} \begin{itemize} \item Training of the first three machines: 70\% of the samples (benign and malicious alike). \item Training of the ensemble machine: 20\% of the samples (benign and malicious alike). \item Test: 10\% of the samples (benign and malicious alike). \end{itemize} \subsection{Results of first group (838 samples; 50\% malicious / 50\% benign; first page for text \& image):} \indent First, we present the accuracy parameters of all six algorithms on this dataset. As can be seen in the table below, the best result here comes from the XGB classifier, with 96.43\% accuracy. Interestingly, the XGB regressor had more than one percentage point less accuracy than the XGB classifier, but its precision on the benign files and its recall on the malicious files were 100\%.
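\indent As referenced above, the random page selection used by the image and text machines can be sketched as follows (a minimal sketch, assuming the PdfReader API of a recent PyPDF2; the helper name is ours):

\begin{verbatim}
import random
from PyPDF2 import PdfReader

def random_page_index(pdf_path):
    """Pick a page at random for image or text extraction.

    Called separately for the image and for the text, so the two
    extractions may use different pages of the same file."""
    n_pages = len(PdfReader(pdf_path).pages)
    return random.randrange(n_pages)  # 0-based page index
\end{verbatim}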
\noindent\underline{Accuracy Parameters:} \begin{table}[htb] \centering \begin{tabular}{|M{1.2in}|M{0.6in}|M{0.5in}|M{0.6in}|M{0.4in}|M{0.4in}|}\hline & \centering{\textbf{Accuracy}} & \centering{\textbf{Class}} & \centering{\textbf{Precision}} & \centering{\textbf{Recall}} & \centering{\textbf{F1-Score}}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Classifier}} &\multirow{2}{*}{\centering{95.24\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{98\%} & \centering{93\%} & \centering{95\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{93\%} & \centering{98\%} & \centering{95\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Regressor}} &\multirow{2}{*}{\centering{92.86\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{95\%} & \centering{91\%} & \centering{93\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{91\%} & \centering{95\%} & \centering{93\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Classifier}} &\multirow{2}{*}{\centering{\textbf{96.43\%}}} & \multirow{1}{*}{\centering{Benign}} & \centering{95\%} & \centering{98\%} & \centering{97\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{97\%} & \centering{95\%} & \centering{96\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Regressor}} &\multirow{2}{*}{\centering{95.24\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{100\%} & \centering{91\%} & \centering{95\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{91\%} & \centering{100\%} & \centering{95\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Classifier}} &\multirow{2}{*}{\centering{94.05\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{95\%} & \centering{93\%} & \centering{94\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{93\%} & \centering{95\%} & \centering{94\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Regressor}} &\multirow{2}{*}{\centering{94.05\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{95\%} & \centering{93\%} & \centering{94\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{93\%} & \centering{95\%} & \centering{94\%}\tabularnewline \cline{1-6} \end{tabular} % Or to place a caption below a table \caption{Accuracy parameters for all algorithms on first group.} \end{table} \clearpage \newpage \indent As in the previous chapter, we wanted to examine the feature importance of our vector, made of all three previous machines results for this group: \noindent\underline{Feature Importance:} \begin{table}[htb] \centering \begin{tabular}{|M{2.5cm}|M{2.5cm}|M{2.5cm}|M{2.5cm}|} \hline \centering{} & \centering{\textbf{Image}} & \centering{\textbf{Text}} & \centering{\textbf{Features}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{0.95} & \centering{0.03} & \centering{0.02}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{0.23878133} & \centering{0.28937013} & \centering{0.47184864}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{0.02476732} & \centering{0.87546283} & \centering{0.099769983}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{0.02019422} & \centering{0.91034234} & \centering{0.06946351}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{0.1556851} & \centering{0.46782244} & 
\centering{0.037649246}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{0.10489435} & \centering{0.58775198} & \centering{0.30735367}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Feature importance for all algorithms on first group.} \end{table} \indent As can be seen above, the results of the feature importance change significantly between the algorithms. The result of the text importance is still relatively high in as to what we expect it to be in most algorithms. \indent \underline{Confusion Matrices}: (Positive = Benign, Negative = Malicious) \indent In the following table we present the confusion matrices for all algorithms, on the first group. The columns of the table are as follows: \renewcommand{\labelitemi}{$\textendash$} \begin{itemize} \item Pos. as Pos. – Benign samples classified as benign. \item Pos. as Neg. - Benign samples classified as malicious. \item Neg. as Pos. - Malicious samples classified as benign. \item Neg. as Neg. - Malicious samples classified as malicious. \end{itemize} \clearpage \newpage \begin{table}[htb] \centering \begin{tabular}{|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|} \hline \centering{} & \centering{\textbf{Pos. as Pos.}} & \centering{\textbf{Neg. as Pos.}} & \centering{\textbf{Pos. as Neg.}} & \centering{\textbf{Neg. as Neg.}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{40} & \centering{1} & \centering{3} & \centering{40}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{39} & \centering{3} & \centering{4} & \centering{39}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{42} & \centering{2} & \centering{1} & \centering{39}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{39} & \centering{\textbf{0}} & \centering{4} & \centering{41}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{40} & \centering{2} & \centering{3} & \centering{39}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{40} & \centering{2} & \centering{3} & \centering{39}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Confusion matrices for all algorithms on first group. Note that 84 samples are shown in the table (10\% of samples that were used for test).} \end{table} \indent An interesting fact from the confusion matrices is that the XGB regressor has not classified even a single malicious file as benign. \indent In addition to the results above, we decided to run the trained machines on the first group (machines that were trained and tested on 838 samples with the first page selected for the extraction of image and text of the PDF file) on a dataset that contained only benign samples. This all-benign dataset contained 8,739 samples that none of them were used in the previous dataset of 838 samples. The results of running the trained machines on the all-benign dataset were as follows: \clearpage \newpage \begin{table}[htb] \centering \begin{tabular}{|M{4.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|} \hline \centering{} & \centering{\textbf{Accuracy}} & \centering{\textbf{Pos. as Pos.}} & \centering{\textbf{Pos. 
as Neg.}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{95.23\%} & \centering{8322} & \centering{417} \tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{93.80\%} & \centering{8197} & \centering{542}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{97.12\%} & \centering{8487} & \centering{252}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{95.38\%} & \centering{8335} & \centering{404}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{95.28\%} & \centering{8414} & \centering{325}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{95.28\%} & \centering{8414} & \centering{325}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Result over remaining benign samples, first group} \end{table} \subsection{Results of second group (838 samples; 50\% malicious / 50\% benign; random pages for text \& image):} \indent We present here the results of the second group. The difference between this group and the previous one is the way the choice of the page for the image and text is made. Here the choice for these is made randomly. \noindent\underline{Accuracy Parameters:} \indent In the accuracy parameters of this group we can see that the random choice of page affected the results in all algorithms, causing a decrease in the accuracy. \begin{table}[htb] \centering \begin{tabular}{|M{1.2in}|M{0.6in}|M{0.5in}|M{0.6in}|M{0.4in}|M{0.4in}|}\hline & \centering{\textbf{Accuracy}} & \centering{\textbf{Class}} & \centering{\textbf{Precision}} & \centering{\textbf{Recall}} & \centering{\textbf{F1-Score}}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Classifier}} &\multirow{2}{*}{\centering{90.48\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{95\%} & \centering{86\%} & \centering{90\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{87\%} & \centering{95\%} & \centering{91\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Regressor}} &\multirow{2}{*}{\centering{91.67\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{93\%} & \centering{91\%} & \centering{92\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{90\%} & \centering{93\%} & \centering{92\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Classifier}} &\multirow{2}{*}{\centering{91.67\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{93\%} & \centering{91\%} & \centering{92\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{90\%} & \centering{93\%} & \centering{92\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Regressor}} &\multirow{2}{*}{\centering{92.86\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{93\%} & \centering{93\%} & \centering{93\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{93\%} & \centering{93\%} & \centering{93\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Classifier}} &\multirow{2}{*}{\centering{92.86\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{95\%} & \centering{91\%} & \centering{93\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{91\%} & \centering{95\%} & \centering{93\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Regressor}} &\multirow{2}{*}{\centering{92.86\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{95\%} & 
\centering{91\%} & \centering{93\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{91\%} & \centering{95\%} & \centering{93\%}\tabularnewline \cline{1-6} \end{tabular} % Or to place a caption below a table \caption{Accuracy parameters for all algorithms on second group.} \end{table} \clearpage \newpage \noindent\underline{Feature Importance:} \indent The feature importance was also affected from the random choice of pages for text and image extraction. The table below shows a significant decrease in the importance of the text feature, and a significant increase in the third feature, that contains the classification of the third machine on PDF features. \begin{table}[htb] \centering \begin{tabular}{|M{2.5cm}|M{2.5cm}|M{2.5cm}|M{2.5cm}|} \hline \centering{} & \centering{\textbf{Image}} & \centering{\textbf{Text}} & \centering{\textbf{Features}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{0.95} & \centering{0.02} & \centering{0.03}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{0.44241618} & \centering{0.08196731} & \centering{0.475616652}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{0.01986038} & \centering{0.04689374} & \centering{0.9332459}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{0.02112981} & \centering{0.03368637} & \centering{0.9451838}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{0.30664552} & \centering{0.22409619} & \centering{0.46925829}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{0.11688741} & \centering{0.05400321} & \centering{0.82910937}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Feature importance for all algorithms on second group.} \end{table} \noindent\underline{Confusion Matrices}: (Positive = Benign, Negative = Malicious) \indent Here we present the confusion matrices for all algorithms on the second group. Note that the columns of the table are the same as in presented in the previous group's results. We can see a slight decrease in the classification of the samples. \clearpage \newpage \begin{table}[htb] \centering \begin{tabular}{|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|} \hline \centering{} & \centering{\textbf{Pos. as Pos.}} & \centering{\textbf{Neg. as Pos.}} & \centering{\textbf{Pos. as Neg.}} & \centering{\textbf{Neg. as Neg.}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{37} & \centering{2} & \centering{6} & \centering{39}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{39} & \centering{3} & \centering{4} & \centering{38}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{39} & \centering{3} & \centering{4} & \centering{38}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{40} & \centering{3} & \centering{3} & \centering{38}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{39} & \centering{2} & \centering{4} & \centering{39}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{39} & \centering{2} & \centering{4} & \centering{39}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Confusion matrices for all algorithms on second group. 
Note that 84 samples are shown in the table (10\% of samples that were used for test).} \end{table}
\indent As with the first group, here too we ran the machines that were trained on the second group (machines that were trained and tested on 838 samples with random page selection for image and text extraction) on the all-benign dataset. The results of running the trained machines of the second group on the all-benign dataset were as follows:
\begin{table}[htb] \centering \begin{tabular}{|M{4.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|} \hline \centering{} & \centering{\textbf{Accuracy}} & \centering{\textbf{Pos. as Pos.}} & \centering{\textbf{Pos. as Neg.}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{93.50\%} & \centering{8171} & \centering{568} \tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{96.28\%} & \centering{8414} & \centering{325}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{93.60\%} & \centering{8180} & \centering{559}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{94.01\%} & \centering{8216} & \centering{523}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{90.03\%} & \centering{7868} & \centering{871}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{94.01\%} & \centering{8216} & \centering{523}\tabularnewline \hline \end{tabular}
% Or to place a caption below a table
\caption{Result over remaining benign samples, second group.} \end{table}
\subsection{Results of third group (9,577 samples; 95.625\% benign / 4.375\% malicious; first page for text \& image):}
\indent In this group, as written above, we reach the larger dataset, with mostly benign samples. This dataset contains all the samples we have, and the imbalance between the numbers of malicious and benign samples affects the results.
\noindent\underline{Accuracy Parameters:}
\indent In the accuracy parameters of this group we note that the AdaBoost classifier has returned the best result, 0.21\% higher than all the other algorithms, which all returned the same accuracy.
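\indent For reference, these accuracy parameters (accuracy, per-class precision, recall and F1-score) and the confusion matrices shown later are standard classification metrics computed on the held-out test split. The following minimal sketch shows how they can be obtained; it assumes scikit-learn and uses illustrative variable names rather than the actual code of our project.
\begin{verbatim}
# Minimal sketch: accuracy, per-class precision/recall/F1 and the
# confusion matrix, as reported in the tables of this chapter.
# Assumes scikit-learn; y_test / y_pred are illustrative names, and
# label 1 = benign, label 0 = malicious is an assumption.
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

def report_accuracy_parameters(y_test, y_pred):
    print("Accuracy: {:.2%}".format(accuracy_score(y_test, y_pred)))
    # Per-class precision, recall and F1-score.
    print(classification_report(y_test, y_pred,
                                target_names=["Malicious", "Benign"]))
    # Counts corresponding to the Pos./Neg. columns (positive = benign).
    print(confusion_matrix(y_test, y_pred))
\end{verbatim}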
\begin{table}[htb] \centering \begin{tabular}{|M{1.2in}|M{0.6in}|M{0.5in}|M{0.6in}|M{0.4in}|M{0.4in}|}\hline & \centering{\textbf{Accuracy}} & \centering{\textbf{Class}} & \centering{\textbf{Precision}} & \centering{\textbf{Recall}} & \centering{\textbf{F1-Score}}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Classifier}} &\multirow{2}{*}{\centering{\textbf{99.05\%}}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{100\%} & \centering{100\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{92\%} & \centering{86\%} & \centering{89\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Regressor}} &\multirow{2}{*}{\centering{98.84\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{99\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{88\%} & \centering{86\%} & \centering{87\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Classifier}} &\multirow{2}{*}{\centering{98.84\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{99\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{88\%} & \centering{86\%} & \centering{87\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Regressor}} &\multirow{2}{*}{\centering{98.84\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{99\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{88\%} & \centering{86\%} & \centering{87\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Classifier}} &\multirow{2}{*}{\centering{98.84\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{99\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{88\%} & \centering{86\%} & \centering{87\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Regressor}} &\multirow{2}{*}{\centering{98.84\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{99\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{88\%} & \centering{86\%} & \centering{87\%}\tabularnewline \cline{1-6} \end{tabular} % Or to place a caption below a table \caption{Accuracy parameters for all algorithms on third group.} \end{table} \noindent\underline{Feature Importance:} \indent The feature importance has also changed from previous groups on the smaller dataset. Here we can see a real mix-and-match in the feature importance in the algorithms. All algorithms except our leading algorithm in accuracy, AdaBoost classifier, have given very little importance to the image, and AdaBoost classifier has given almost all the importance to the image feature. This fact is a bit confusing. 
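\indent The importance values shown in the table below are read directly from the fitted models. The following minimal sketch illustrates how such values can be obtained; it assumes scikit-learn (the AdaBoost, random forest and XGBoost estimators, classifiers and regressors alike, expose the same attribute) and uses illustrative variable names rather than our project code.
\begin{verbatim}
# Minimal sketch: reading the importance of the three ensemble features
# (image, text, PDF features) from a fitted model.
# X_train / y_train are illustrative names.
from sklearn.ensemble import RandomForestClassifier

FEATURE_NAMES = ["Image", "Text", "Features"]

def show_feature_importance(X_train, y_train):
    model = RandomForestClassifier()
    model.fit(X_train, y_train)
    for name, value in zip(FEATURE_NAMES, model.feature_importances_):
        print("{}: {:.8f}".format(name, value))
\end{verbatim}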
\clearpage \newpage \begin{table}[htb] \centering \begin{tabular}{|M{2.5cm}|M{2.5cm}|M{2.5cm}|M{2.5cm}|} \hline \centering{} & \centering{\textbf{Image}} & \centering{\textbf{Text}} & \centering{\textbf{Features}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{0.95} & \centering{0.03} & \centering{0.02}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{0.19335592} & \centering{0.68552074} & \centering{0.12112334}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{0.01837745} & \centering{0.29502392} & \centering{0.6865986}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{0.00695719} & \centering{0.44185147} & \centering{0.5511914}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{0.03115066} & \centering{0.4734846} & \centering{0.49536474}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{0.01641713} & \centering{0.35635909} & \centering{0.62722378}\tabularnewline \hline \end{tabular}
% Or to place a caption below a table
\caption{Feature importance for all algorithms on third group.} \end{table}
\noindent\underline{Confusion Matrices}: (Positive = Benign, Negative = Malicious)
\indent Here we present the confusion matrices for all algorithms on the third group. These results mirror the accuracy parameters in that all the counts are the same for all algorithms except the AdaBoost classifier; the difference in the accuracy parameters is explained by the table below.
\clearpage \newpage \begin{table}[htb] \centering \begin{tabular}{|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|} \hline \centering{} & \centering{\textbf{Pos. as Pos.}} & \centering{\textbf{Neg. as Pos.}} & \centering{\textbf{Pos. as Neg.}} & \centering{\textbf{Neg. as Neg.}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{904} & \centering{6} & \centering{3} & \centering{36}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{902} & \centering{6} & \centering{5} & \centering{36}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{902} & \centering{6} & \centering{5} & \centering{36}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{902} & \centering{6} & \centering{5} & \centering{36}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{902} & \centering{6} & \centering{5} & \centering{36}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{902} & \centering{6} & \centering{5} & \centering{36}\tabularnewline \hline \end{tabular}
% Or to place a caption below a table
\caption{Confusion matrices for all algorithms on third group. Note that 949 samples are shown in the table (10\% of samples that were used for test).} \end{table}
\subsection{Results of fourth group (9,577 samples; 95.625\% benign / 4.375\% malicious; random pages for text \& image):}
\indent In this group, as written above, the dataset is much bigger, with mostly benign samples. The difference between this group and the previous one is the way the page for the image and text extraction is chosen. Here the choice is made randomly.
\noindent\underline{Accuracy Parameters:}
\indent As can be seen in the accuracy parameters below, the random choice of pages for the image and text extraction has decreased the performance of the machines in this group.
The highest accuracy was received from AdaBoost classifier and RF classifier with 98.63\% accuracy. The rest of the algorithms had the same accuracy, 98.52\%, only 0.11\% less than the two leading algorithms. \clearpage \newpage \begin{table}[htb] \centering \begin{tabular}{|M{1.2in}|M{0.6in}|M{0.5in}|M{0.6in}|M{0.4in}|M{0.4in}|}\hline & \centering{\textbf{Accuracy}} & \centering{\textbf{Class}} & \centering{\textbf{Precision}} & \centering{\textbf{Recall}} & \centering{\textbf{F1-Score}}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Classifier}} &\multirow{2}{*}{\centering{\textbf{98.63\%}}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{100\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{91\%} & \centering{76\%} & \centering{83\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{AdaBoost Regressor}} &\multirow{2}{*}{\centering{98.52\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{100\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{89\%} & \centering{76\%} & \centering{82\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Classifier}} &\multirow{2}{*}{\centering{98.52\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{100\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{89\%} & \centering{76\%} & \centering{82\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{XGB Regressor}} &\multirow{2}{*}{\centering{98.52\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{100\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{89\%} & \centering{76\%} & \centering{82\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Classifier}} &\multirow{2}{*}{\textbf{\centering{98.63\%}}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{100\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{91\%} & \centering{76\%} & \centering{83\%}\tabularnewline \cline{1-6} \multirow{2}{1.2in}{\textbf{Random Forest Regressor}} &\multirow{2}{*}{\centering{98.52\%}} & \multirow{1}{*}{\centering{Benign}} & \centering{99\%} & \centering{100\%} & \centering{99\%}\tabularnewline \cline{3-6} && \multirow{1}{*}{\centering{Malicious}} & \centering{89\%} & \centering{76\%} & \centering{82\%}\tabularnewline \cline{1-6} \end{tabular} % Or to place a caption below a table \caption{Accuracy parameters for all algorithms on fourth group.} \end{table} \noindent\underline{Feature Importance:} \indent The feature importance has also been affected by the random selection of the pages for the image and text extraction. Most algorithms have increased the importance of the third feature - classification of the PDF features machine. 
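\indent The random page choice mentioned above is the only difference between this group and the third group. A minimal sketch of such a selection is shown below; it uses only the Python standard library, and in practice the page count would be read from the PDF itself with a PDF parsing library.
\begin{verbatim}
# Minimal sketch of the page-selection step for the image and text machines.
# page_count is assumed to have been read from the PDF beforehand.
import random

def choose_page(page_count, randomize):
    # First/third group behaviour: always the first page (index 0).
    if not randomize or page_count <= 1:
        return 0
    # Second/fourth group behaviour: a uniformly chosen page, so an
    # attacker cannot rely on only the first page being inspected.
    return random.randrange(page_count)
\end{verbatim}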
\begin{table}[htb] \centering \begin{tabular}{|M{2.5cm}|M{2.5cm}|M{2.5cm}|M{2.5cm}|} \hline \centering{} & \centering{\textbf{Image}} & \centering{\textbf{Text}} & \centering{\textbf{Features}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{0.92} & \centering{0.02} & \centering{0.06}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{0.16147962} & \centering{0.12584921} & \centering{0.71267117}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{0.01837745} & \centering{0.29502392} & \centering{0.71267117}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{0.00457967} & \centering{0.29886076} & \centering{0.69655967}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{0.03530152} & \centering{0.31630768} & \centering{0.6483908}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{0.01775447} & \centering{0.15577358} & \centering{0.82647196}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Feature importance for all algorithms on fourth group.} \end{table} \noindent\underline{Confusion Matrices}: (Positive = Benign, Negative = Malicious) \indent Here we present the confusion matrices for all algorithms on the fourth group. As can be seen in the table below, the confusion matrices of all six algorithms are similar. \begin{table}[htb] \centering \begin{tabular}{|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|} \hline \centering{} & \centering{\textbf{Pos. as Pos.}} & \centering{\textbf{Neg. as Pos.}} & \centering{\textbf{Pos. as Neg.}} & \centering{\textbf{Neg. as Neg.}}\tabularnewline \hline \centering{\textbf{AdaBoost Classifier}} & \centering{904} & \centering{10} & \centering{3} & \centering{32}\tabularnewline \hline \centering{\textbf{AdaBoost Regressor}} & \centering{903} & \centering{10} & \centering{4} & \centering{32}\tabularnewline \hline \centering{\textbf{XGB Classifier}} & \centering{903} & \centering{10} & \centering{4} & \centering{32}\tabularnewline \hline \centering{\textbf{XGB Regressor}} & \centering{903} & \centering{10} & \centering{4} & \centering{32}\tabularnewline \hline \centering{\textbf{Random Forest Classifier}} & \centering{904} & \centering{10} & \centering{3} & \centering{32}\tabularnewline \hline \centering{\textbf{Random Forest Regressor}} & \centering{903} & \centering{10} & \centering{4} & \centering{32}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Confusion matrices for all algorithms on fourth group. Note that 949 samples are shown in the table (10\% of samples that were used for test).} \end{table} \subsection{Comparing between results from third and fourth groups:} \indent In this section we will show the differences between the performances of the models of the third and fourth groups on the second dataset. As can be seen below, there is a slight decrease when the page choice for the extraction of the image and text is done randomly. Each of these groups (third and fourth) has its own advantage. The third group returns slightly better results, but the fourth group handles evasion attacks in the image and text machines. 
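\indent The overall accuracies compared in the table below come from the same training and evaluation procedure for both groups. The following sketch illustrates that loop on synthetic data; it only shows the shapes involved (three-entry ensemble vectors and a 10\% test split) and the scikit-learn algorithms, and it is not our project code.
\begin{verbatim}
# Illustrative sketch of the training/evaluation loop behind the overall
# accuracies.  The data here is random and stands in for the three-entry
# ensemble vectors (image, text, PDF features) of the 9,577 samples.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((9577, 3))            # columns: image, text, PDF features
y = rng.integers(0, 2, 9577)         # 1 = benign, 0 = malicious (synthetic)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)

for model in (AdaBoostClassifier(), RandomForestClassifier()):
    pred = model.fit(X_train, y_train).predict(X_test)
    print(type(model).__name__, accuracy_score(y_test, pred))
\end{verbatim}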
\begin{table}[htb] \centering \begin{tabular}{|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}|} \hline \centering{} & \centering{\textbf{AdaBoost Classifier}} & \centering{\textbf{AdaBoost Regressor}} & \centering{\textbf{XGB Classifier}} & \centering{\textbf{XGB Regressor}} & \centering{\textbf{RF Classifier}} & \centering{\textbf{RF Regressor}}\tabularnewline \hline \centering{\textbf{First Page}} & \centering{\textbf{99.05 \%}} & \centering{98.84 \%} & \centering{98.84 \%} & \centering{98.84 \%} & \centering{98.84 \%} & \centering{98.84 \%}\tabularnewline \hline \centering{\textbf{Random Page}} & \centering{\textbf{98.63 \%}} & \centering{98.52 \%} & \centering{98.52 \%} & \centering{95.24 \%} & \centering{\textbf{98.63 \%}} & \centering{98.52 \%}\tabularnewline \hline \end{tabular}
% Or to place a caption below a table
\caption{Overall accuracy of the algorithms on the third (first page) and fourth (random page) group.} \end{table}
\indent There are plenty of studies that have dealt with evasion attacks in the PDF feature area; therefore, we do not focus on this in our work.
\subsection{Comparing our results to results from previous work:}
\indent In our project we have focused on three areas of interest in a PDF file: image, text and PDF features. Most of the previous studies we have read have focused on one main area of interest. These vary from static lexical analysis, JavaScript analysis, PDF features and more. These previous studies have either supplied a tool that identifies malicious files, or a classification model using machine learning.
\indent In this section we will compare the results we received from our ensemble machine to results from two previous studies. The first focuses on PDF feature extraction and machine learning techniques applied to the extracted vector. This resembles the third machine we have created in our project. The second focuses on the structural properties of PDF files (learning the difference between the structural trees of malicious and benign files). It delves into the relations between the objects of the file in its structural tree.
\indent In the research by Torres and De Los Santos from 2018 \cite{torres2018malicious}, they combined one set of features and three ML algorithms to classify whether a PDF is malicious or benign. The features they chose were not presented, but they explained that they were only related to PDF tags (such as /JS, etc.). The machine learning algorithms they used were SVM, RF, and MLP. Their dataset contained 1,712 samples – half of the samples were malicious and half benign. They divided their dataset in the following way: 995 samples (about 58\%) for training, 217 samples (about 12\%) for validation, and 500 samples (30\%) for testing. Their results are presented in the table below:
\begin{table}[htb] \centering \begin{tabular}{|l|l|} \hline \centering{\textbf{Algorithm}} & \centering{\textbf{Accuracy}}\tabularnewline \hline \centering{\textbf{SVM}} & \centering{50\%}\tabularnewline \hline \centering{\textbf{RF}} & \centering{92\%}\tabularnewline \hline \centering{\textbf{MLP}} & \centering{96\%}\tabularnewline \hline \end{tabular}
% Or to place a caption below a table
\caption{Overall accuracy of the algorithms in \cite{torres2018malicious}.} \end{table}
\indent As the table above shows, the highest accuracy reached in this research was 96\% with the MLP algorithm.
We will compare these results with the results presented for the first and second group in the results chapter because of the resemblance between the division of malicious and benign samples in the dataset (half malicious and half benign in both). It is important to note that we had less than half the number of samples in the dataset for the first and second group (838 samples) that Torres and De Los Santos had in their research (1,712 samples).
\clearpage \newpage \begin{table}[htb] \centering \begin{tabular}{|l|M{2.0cm}|M{2.0cm}|M{2.0cm}|M{2.0cm}|} \hline & \centering{\textbf{MLP - Previous work}} & \centering{\textbf{First Group}} & \centering{\textbf{Second Group}} & \centering{\textbf{Combined Vector} (IMAGE-VECTOR + SVM + 28 features)}\tabularnewline \hline \centering{\textbf{Accuracy}} & \centering{96\%} & \centering{96.43\%} & \centering{92.86\%} & \centering{98.81\%}\tabularnewline \hline \end{tabular}
% Or to place a caption below a table
\caption{Comparison of overall accuracy between our work and Torres \& De Los Santos’s research.} \end{table}
\indent In the table above, we compare the results of the first two groups in the results chapter, and the best result we obtained on 838 samples in the fifth phase of our project, to the results from the research \cite{torres2018malicious}. In the second group we have worse results, but in the first group and the combined vector we reach higher results with fewer samples. Overall, our results are close to those they obtained from PDF tag features only.
\indent In another study by researchers from Ben-Gurion University \cite{BGU2014malicious} we have seen a different and interesting approach for the identification of malicious PDF files. They implemented the idea presented by Srndic and Laskov \cite{Srndic2013Laskov}, who presented a static analysis approach based on hierarchical structural path feature extraction. This approach makes use of essential differences in the structural properties of malicious and benign PDF files. The structural paths represent the file’s properties and possible actions.
\indent The model presented in \cite{BGU2014malicious} receives new samples every day; part of them are used for retraining the machine and part for retesting. The initial dataset contained 76\% benign samples (446 samples) and 24\% malicious (128 samples). Each day they added 160 new samples to the model (adjusting the data to resemble the initial malicious/benign ratio). If an interesting file was located by the model, it was used to retrain the machine. The algorithm they used in their model was SVM-Margin.
At the end of 10 days they reached the following results: \begin{table}[htb] \centering \begin{tabular}{|l|l|l|} \hline & \centering{\textbf{TPR}} & \centering{\textbf{FPR}}\tabularnewline \hline \centering{\textbf{Third Group (benign = pos.)}} & \centering{99.67\%} & \centering{7.14\%}\tabularnewline \hline \centering{\textbf{Fourth Group (benign = pos.)}} & \centering{99.67\%} & \centering{7.14\%}\tabularnewline \hline \centering{\textbf{Third Group (malicious = pos.)}} & \centering{85.71\%} & \centering{0.66\%}\tabularnewline \hline \centering{\textbf{Fourth Group (malicious = pos.)}} & \centering{76.19\%} & \centering{1.10\%}\tabularnewline \hline \centering{\textbf{SVM-Margin (10 day)}} & \centering{96\%} & \centering{0.05\%}\tabularnewline \hline \end{tabular} % Or to place a caption below a table \caption{Comparison of overall TPR and FPR between our work and researchers from Ben Gurion University.} \end{table} \indent In the results of \cite{BGU2014malicious}, they have not shown accuracy, only the TPR (true positive rate) and FPR (false positive rate) measurements of their work. In their work they have related to malicious samples as positive. In the table above these measurements are compared to ours. As can be seen, their research has returned better results. By the measurements above we can see that our model is very good for the identification of benign PDF files, but not as much for malicious PDF files. The fact that there were very few malicious samples in our dataset (4.375\% malicious samples in our dataset compared to 24\% malicious samples in their dataset) made the machines look for very specific things about the malicious files, therefore reaching not good enough results for our model. \indent In order to compare the models in a better way (to both researches), we need to increase the number of malicious samples in our datasets, and adjust the datasets in a way that will match the datasets in the two researches shown above in a better way (size and content of our dataset). \subsection{Conclusion} \indent The aim of our project was to create an ensemble machine that will serve as a classifier for PDF files. This ensemble machine focuses on three different areas of the PDF file: image, text and file features. The first machine uses a clustering algorithm on a vector of the image histogram and blur, extracted from the first page of the file. The second machine uses ‘TFIDF’ as the word embedding and logistic regression as the machine learning algorithm to classify the file by the text extracted from the first page of the file. The third machine uses a vector of features extracted from the whole file, and random forest as the machine learning algorithm to classify the file by its features. \indent During the different phases of our work, while working on each machine, we had tried different approaches for the way of building the vectors, and different deep learning and machine learning algorithms. In addition to that, during our work we met with our advisors and another researcher and talked about different ways that an attacker can evade our ensemble machine. We built an algorithm that instead of always choosing the first page for our two first machines, chooses a random pages of the file for that. \indent We have compared our results to two researches that have two different approaches, each focusing on one area of interest of the PDF file. We have indicated what has to be improved in our project and dataset in order to reach more realistic results. 
The improvements include adding malicious samples to our dataset, and creating a recommended proportion between the malicious and benign files in the dataset as can be seen in \cite{BGU2014malicious}. \section{Future Work} \indent During the research and execution of our work, we have encountered many approaches, methods and tools that deal with the identification of malicious PDF files. There are many researches that have developed new approaches and tools with good results. When we started our project, we have focused on three specific areas of interest and machine learning techniques for those three areas. Combining three approaches in one work is relatively not as common, but there are many more methods that can be used and combined in our work. \noindent Overall the biggest issues in our work that we would like to address in the future are: \begin{enumerate} \item\underline{The dataset} – We would like to increase the number of malicious samples in the dataset and find the best balance between malicious and benign samples in the dataset \cite{BGU2014malicious}, relevant for all the machines in the project. \item\underline{Integration of more methods} – Another option is increasing the number of methods used to classify the files. That means adding machines with more types of classification and increasing the size of the ensemble machine's vector. By methods we mean the feature selection methods (more types of data regarding the file such as object structure, etc.), and the detection approaches (using not only machine learning as has been done, but adding more approaches such as statistical analysis and clustering) \cite{BGU2014survey} \cite{Baldoni2018survey}. \item\underline{Training method} – In our work we have trained the machines once, and tested them directly. Another approach that can be implemented is using a retraining method, that trains the machines periodically or every time there is more samples, this way maybe we can achieve better results. \end{enumerate} \noindent With all that said, each phase of our project can be improved itself, we present these improvements in the sections below. \subsection{Phase 2 Improvements} \subsubsection{Additional methods of picture classification} \indent Histogram is clearly not the only approach existing in the classification of images. Comparing the vector that we created with other approaches may yield better results than the results we achieved. \subsubsection{Near Similar Image Matching} \indent Image matching may be another way to reach image classification in PDF files. It works by detecting special features of the image and comparing to others. \subsection{Phase 3 Improvements } \subsubsection{Extraction of text to work for more languages} \indent In our project the extraction of text from image works only for Latin and Cyrillic characters. That means that if there is another language and we have to extract the text from the image, the text machine won't work. Furthermore, if the text is directly extracted from the file or from the image of the file, and is not in English, it will be needed to write a machine suitable for that language (the machine we built works only on English text). \subsubsection{Improving text vector} \indent During our work we considered trying another approach of creating the text vector using ‘Word2vec’ with cosine similarity. Due to time limitations we did not get to do it. 
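\indent As a possible starting point for this idea, the sketch below builds a document vector by averaging Word2vec word vectors and compares two documents with cosine similarity. It assumes gensim (version 4 of its API) and scikit-learn; the tiny corpus and all names are purely illustrative and are not part of our project code.
\begin{verbatim}
# Possible starting point for the Word2vec + cosine similarity idea.
# Assumes gensim 4.x and scikit-learn; the corpus is illustrative only.
import numpy as np
from gensim.models import Word2Vec
from sklearn.metrics.pairwise import cosine_similarity

docs = [["please", "see", "the", "attached", "invoice"],
        ["open", "the", "attachment", "to", "view", "the", "invoice"]]
model = Word2Vec(sentences=docs, vector_size=50, window=5, min_count=1)

def doc_vector(tokens):
    # Average the vectors of the tokens that are in the vocabulary.
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(model.vector_size)

a = doc_vector(docs[0]).reshape(1, -1)
b = doc_vector(docs[1]).reshape(1, -1)
print(cosine_similarity(a, b)[0, 0])
\end{verbatim}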
\indent Moreover, we used a slim dictionary due to memory limitations, but there are many advanced dictionaries that also include relations between words and phrases that can be used and may improve results. \subsection{Phase 4 Improvements} \subsubsection{Feature selection improvement} \indent During our work, we have read many articles and researches that focus on whole file features, PDF tags, and more. Many of these did not mention specifically what they used, and many did. We could not include everything in our work, but we could try to make a few combinations to see if we can reach better results just by changing the features in our vector. \indent An example for features we did not know about was ‘SubmitForm’, a PDF tag we discovered while writing the project book \cite{Hamon2013malicious}. Another example is “/GoTo” \cite{BGU2014malicious}. \indent Another approach of improving the vector is combining features, for example instead of creating a feature for the /Obj tag and another for /EndObj tag, creating one feature that indicates if the number of objects opened and the number of the objects closed are the same or not. \indent Another idea is adding the approach that uses N-Grams in order to distinguish between benign and malicious PDF files. This can be seen in \cite{Joachims1999Thorsten}. \indent Lastly, we could create a list of all possible features, and choose randomly from it repeatedly until we find the list that yields us the best classification. \subsection{Phase 5 Improvements} \subsubsection{Additional methods} \indent In the ensemble machine we could continue trying to improve the vector and the algorithms used in order to reach better results. \subsection{Missing trailer \& malformation in ‘xref’ table} \indent We have encountered numerous problems with files that did not have trailers. Even though in Didier Stevens’s work \cite{1} it is mentioned that this does not necessarily affect the files, it caused us not to be able to open them. \indent Moreover, we did not delve into the area of malformations in the PDF file. Throughout our work we have seen researches that focused specifically in this area \cite{torres2018malicious} \cite{OtsuboChecker}, and learnt for example that if there are objects in the file that do not appear in the ‘xref’ table, there is a high chance that a benign file was altered to contain malicious contents. %-------------------------------------------------------REFERENCES---------------------------------------------- \medskip \begin{thebibliography}{9} \bibitem{patil2018malicious} Patil, Dharmaraj R and Patil, JB. \textit{Cybernetics and Information Technologies}. [\textit{Malicious URLs Detection Using Decision Tree Classifiers and Majority Voting Technique}]. pages:11–29, 2018. \bibitem{torres2018malicious} Torres, Jose and De Los Santos, Sergio. \textit{Malicious PDF Documents Detection using Machine Learning Techniques}. 2018. \bibitem{1} Didier Stevens. \textit{Pdf Tools}. \texttt{https://blog.didierstevens.com/programs/pdf-tools/} 2017. \bibitem{davide2019malicious} Davide Maiorca, Battista Biggio and Giorgio Giacinto. \textit{Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks}. April 2019. \bibitem{BGU2014malicious} Nir Nissim, Aviad Cohen, Robert Moskovitch, Asaf Shabtai, Matan Edri, Oren BarAd and Yuval Elovici. \textit{Keeping pace with the creation of new malicious PDF files using an active-learning based detection framework}. 
\bibitem{BGU2014survey} Nir Nissim, Aviad Cohen, Chanan Glezer and Yuval Elovici. \textit{Detection of Malicious PDF files and directions for enhancements: A state-of-the-art survey}. \bibitem{JSSrndic2011Laskov} Nedim Srndic and Pavel Laskov. \textit{Static Detection of Malicious JavaScript-Bearing PDF Documents}. 2011. \bibitem{Srndic2013Laskov} Nedim Srndic and Pavel Laskov. \textit{Detection of Malicious PDF Files Based on Hierarchical Document Structure}. 2013. \bibitem{Baldoni2018survey} Michele Elingiusti, Leonardo Aniello, Leonardo Querzoni and Roberto Baldoni. \textit{PDF-Malware Detection: A Survey and Taxonomy of Current Techniques}. 2018. \bibitem{Hamon2013malicious} Hamon Valentin. \textit{Malicious URI Resolving in PDF Documents}. 2013. \bibitem{Joachims1999Thorsten} Joachims Thorsten. \textit{Making Large Scale SVM Learning Practical}. 1999. \bibitem{OtsuboChecker} Yuhei Otsubo. \textit{O-checker: Detection of Malicious Documents through Deviation from File Format Specification}. \bibitem{Bonan2018ML} Bonan Cuan, Aliénor Damien, Claire Delaplace, Mathieu Valois. \textit{Malware Detection in PDF Files Using Machine Learning}. 2018. \bibitem{JAST2018} Aurore Fass, Robert P. Krawczyk, Michael Backes, Ben Stock. \textit{JaSt: Fully Syntactic Detection of Malicious (Obfuscated) JavaScript}. 2018. \bibitem{AnalyzePDF2014} Hiddenillusion. \textit{AnalyzePDF}. \texttt{https://github.com/hiddenillusion/AnalyzePDF} 2014. \bibitem{Peepdf2016} Jesparza. \textit{PeePDF}. \texttt{https://github.com/jesparza/peepdf} 2016. \bibitem{HistogramImage} Adrian Rosebrock. \textit{KNN Classifier for Image Classification}. \texttt{https://www.pyimagesearch.com/2016/08/08/k-nn-classifier-for-image-classification/} 2016. \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.7579969097, "avg_line_length": 82.4342012667, "ext": "tex", "hexsha": "6e2fc5764044fb987ed035ad55c3a8d3714131f5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "61bf9018a1d7729412a30bff91a46fc2768722c0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AUFinalProject/ProjectDefence", "max_forks_repo_path": "FINAL_noID.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "61bf9018a1d7729412a30bff91a46fc2768722c0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AUFinalProject/ProjectDefence", "max_issues_repo_path": "FINAL_noID.tex", "max_line_length": 967, "max_stars_count": null, "max_stars_repo_head_hexsha": "61bf9018a1d7729412a30bff91a46fc2768722c0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AUFinalProject/ProjectDefence", "max_stars_repo_path": "FINAL_noID.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 32234, "size": 117139 }
\section{LHCb Experiment}
%Brief overview of LHCb setting the scene for the rest of the section; physics aims, motivates the shape, it collects stuff and is made of many subdetectors, look here is a picture.
% Should define what upstream and downstream are
The physics motivation is the indirect search for evidence of new physics by looking at CP violation and rare decays of beauty and charm hadrons. (Almost a quote). The lower luminosity means that events are dominated by a single pp interaction, which reduces the occupancy of the detector and also reduces radiation damage. LHCb must have a good trigger (efficiency) to select B hadrons with a variety of final states. To identify B hadrons we need excellent vertex and momentum resolution, which are necessary to have good proper time resolution and mass resolution. We need to be able to identify protons, kaons and pions. The detector geometry is chosen because at high energies b and anti-b hadrons are produced in either a forward or backward cone around the beam pipe.
%Perhaps I could add a section about b physics and LHCb design and go into more detail about the production? Then the start of this section could just be an overview of LHCb as an introduction that sets the scene for the next sections.
\subsection{\bbbar production}
The LHCb experiment was designed to make precise measurements of processes predicted by the SM in order to indirectly search for hints of New Physics processes.
%Simon has a reference here.
In particular it was designed to measure both CP violating decays and rare decays originating from $b$-hadrons. The physics motivations for the LHCb experiment are quite different compared to those of the general purpose detectors (GPDs), ATLAS and CMS, at the LHC, and these differences are reflected in the experimental design. The LHCb experiment is a single-arm forward spectrometer that covers the angular region 10 to 250 mrad in the (non-bending) vertical direction and 10 to 300 mrad in the (bending) horizontal direction. The central acceptance is determined by the beampipe. The reduced angular acceptance of LHCb compared to the $4\pi$ acceptance of the GPDs is due to the processes that dominate \bbbar pair production in proton collisions. At the LHC, \bbbar pairs are mainly produced through quark--antiquark annihilation, gluon--gluon fusion and gluon splitting; the Feynman diagrams for these processes are shown in Fig. XX.
%there are references in some theses for this
These production processes lead to the \bbbar pairs being produced at small angles relative to the beampipe. The angular distribution of \bbbar pairs produced is shown in Fig. XX. Therefore, although LHCb has a smaller angular acceptance compared to the GPDs, it accepts approximately 40\% of $b$ quarks produced at the LHC.
\subsection{Tracking}
%What is tracking in general?
- reconstruct electrically charged particles as they travel through the detector
- with the addition of a magnet we can measure the momentum of charged particles as well
- momentum is also necessary to interpret the information taken by the RICH
- usually silicon or MWPC
- measures the electric charge
- tracks can be extrapolated/interpolated into other detectors to link what is recorded there with the tracks of charged particles
%Why is it needed?
- accurate tracking of the flights of charged particles is essential for all physics analyses
- good tracking is needed to get good momentum resolution, which then leads to good mass resolution, which enables successful separation of signal and background events.
- allows the location and separation of production and decay vertices to be measured; this is necessary to determine the mass of decaying particles and also the decay time. Also allows the separation of signal and background events if this SV is precisely known by looking at isolating tracks.
- displaced SVs are a signature of heavy flavour decays (Susan has some references)
- provide accurate spatial measurements of charged particle tracks which allows the calculation of momentum and impact parameters
- LHCb needs excellent mass and proper time resolution, therefore needs great momentum and vertex resolutions. Examples of this could be given, such as the oscillation of the Bs anti-Bs system, or linking it to what I am trying to measure?
- Bs lifetime is about 1ps, therefore it will fly ~1cm before it decays; need good vertex resolution to find this and identify the Bs decay products.
%How does it work in LHCb (~3 parts of the detector, and then reconstruction), ie what is it made of and briefly how do they fit together.
- vertex locator (VELO) surrounds the interaction point.
- The tracking system in LHCb consists of the VELO, TT, T1-3 and the magnet. The VELO is a silicon detector designed to reconstruct PVs and SVs, the magnet bends charged particles allowing the momentum of the particles to be determined, and the tracking stations TT and T1-3 record the bent tracks. TT is just before the magnet; the T1-3 stations consist of an inner tracker made of silicon and an outer tracker made of straw tubes.
- Reconstruction algorithms join up hits in the different tracking detectors to reconstruct events.
- precision tracking is done by the VELO and the full trajectories are reconstructed using information from the VELO, TT, T1-3, along with momentum resolution.
- hits are connected in the different detectors during track reconstruction
%Special considerations - minimise material in the detector?
- The momentum resolution is directly affected by multiple scattering in the detectors, therefore tracking detectors are designed to minimise the material in the detector acceptance.
\subsubsection{VELO}
%What is the VELO, what is it designed to do and why is that important for physics?
- designed to track charged particles and to reconstruct decay vertices, including secondary vertices of B mesons
- the VELO is situated at the interaction point where protons collide
- identify production vertices, which is why the interaction point is within the VELO.
- identifies the displacement of the SV from the PV - characteristic of B hadron decays.
- B hadrons travel about 1cm before decaying because they have a long lifetime, ~1.5ps for the B meson. The VELO must provide track resolution of the order of 1mm. If we have good vertex resolution this property of B mesons can be used to get rid of background events
- Tracks from B hadron decays usually have large impact parameters compared to the PV; this is useful to remove background decays
%How is the VELO made and how does this achieve the physics goals.
The VELO is made of 2 halves which contain …
> There are 21 stations in the VELO on each side arranged along the beam direction
> VELO strips are 100 micrometers in width
> The VELO is silicon micro-strip detectors and they alternate as to whether they cover radial or azimuthal coordinates.
Each half of the VELO is made to overlap to stop edge effects
> each half of the VELO is the same as the other but they are displaced by 150mm in the z direction, which allows them to overlap
- the VELO extends upstream of the interaction point so that PVs can be correctly reconstructed
> Each half of the VELO is in an aluminium box; since the VELO sits in the LHC vacuum, the box protects the vacuum from potential gas leaks from the modules and also keeps the VELO electronics free of the RF EM fields generated by the beam. These are called RF boxes. Where the boxes meet they are corrugated to let them come together well; this part is called the RF foil and is part of the material budget of the VELO
> material budget comes to 17.5\% of a radiation length.
> - The VELO can make the best vertex locations and impact parameters if it is very close to the beam. The VELO can retract: 8mm from the beam during stable beams, 3cm during injection.
> There are R-sensors and phi-sensors that alternate. The position on the sensors and z-distance of the sensor shows where the particle went
> R sensors have concentric rings from the center but the distance between strips (pitch) gradually gets larger from the center outwards (Nice picture in Ed's thesis)
> R sensor strips are split into four 45 degree regions so have low capacitance and occupancy; the closest is 8.2mm from the nominal beam axis.
> phi sensors have an inner and outer region where the inner strips are shorter and the 2 sections are skewed wrt each other (Nice picture in Ed's thesis)
> phi sensors are split in two due to occupancy and resolution reasons
> skewed phi strips help with pattern recognition
- cylindrical geometry of the VELO allows for fast reconstruction used for triggering events with b-hadrons
- z axis is chosen to keep full LHCb acceptance and to ensure that any track within the LHCb angular acceptance should pass through at least 3 sensors
- the VELO is in a vacuum to reduce the material seen by particles
- there is a gap at the center of the VELO sensors to allow the beam to pass through the gap
- the VELO sides overlap during stable beams so there is feedback about the relative positions of each side, which helps with detector alignment
%Additional VELO facts.
- the VELO also computes the luminosity using van der Meer scans
- the VELO is used to locate tracks/vertices; in the VELO the magnet's field is very low, so straight track trajectories made in the VELO are used as seeds in track reconstruction; it is also used as a pileup detector.
> 2 additional stations upstream (away from the rest of the detector) of the interaction point are used in the pile-up veto system to remove events that contain too many proton interactions
> the pile-up part is 2 segments that detect the number of primary interactions and also the track multiplicity in one bunch crossing; for LHCb we want about 1 interaction per event (I think)
%Performance of the VELO only?
> vertex resolution in the transverse plane 10-20 microns, in the z direction 50-100 microns, depending on the number of tracks in the vertex.
It can determine impact parameters to a resolution of 13 microns for high momentum tracks; the VELO forms part of the tracking system
> best resolution is 4 micrometers, which allows a lifetime measurement of 50 fs (there's a reference in Ed's thesis)
> In a typical event at LHCb 30-35 tracks per interaction vertex are reconstructed, which leads to a PV resolution of 12 micrometers in the transverse plane and 65 micrometers along the beam axis.
%Images - picture of the modules - picture of the VELO schematic - red and blue picture - layout of the r and phi sensors - picture of how the foils fit together - plots of performance
%references - Harry has references for the sub-detectors - he references the TDRs, he also has references for LHCb in the start of the chapter and his plots all have references too. Simon's is similar.
\subsubsection{Magnet}
%What is it, what is it designed to do and why is that important for physics?
%How is it made and how does it achieve the physics goals?
\subsubsection{Tracking Stations}
%Or split up the TT and T1-3. Tracker Turicensis and T1-3 that are split as the inner tracker (silicon) and the outer tracker (straw tubes).
\subsubsection{Track reconstruction and performance}
\subsection{Particle Identification}
\subsubsection{RICH}
\subsubsection{Calorimeters}
\subsubsection{Muon Stations}
\subsubsection{Combined PID information and performance}
\subsection{Trigger and event filtering}
\subsection{MC and Software}
\subsection{LHCb data collected so far}
{ "alphanum_fraction": 0.8027924725, "avg_line_length": 96.0916666667, "ext": "tex", "hexsha": "75fe808d692213a3237f32f1e83c41a020dd843f", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-02-19T16:03:23.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-19T16:03:23.000Z", "max_forks_repo_head_hexsha": "f930fcb2d9682beae829f11fe7c7fce4caeaee33", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "haevans/Thesis", "max_forks_repo_path": "LHC_LHCb/LHCb_draft.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f930fcb2d9682beae829f11fe7c7fce4caeaee33", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "haevans/Thesis", "max_issues_repo_path": "LHC_LHCb/LHCb_draft.tex", "max_line_length": 969, "max_stars_count": null, "max_stars_repo_head_hexsha": "f930fcb2d9682beae829f11fe7c7fce4caeaee33", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "haevans/Thesis", "max_stars_repo_path": "LHC_LHCb/LHCb_draft.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2545, "size": 11531 }
% arara: pdflatex: {shell: yes, files: [latexindent]} \subsubsection{\texttt{afterHeading} code blocks}\label{subsubsec-headings-no-add-indent-rules} Let's use the example \cref{lst:headings2} for demonstration throughout this \namecref{subsubsec-headings-no-add-indent-rules}. As discussed on \cpageref{lst:headings1}, by default \texttt{latexindent.pl} will not add indentation after headings. \cmhlistingsfromfile{demonstrations/headings2.tex}{\texttt{headings2.tex}}{lst:headings2} On using the YAML file in \cref{lst:headings3yaml} by running the command \begin{commandshell} latexindent.pl headings2.tex -l headings3.yaml \end{commandshell} we obtain the output in \cref{lst:headings2-mod3}. Note that the argument of \texttt{paragraph} has received (default) indentation, and that the body after the heading statement has received (default) indentation. \begin{minipage}{.45\textwidth} \cmhlistingsfromfile{demonstrations/headings2-mod3.tex}{\texttt{headings2.tex} using \cref{lst:headings3yaml}}{lst:headings2-mod3} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \cmhlistingsfromfile[style=yaml-LST]{demonstrations/headings3.yaml}[yaml-TCB]{\texttt{headings3.yaml}}{lst:headings3yaml} \end{minipage} If we specify \texttt{noAdditionalIndent} as in \cref{lst:headings4yaml} and run the command \begin{commandshell} latexindent.pl headings2.tex -l headings4.yaml \end{commandshell} then we receive the output in \cref{lst:headings2-mod4}. Note that the arguments \emph{and} the body after the heading of \texttt{paragraph} has received no additional indentation, because we have specified \texttt{noAdditionalIndent} in scalar form. \begin{minipage}{.45\textwidth} \cmhlistingsfromfile{demonstrations/headings2-mod4.tex}{\texttt{headings2.tex} using \cref{lst:headings4yaml}}{lst:headings2-mod4} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \cmhlistingsfromfile[style=yaml-LST]{demonstrations/headings4.yaml}[yaml-TCB]{\texttt{headings4.yaml}}{lst:headings4yaml} \end{minipage} Similarly, if we specify \texttt{indentRules} as in \cref{lst:headings5yaml} and run analogous commands to those above, we receive the output in \cref{lst:headings2-mod5}; note that the \emph{body}, \emph{mandatory argument} and content \emph{after the heading} of \texttt{paragraph} have \emph{all} received three tabs worth of indentation. \begin{minipage}{.55\textwidth} \cmhlistingsfromfile{demonstrations/headings2-mod5.tex}{\texttt{headings2.tex} using \cref{lst:headings5yaml}}{lst:headings2-mod5} \end{minipage}% \hfill \begin{minipage}{.42\textwidth} \cmhlistingsfromfile[style=yaml-LST]{demonstrations/headings5.yaml}[yaml-TCB]{\texttt{headings5.yaml}}{lst:headings5yaml} \end{minipage} We may, instead, specify \texttt{noAdditionalIndent} in `field' form, as in \cref{lst:headings6yaml} which gives the output in \cref{lst:headings2-mod6}. \begin{minipage}{.45\textwidth} \cmhlistingsfromfile{demonstrations/headings2-mod6.tex}{\texttt{headings2.tex} using \cref{lst:headings6yaml}}{lst:headings2-mod6} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \cmhlistingsfromfile[style=yaml-LST]{demonstrations/headings6.yaml}[yaml-TCB]{\texttt{headings6.yaml}}{lst:headings6yaml} \end{minipage} Analogously, we may specify \texttt{indentRules} as in \cref{lst:headings7yaml} which gives the output in \cref{lst:headings2-mod7}; note that mandatory argument text has only received a single space of indentation, while the body after the heading has received three tabs worth of indentation. 
\begin{minipage}{.45\textwidth} \cmhlistingsfromfile{demonstrations/headings2-mod7.tex}{\texttt{headings2.tex} using \cref{lst:headings7yaml}}{lst:headings2-mod7} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \cmhlistingsfromfile[style=yaml-LST]{demonstrations/headings7.yaml}[yaml-TCB]{\texttt{headings7.yaml}}{lst:headings7yaml} \end{minipage} Finally, let's consider \texttt{noAdditionalIndentGlobal} and \texttt{indentRulesGlobal} shown in \cref{lst:headings8yaml,lst:headings9yaml} respectively, with respective output in \cref{lst:headings2-mod8,lst:headings2-mod9}. Note that in \cref{lst:headings8yaml} the \emph{mandatory argument} of \texttt{paragraph} has received a (default) tab's worth of indentation, while the body after the heading has received \emph{no additional indentation}. Similarly, in \cref{lst:headings2-mod9}, the \emph{argument} has received both a (default) tab plus two spaces of indentation (from the global rule specified in \cref{lst:headings9yaml}), and the remaining body after \texttt{paragraph} has received just two spaces of indentation. \begin{minipage}{.45\textwidth} \cmhlistingsfromfile{demonstrations/headings2-mod8.tex}{\texttt{headings2.tex} using \cref{lst:headings8yaml}}{lst:headings2-mod8} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \cmhlistingsfromfile[style=yaml-LST]{demonstrations/headings8.yaml}[yaml-TCB]{\texttt{headings8.yaml}}{lst:headings8yaml} \end{minipage} \begin{minipage}{.45\textwidth} \cmhlistingsfromfile{demonstrations/headings2-mod9.tex}{\texttt{headings2.tex} using \cref{lst:headings9yaml}}{lst:headings2-mod9} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \cmhlistingsfromfile[style=yaml-LST]{demonstrations/headings9.yaml}[yaml-TCB]{\texttt{headings9.yaml}}{lst:headings9yaml} \end{minipage}
{ "alphanum_fraction": 0.78099631, "avg_line_length": 57.6595744681, "ext": "tex", "hexsha": "7c06266697883327bda372c3726ba622b9bfa624", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d6b63002cdcecf291e2abc7a399e0d7af4bd9038", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "digorithm/latex-linter", "max_forks_repo_path": "project/server/dependencies/latexindent.pl-master/documentation/subsubsec-headings.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d6b63002cdcecf291e2abc7a399e0d7af4bd9038", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "digorithm/latex-linter", "max_issues_repo_path": "project/server/dependencies/latexindent.pl-master/documentation/subsubsec-headings.tex", "max_line_length": 154, "max_stars_count": null, "max_stars_repo_head_hexsha": "d6b63002cdcecf291e2abc7a399e0d7af4bd9038", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "digorithm/latex-linter", "max_stars_repo_path": "project/server/dependencies/latexindent.pl-master/documentation/subsubsec-headings.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1838, "size": 5420 }
\subsection{Cyder2.0: Goals and Current Work}
\begin{frame} \frametitle{Cyder2.0: Goals and Current Work}
\begin{block}{Goals for Cyder2.0}
\begin{enumerate}
\item Create a Waste Conditioning Facility archetype and a Simple Heat-Limited Repository archetype that can be optimized to load waste packages into the repository based on thermal constraints.
\item Add both archetypes to the Predicting the Past \Cyclus simulation to simulate loading U.S. nuclear waste packages into a final waste repository.
\end{enumerate}
\end{block}
\begin{block}{Current Work} Creating the Conditioning Facility archetype \end{block}
\begin{block}{How the Conditioning Facility archetype works} The user specifies the commodity and material resource object that the archetype accepts and produces. Based on that, the archetype conditions the input and puts it into the form of the user-specified output. \end{block}
\end{frame}
\subsection{Cyder2.0: Expectations}
\begin{frame} \frametitle{Cyder2.0: Expectations}
\begin{block}{When will it be ready?} Waste Conditioning Archetype: August 2018 \\ Simple Heat-Limited Repository Archetype: October 2018 \end{block}
\begin{block}{Who is working on this?} Gwendolyn Chee, UIUC Graduate Student Researcher \end{block}
\begin{block}{Where is Cyder?} Cyder can be found at \href{https://github.com/arfc/cyder}{arfc/cyder} \end{block}
\end{frame}
{ "alphanum_fraction": 0.7801570307, "avg_line_length": 34.1707317073, "ext": "tex", "hexsha": "bf327b45edb1dcf0690174d923b5eeea19a46461", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8f5c1b02669a6e68394e241250baf5339bc1f297", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "arfc/2018-07-27-cyclus-call", "max_forks_repo_path": "cyder.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "8f5c1b02669a6e68394e241250baf5339bc1f297", "max_issues_repo_issues_event_max_datetime": "2018-07-27T00:50:48.000Z", "max_issues_repo_issues_event_min_datetime": "2018-07-25T14:48:24.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "arfc/2018-07-27-cyclus-call", "max_issues_repo_path": "cyder.tex", "max_line_length": 208, "max_stars_count": null, "max_stars_repo_head_hexsha": "8f5c1b02669a6e68394e241250baf5339bc1f297", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "arfc/2018-07-27-cyclus-call", "max_stars_repo_path": "cyder.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 364, "size": 1401 }
\section{Properties of the Basic Model} \label{sec:props} It is amazing how much the semantics of \textsection\ref{sec:model} ``gets right'' out of the box, including value range analysis, internal reads, and SC access, all of which can be complex in other models. In this section, we walk through several litmus tests, valid rewrites and invalid rewrites. The examples show that \ref{5a}--\ref{5d} and \ref{rf3}--\ref{rf4} are understandable as \emph{general principles}. The interaction of these principles is limited to a single, global, pomset order. We discuss tweaks to the semantics in \textsection\ref{sec:refine}. \subsection{Litmus Tests} \label{sec:litmus} \citet{PughWebsite} developed a set of litmus tests for the java memory model. Our model gives the expected result for all but cases 16, 19 and 20 (unrolling loops): we discuss \ref{TC16} below; \textsc{tc19} and \textsc{tc20} involve a thread join operation, which is not expressible in our language. Our model also agrees with the \oota{} examples of \citet[\textsection 4]{DBLP:conf/esop/BattyMNPS15} and the ``surprising and controversial behaviors'' of \citet[\textsection 8]{Manson:2005:JMM:1047659.1040336}. %See \cite{DBLP:conf/esop/PaviottiCPWOB20} for an exhaustive list of litmus tests. \myparagraph{Buffering} Consider the \emph{store buffering} and \emph{load buffering} litmus \labeltext[SB]{tests}{SB}: \begin{align*} \taglabel{SB/LB} \begin{gathered} \PW{x}{0}\SEMI \PW{y}{0}\SEMI ( \PW{x}{1}\SEMI\PR{y}{\aReg} \PAR \PW{y}{1}\SEMI \PR{x}{\aReg}) \\[-1.5ex] \hbox{\begin{tikzinline}[node distance=.9em] \event{wx0}{\DW{x}{0}}{} \event{wy0}{\DW{y}{0}}{right=of wx0} \event{wx}{\DW{x}{1}}{right=3em of wy0} \event{ry}{\DR{y}{0}}{right=of wx} \event{wy}{\DW{y}{1}}{right=3em of ry} \event{rx}{\DR{x}{0}}{right=of wy} \rf[out=15,in=165]{wy0}{ry} \rf[out=10,in=170]{wx0}{rx} \wk{ry}{wy} \wk[out=-165,in=-15]{rx}{wx} \end{tikzinline}} \end{gathered} && \begin{gathered} \PR{y}{\aReg}\SEMI \PW{x}{1} \PAR \PR{x}{\aReg}\SEMI \PW{y}{1} \\ \hbox{\begin{tikzinline}[node distance=.9em] \event{ry}{\DR{y}{1}}{} \event{wx}{\DW{x}{1}}{right=of ry} \event{rx}{\DR{x}{1}}{right=3em of wx} \event{wy}{\DW{y}{1}}{right=of rx} \rf{wx}{rx} \rf[out=-165,in=-15]{wy}{ry} \end{tikzinline}} \end{gathered} \end{align*} Because there are no intra-thread dependencies, the desired outcomes are allowed, as shown. \myparagraph{Publication} \ref{rf3}--\ref{rf4} and \ref{5b}--\ref{5c} ensure correct publication, prohibiting stale reads: \begin{gather} \taglabel{Pub1} \begin{gathered} \PW{x}{0}\SEMI %\PW{y}{0}\SEMI \PW{x}{1}\SEMI \PW[\mRA]{y}{1} \PAR \PR[\mRA]{y}{r}\SEMI \PR{x}{s} \\[-.4ex] \nonumber \hbox{\begin{tikzinline}[node distance=1.5em] \event{wx0}{\DW{x}{0}}{} \event{wx1}{\DW{x}{1}}{right=of wx0} \event{wy1}{\DWRel{y}{1}}{right=of wx1} \event{ry1}{\DRAcq{y}{1}}{right=2.5em of wy1} \event{rx0}{\DR{x}{0}}{right=of ry1} \sync{wx1}{wy1} \sync{ry1}{rx0} \rf{wy1}{ry1} \wk{wx0}{wx1} \end{tikzinline}} \end{gathered} \end{gather} This pomset is disallowed, since $(\DR x0)$ fails to satisfy \ref{rf4}: $(\DW x0) \lt (\DW x1) \lt (\DR x0)$. Attempting to satisfy this requirement, one might order $(\DR x0)$ before $(\DW x1)$, but this would create a cycle. \myparagraph{Coherence} Our model of coherence does not correspond to either Java or C11. We have chosen the model to validate \ref{CSE} (unlike C11 relaxed atomics) and the local \drfsc{} theorem (unlike Java). Since reads are not ordered by \ref{5b}, we {allow} the following unintuitive behavior. 
C11 includes read-read coherence between relaxed atomics in order to forbid this: \begin{gather*} \taglabel{Co2} \begin{gathered} \PW{x}{1}\SEMI \PW{x}{2} \PAR \PW{y}{x} \SEMI \PW{z}{x} \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{a}{\DW{x}{1}}{} \event{b}{\DW{x}{2}}{right=of a} \wk{a}{b} \event{c}{\DR{x}{2}}{right=3em of b} \event{d}{\DW{y}{2}}{right=of c} \po{c}{d} \event{e}{\DR{x}{1}}{right=of d} \event{f}{\DW{z}{1}}{right=of e} \po{e}{f} \rf{b}{c} \rf[out=10,in=170]{a}{e} \end{tikzinline}} \end{gathered} \end{gather*} Here, the reader sees $2$ then $1$, although they are written in the reverse order. This behavior is allowed by Java in order to validate \ref{CSE} without requiring aliasing analysis. However, our model is more coherent than Java, which permits the following: \begin{gather*} \taglabel{TC16} \begin{gathered} \PR{x}{r}\SEMI \PW{x}{1} \PAR \PR{x}{s}\SEMI \PW{x}{2} \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{a1}{\DR{x}{2}}{} \event{a2}{\DW{x}{1}}{right=of a1} \wk{a1}{a2} \event{b1}{\DR{x}{1}}{right=3em of a2} \event{b2}{\DW{x}{2}}{right=of b1} \wk{b1}{b2} \rf{a2}{b1} \rf[out=-165,in=-15]{b2}{a1} \end{tikzinline}} \end{gathered} \end{gather*} We also forbid the \labeltext{following}{page:coherence2}, which Java allows: \begin{gather*} \taglabel{Co3} \begin{gathered} \PW{x}{1}\SEMI \PW[\mRA]{y}{1} \PAR \PW{x}{2}\SEMI \PW[\mRA]{z}{1} \PAR \PR[\mRA]{z}{r} \SEMI \PR[\mRA]{y}{r} \SEMI \PR{x}{r} \SEMI \PR{x}{r} \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{a1}{\DW{x}{1}}{} \event{a2}{\DW[\mRA]{y}{1}}{right=of a1} \sync{a1}{a2} \event{b1}{\DW{x}{2}}{right=3em of a2} \event{b2}{\DW[\mRA]{\,z}{1}}{right=of b1} \sync{b1}{b2} \event{c1}{\DR[\mRA]{\,z}{1}}{right=3em of b2} \event{c2}{\DR[\mRA]{y}{1}}{right=of c1} \event{c3}{\DR{x}{2}}{right=of c2} \event{c4}{\DR{x}{1}}{right=of c3} \sync{c1}{c2} \sync{c2}{c3} \sync[out=20,in=160]{c2}{c4} \rf[out=8,in=172]{a2}{c2} \rf{b2}{c1} \wk[out=19,in=161]{a1}{b1} \wk[out=-172,in=-8]{c4}{b1} \end{tikzinline}} \end{gathered} \end{gather*} The order from $(\DR{x}{1})$ to $(\DW{x}{2})$ is required to fulfill $(\DR{x}{1})$. The outcome is disallowed due to the cycle. %by fulfillment. If this outcome were allowed, then racing writes would be visible, even after a full synchronization; this would invalidate local reasoning about data races (\textsection\ref{sec:sc}). \myparagraph{MCA} We present a few examples that are hallmarks of \mca{} architectures. 
\begin{scope} \allowdisplaybreaks \begin{gather*} \taglabel{MCA1} \begin{gathered} \IF{z}\THEN \PW{x}{0} \FI \SEMI \PW{x}{1} {\PAR} \IF{x}\THEN \PW{y}{0} \FI \SEMI \PW{y}{1} {\PAR} \IF{y}\THEN \PW{z}{0} \FI \SEMI \PW{z}{1} \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{a1}{\DR{z}{1}}{} \event{a2}{\DW{x}{0}}{right=of a1} \po{a1}{a2} \event{a3}{\DW{x}{1}}{right=of a2} \wk{a2}{a3} \event{b1}{\DR{x}{1}}{right=3em of a3} \event{b2}{\DW{y}{0}}{right=of b1} \po{b1}{b2} \event{b3}{\DW{y}{1}}{right=of b2} \wk{b2}{b3} \event{c1}{\DR{y}{1}}{right=3em of b3} \event{c2}{\DW{z}{0}}{right=of c1} \po{c1}{c2} \event{c3}{\DW{z}{1}}{right=of c2} \wk{c2}{c3} \rf{a3}{b1} \rf{b3}{c1} \rf[out=173,in=7]{c3}{a1} \end{tikzinline}} \end{gathered} \\[1ex] \taglabel{MCA2} \begin{gathered} \PW{x}{0}\SEMI \PW{x}{1} \PAR \PW{y}{x} \PAR \PR[\mRA]{y}{r} \SEMI \PR{x}{s} \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{wx0}{\DW{x}{0}}{} \event{wx1}{\DW{x}{1}}{right=of wx0} \wk{wx0}{wx1} \event{rx1}{\DR{x}{1}}{right=3em of wx1} \event{wy1}{\DW{y}{1}}{right=of rx1} \po{rx1}{wy1} \event{ry1}{\DRAcq{y}{1}}{right=3em of wy1} \event{rx0}{\DR{x}{0}}{right=of ry1} \rf{wx1}{rx1} \rf{wy1}{ry1} \sync{ry1}{rx0} \wk[out=170,in=10]{rx0}{wx1} \end{tikzinline}} \end{gathered} \end{gather*} \end{scope} These candidate executions are invalid, due to cycles. \ref{MCA1} is an example of \emph{write subsumption} \cite[\textsection 3]{DBLP:journals/pacmpl/PulteFDFSS18}. In \ref{MCA2}, $(\DW{x}{1})$ is delivered to the second thread, but not the third; this is similar to the well know \iriw{} (Independent Reads of Independent Writes) litmus test, which is also disallowed by \mca{} architectures if the reads within each thread are ordered. If $y^\mRA$ is changed to $y^\mRLX$ in \ref{MCA2}, then there would be no order from $(\DR[\mRLX]{y}{1})$ to $(\DR{x}{0})$, and the execution would be allowed. Since read-read dependencies do not appear in pomset order, the execution would still be allowed if a control or address dependency were to be introduced between the reads. See example \ref{addr2} (\textsection\ref{sec:limits}) for further discussion. \myparagraph{Internal Reads and Value Range Analysis} The JMM causality test cases \citep{PughWebsite} are justified via compiler analysis, possibly in collusion with the scheduler: If every observed value can be shown to satisfy a precondition, then the precondition can be dropped. For example, \ref{TC1} determines that the following execution should be allowed, as it is in our model: \begin{gather*} \taglabel{TC1} \begin{gathered} \PW{x}{0} \SEMI (\PR{x}{r}\SEMI\IF{r\geq0}\THEN \PW{y}{1} \FI \PAR \PW{x}{y}) \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{wx0}{\DW{x}{0}}{} \event{rx1}{\DR{x}{1}}{right=3em of wx0} \event{wy1}{0\geq0\mid\DW{y}{1}}{right=of rx1} \event{ry1}{\DR{y}{1}}{right=3em of wy1} \event{wx1}{\DW{x}{1}}{right=of ry1} \po{ry1}{wx1} \rf[out=-168,in=-12]{wx1}{rx1} \rf{wy1}{ry1} \wk[out=10,in=170]{wx0}{wx1} \wk{wx0}{rx1} \end{tikzinline}} \end{gathered} \end{gather*} In this example, $(\DW{x}{0})$ ``fulfills'' the read of $x$ that is used in the guard of the conditional. This is possible when prefixing $(\DR{x}{1})$ performs the substitution $[x/r]$, but does not weaken the resulting precondition $(x\geq0\mid\DW{y}{1})$. Subsequently prefixing $(\DW{x}{0})$ substitutes $[0/x]$, resulting in the tautological precondition $(0\geq0\mid\DW{y}{1})$. Note that the execution does not have an action $(\DR{x}{0})$. 
Our semantics is robust with respect to the introduction of concurrent writes, as in \ref{TC9}: \begin{gather*} \taglabel{TC9} \begin{gathered} \PW{x}{0} \SEMI (\PR{x}{r}\SEMI\IF{r\geq0}\THEN \PW{y}{1} \FI \PAR \PW{x}{y} \PAR \PW{x}{-2}) \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{wx0}{\DW{x}{0}}{} \event{rx1}{\DR{x}{1}}{right=3em of wx0} \event{wy1}{0\geq0\mid\DW{y}{1}}{right=of rx1} \event{ry1}{\DR{y}{1}}{right=3em of wy1} \event{wx1}{\DW{x}{1}}{right=of ry1} \event{wx2}{\DW{x}{{-2}}}{right=3em of wx1} \po{ry1}{wx1} \rf[out=-168,in=-12]{wx1}{rx1} \rf{wy1}{ry1} \wk[out=10,in=170]{wx0}{wx1} \wk{wx0}{rx1} \wk{wx1}{wx2} \end{tikzinline}} \end{gathered} \end{gather*} The calculation of this pomset is unchanged from \ref{TC1}. Examples such as \ref{TC9} present substantial difficulties in other models. When thought of in terms of compiler optimizations, \ref{TC9} is justified by global value analysis in collusion with the thread scheduler. This execution is disallowed by our event structure model \cite{DBLP:conf/lics/JeffreyR16}. It is allowed by \citet{Pichon-Pharabod:2016:CSR:2837614.2837616}, at the cost of introducing \emph{dead reads}. The reasoning for \ref{TC2} is similar, but in this case no value is necessary to satisfy the precondition: \begin{gather*} \taglabel{TC2} \begin{gathered} \PR{x}{r}\SEMI \PR{x}{s}\SEMI \IF{r{=}s}\THEN \PW{y}{1}\FI \PAR \PW{x}{y} \\[-1ex] \nonumber \hbox{\begin{tikzinline}[node distance=1.5em] \event{a1}{\DR{x}{1}}{} \event{a2}{\DR{x}{1}}{right=of a1} \event{a3}{(x{=}x)\land(1{=}1)\mid\DW{y}{1}}{right=of a2} \event{b1}{\DR{y}{1}}{right=3em of a3} \event{b2}{\DW{x}{1}}{right=of b1} \rf{a3}{b1} \po{b1}{b2} \rf[out=169,in=11]{b2}{a2} \rf[out=169,in=11]{b2}{a1} \end{tikzinline}} \end{gathered} \end{gather*} Note that in \begin{math} \sem{\PR{x}{s}\SEMI \IF{r{=}s}\THEN \PW{y}{1}\FI}, \end{math} the precondition on $(\DW{y}{1})$ must imply $(r{=}x \land r{=}1)$. The first is imposed by \ref{5a}, the second by \ref{4c}, ensuring that the two reads see the same value. Using \armeight{} terminology, these executions involve \emph{internal reads}, which are fulfilled by a sequentially preceding write. Read actions always generate an event that must be fulfilled, and therefore cannot be ignored, even if they are unused. This fact prevents internal reads from ignoring concurrent blocking writes. \begin{gather*} \taglabel{Internal1} \begin{gathered} \PW{x}{1} \SEMI \PW[\mRA]{a}{1} \SEMI \IF{z^\mRA}\THEN \PW{y}{x} \FI \PAR \IF{a^\mRA}\THEN \PW{x}{2}\SEMI \PW[\mRA]{z}{1} \FI \\ \hbox{\begin{tikzinline}[node distance=1.2em] \event{a1}{\DW{x}{1}}{} \event{a2}{\DWRel{a}{1}}{right=of a1} \sync{a1}{a2} \event{b3}{\DRAcq{a}{1}}{below right=0em and 3em of a2} \rf{a2}{b3} \event{b4}{\DW{x}{2}}{right=of b3} \sync{b3}{b4} \event{b5}{\DWRel{z}{1}}{right=of b4} \sync{b4}{b5} \event{a6}{\DRAcq{b}{1}}{above right=0em and 3em of b5} \rf{b5}{a6} \event{a7}{\DR{x}{1}}{right=of a6} \sync{a6}{a7} \event{a8}{1{=}1\mid\DW{y}{1}}{right=of a7} \graypo{a7}{a8} \sync[out=-18,in=-162]{a6}{a8} \end{tikzinline}} \end{gathered} \end{gather*} Here $(\DR{x}{1})$ violates \ref{rf4}. The precondition $(1{=}1)$ is imposed by \ref{4c}. The pomset becomes inconsistent if we change $(\DR{x}{1})$ to $(\DR{x}{2})$, since the precondition would change to $(2{=}1)$. Internal reads are notoriously difficult to get right. 
Consider \cite[Ex 3.6]{DBLP:journals/pacmpl/PodkopaevLV19}: \begin{gather*} \taglabel{Internal2} \begin{gathered} \PR{x}{\aReg}\SEMI \PW[\mRA]{y}{1}\SEMI \PR{y}{\bReg}\SEMI \PW{z}{\bReg} \PAR \PW{x}{z} \\[-1ex] \nonumber \hbox{\begin{tikzinline}[node distance=1.5em] \event{a1}{\DR{x}{1}}{} \event{a2}{\DWRel{y}{1}}{right=of a1} \sync{a1}{a2} \event{a3}{\DR{y}{1}}{right=of a2} \event{a4}{1{=}1\mid\DW{z}{1}}{right=of a3} \rf{a2}{a3} \event{b1}{\DR{z}{1}}{right=3em of a4} \event{b2}{\DW{x}{1}}{right=of b1} \po{b1}{b2} \rf{a4}{b1} \rf[out=170,in=10]{b2}{a1} \end{tikzinline}} \end{gathered} \end{gather*} This behavior is allowed in our model, as it is in \armeight. Note that $\sem{\PW{z}{\bReg}}$ includes $(\bReg{=}1\mid \DW{z}{1})$. Prepending a read, $\sem{\PR{y}{\bReg} \SEMI \PW{z}{\bReg}}$ may update the precondition to $(y{=}1\mid \DW{z}{1})$ without introducing order. Further prepending $(\DWRel{y}{1})$ results in $(1{=}1\mid \DW{z}{1})$. Our model drops order into actions that depend on a read that can be fulfilled {internally}, by a prefixed write. This is natural consequence of substitution. The \armeight{} model has to jump through some hoops to ensure that internal reads are handled correctly. \armeight{} takes the symmetric approach: rather than dropping order \emph{out of} an internal read, \armeight{} drops the order \emph{into} it. This difference complicates the proof of correctness for implementing our semantics on \armeight{} (\textsection\ref{sec:arm}). \myparagraph{SC access} \ref{5d} ensures that program order between SC operations is always preserved. Combined with \ref{rf3}--\ref{rf4}, this is sufficient to establish that programs with only SC access have only SC executions; for example, the executions of \ref{SB/LB} are banned when the all actions are $\mSC$. It is also immediate that SC actions can be totally ordered, using any linearization of pomset order. Just as SC access in \armeight{} is simplified by \mca, it is simplified here by the global pomset order. SC access is not as strict as volatile access in Java. For example, our model allows the following, since there is no order from $(\DW[\mSC]{x}{2})$ to $(\DW{y}{1})$---recall that SC writes are \emph{releases}. \begin{gather*} \taglabel{SC1} \begin{gathered} \PR{y}{r}\SEMI \PW[\mSC]{x}{1}\SEMI \PR{x}{s} \PAR \PW[\mSC]{x}{2} \SEMI \PW{y}{1} \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{a}{\DR{y}{1}}{} \event{b}{\DW[\mSC]{x}{1}}{right=of a} \sync{a}{b} \event{bb}{\DR{x}{2}}{right=of b} \wk{b}{bb} \event{d}{\DW[\mSC]{x}{2}}{right=3em of bb} \event{e}{\DW{y}{1}}{right=of d} \rf{d}{bb} \rf[out=-170,in=-10]{e}{a} \wk[in=165,out=15]{b}{d} \end{tikzinline}} \end{gathered} \end{gather*} This execution is disallowed by \citet[\textsection8.2]{Dolan:2018:BDR:3192366.3192421}, preventing them from using \texttt{stlr} to implement volatile writes on \armeight{}. Our implementation strategy does use \texttt{stlr} for SC writes, as is standard. For further discussion, see examples \ref{past} and \ref{future} in \textsection\ref{sec:sc}. 
\citet[\textsection3.1]{DBLP:conf/pldi/WattPPBDFPG20} noticed a similar difficulty in Javascript \cite[\textsection27]{ecma2019}: \begin{gather*} \taglabel{SC2} \begin{gathered} \PW[\mSC]{x}{1} \SEMI \PR[\mSC]{y}{r} \PAR \PW[\mSC]{y}{1} \SEMI \PW[\mSC]{y}{2} \SEMI \PW{x}{2} \SEMI \PR[\mSC]{x}{s} \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{a}{\DW[\mSC]{x}{1}}{} \event{b}{\DR[\mSC]{y}{1}}{right=of a} \event{c}{\DW[\mSC]{y}{1}}{right=3em of b} \event{d}{\DW[\mSC]{y}{2}}{right=of c} \event{e}{\DW{x}{2}}{right=of d} \event{f}{\DR[\mSC]{x}{1}}{right=of e} \sync{a}{b} \sync{c}{d} \sync[out=15,in=165]{d}{f} \rf{c}{b} \rf[out=-8,in=-172]{a}{f} \wk[in=10,out=170]{e}{a} \wk{e}{f} \end{tikzinline}} \end{gathered} \end{gather*} This execution is allowed both by our semantics and by \armeight{} (using \texttt{stlr} for SC writes and \texttt{ldar} for SC reads). However, it is not allowed by Javascript 2019. In Javascript, the rules relating SC and relaxed access are subtle. As result of these interactions, Javascript 2019 fails to satisfy \drfsc{} \cite{DBLP:journals/pacmpl/WattRP19}. The rules are even more complex in C11; see \ref{SC3} and \ref{SC4} in \textsection\ref{sec:variants} for a discussion of SC fences in C11. In our model, only \ref{5d} is required to explain SC access. \subsection{Valid and Invalid Rewrites} \label{sec:valid} When $\sem{\aCmd} \supseteq \sem{\aCmd'}$, we say that $\aCmd'$ is a \emph{valid transformation} of $\aCmd$. In this subsection, we show the validity of specific optimizations. Let $\free(\aCmd)$ be the set of locations and registers that occur in $\aCmd$. The semantics validates many peephole optimizations. Most apply only to relaxed access. \begin{align*} \taglabel{RR} \sem{\PR{\aLoc}{\aReg} \SEMI \PR{\bLoc}{\bReg}\SEMI\aCmd} &= \sem{\PR{\bLoc}{\bReg}\SEMI \PR{\aLoc}{\aReg}\SEMI\aCmd} &&\text{if } \aReg\neq\bReg \\ \taglabel{WW} \sem{\aLoc \GETS \aExp \SEMI \bLoc \GETS \bExp\SEMI\aCmd} &= \sem{\bLoc \GETS \bExp\SEMI \aLoc \GETS \aExp\SEMI\aCmd} &&\text{if } \aLoc\neq\bLoc \\ \taglabel{RW} \sem{\aLoc \GETS \aExp \SEMI \PR{\bLoc}{\bReg} \SEMI\aCmd} &= \sem{\PR{\bLoc}{\bReg} \SEMI\aLoc \GETS \aExp\SEMI\aCmd} &&\text{if } \aLoc\neq\bLoc \textand \bReg\not\in\free(\aExp)%\disjoint{{\free(\aLoc \GETS \aExp)}}{{\free(\PR{\bLoc}{\bReg})}} \end{align*} \ref{5} imposes no order between events in \ref{RR}--\ref{RW}. %Note that \ref{RR} allows aliasing. Using augmentation closure, \ref{5} also validates roach-motel reorderings \cite{SevcikThesis}. 
For example, on read/write pairs: \begin{align*} \tag{\textsc{roach1}}\label{AcqW} \sem{x^\amode \GETS \aExp \SEMI\PR{y}{\bReg} \SEMI\aCmd} &\supseteq \sem{\PR{y}{\bReg} \SEMI x^\amode\GETS \aExp \SEMI \aCmd} &&\text{if } \aLoc\neq\bLoc \textand \bReg\not\in\free(\aExp)%\disjoint{{\free(\aLoc \GETS \aExp)}}{{\free(\PR{\bLoc}{\bReg})}} \\ \tag{\textsc{roach2}}\label{RelW} \sem{x \GETS \aExp \SEMI\PR[\amode]{y}{\bReg} \SEMI\aCmd} &\supseteq \sem{\PR[\amode]{y}{\bReg} \SEMI x\GETS \aExp \SEMI \aCmd} &&\text{if } \aLoc\neq\bLoc \textand \bReg\not\in\free(\aExp)%\disjoint{{\free(\aLoc \GETS \aExp)}}{{\free(\PR{\bLoc}{\bReg})}} \end{align*} Redundant load elimination \eqref{RL} follows from \ref{1}, taking $\bEv\in\Event$, regardless of the access mode: \begin{align*} \taglabel{RL} \sem{\PR[\amode]{\aLoc}{\aReg} \SEMI \PR[\amode]{\aLoc}{\bReg}\SEMI\aCmd} &\supseteq \sem{\PR[\amode]{\aLoc}{\aReg} \SEMI \bReg \GETS \aReg\SEMI\aCmd} \end{align*} Since \ref{5b} does not impose order between reads of the same location, \ref{RR} can allow the possibility that $\aLoc=\bLoc$. As a result, read optimizations are not limited by the power of aliasing analysis. By composing \ref{RR} and \ref{RL}, we validate \ref{CSE}: \begin{align*} \taglabel{CSE} \sem{r_1\GETS \aLoc \SEMI s\GETS \bLoc \SEMI r_2\GETS \aLoc\SEMI\aCmd} \supseteq \sem{r_1\GETS \aLoc \SEMI r_2\GETS r_1\SEMI s\GETS \bLoc \SEMI\aCmd} &&\textif \aReg_2\neq\bReg&&\hbox{} \end{align*} Many laws hold for the conditional, such as dead code elimination \eqref{DC} and code lifting \eqref{CL}: \begin{align*} \taglabel{DC} \sem{\IF{\aExp}\THEN\aCmd\ELSE\bCmd\FI} &= \sem{\aCmd} &&\textif \aExp \text{ is a tautology} \\ \taglabel{CL} \sem{\IF{\aExp}\THEN\aCmd\ELSE\aCmd\FI} &\supseteq \sem{\aCmd} \end{align*} Code lifting also applies to program fragments inside a conditional. For example: \begin{align*} \sem{\IF{\aExp}\THEN x\GETS \bExp \SEMI\aCmd\ELSE x\GETS \bExp \SEMI\bCmd\FI} &\supseteq \sem{x\GETS \bExp \SEMI \IF{\aExp}\THEN\aCmd\ELSE\bCmd\FI} \end{align*} We discuss the inverse of \ref{CL} in \textsection\ref{sec:refine}. As expected, %sequential and parallel composition commutes with conditionals and declarations, and conditionals and declarations commute with each other. For example, we have \emph{scope extrusion}~\cite{Milner:1999:CMS:329902}: \begin{align*} \taglabel{SE} \sem{\aCmd\PAR \VAR\aLoc\SEMI\bCmd} &= \sem{\VAR\aLoc\SEMI(\aCmd\PAR\bCmd)} &&\text{if } \aLoc\not\in\free(\aCmd) \end{align*} \myparagraph{Invalid Rewrites} The definition of location binding does not validate renaming of locations: if $\aLoc\neq\bLoc$ then $\sem{\VAR\bLoc\SEMI\aCmd}\neq\sem{\VAR\aLoc\SEMI\aCmd[\aLoc/\bLoc]}$, even if $\aCmd$ does not mention~$\aLoc$. This is consistent with support for address calculation, which is required by realistic memory allocators. \ref{Internal2} shows that---like most relaxed models---our model fails to validate \emph{thread inlining}. The given execution is impossible if the first thread is split, as in \begin{math} \sem{\PR{x}{\aReg}\SEMI \PW[\mRA]{y}{1}\PAR \PR{y}{\bReg}\SEMI \PW{z}{\bReg} \PAR \PW{x}{z}}. \end{math} The write in the first thread cannot discharge the precondition in the second, now separate. 
Some rewrites are invalid in a concurrent setting, such as relevant read introduction: \begin{displaymath} \sem{\PR{\aLoc}{\aReg} \SEMI \IF{\aReg {\neq} \aReg} \THEN \PW{y}{1} \FI} \not\supseteq \sem{\PR{\aLoc}{\aReg} \SEMI \PR{\aLoc}{\bReg} \SEMI \IF{\aReg {\neq}\bReg} \THEN \PW{y}{1} \FI} \end{displaymath} Observationally, these are distinguished by the context % \begin{math} \hole{} \PAR \PW{x}{1}\PAR \PW{x}{2}. \end{math} Write introduction is also invalid, even when duplicating an existing write: \begin{displaymath} \sem{\PW{\aLoc}{1}} \not\supseteq \sem{\PW{\aLoc}{1} \SEMI \PW{\aLoc}{1}} \end{displaymath} These are distinguished by the context: \begin{math} \hole{} \PAR \PR{x}{r} \SEMI \PW{x}{2} \SEMI \PR{x}{s}\SEMI \IF{\aReg {=} \bReg} \THEN \PW{\cLoc}{1} \FI. \end{math}
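Purely as an illustration, and not as part of the formal development, the read-introduction counterexample can be rendered with C11 atomics: the sketch below is the \emph{transformed} program run against the distinguishing context in which two concurrent threads write $1$ and $2$ to $x$. A concurrent write may land between the two introduced loads, so the outcome $y = 1$ becomes observable, whereas the source program can never write $y$; whether a particular run exhibits this depends on the scheduler and hardware.
\begin{verbatim}
/* Sketch only: the target of the (invalid) read-introduction rewrite,
   run against the distinguishing context  x := 1 || x := 2.          */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

atomic_int x, y;                 /* globals are zero-initialised      */

int transformed(void *arg) {     /* r := x; s := x; if (r != s) y := 1 */
    (void)arg;
    int r = atomic_load_explicit(&x, memory_order_relaxed);
    int s = atomic_load_explicit(&x, memory_order_relaxed); /* introduced */
    if (r != s)
        atomic_store_explicit(&y, 1, memory_order_relaxed);
    return 0;
}

int writer(void *arg) {          /* x := v, for v = 1 or v = 2        */
    atomic_store_explicit(&x, *(int *)arg, memory_order_relaxed);
    return 0;
}

int main(void) {
    int one = 1, two = 2;
    thrd_t a, b, c;
    thrd_create(&a, transformed, NULL);
    thrd_create(&b, writer, &one);
    thrd_create(&c, writer, &two);
    thrd_join(a, NULL); thrd_join(b, NULL); thrd_join(c, NULL);
    /* y == 1 is now possible; the source program never writes y. */
    printf("y = %d\n", atomic_load_explicit(&y, memory_order_relaxed));
    return 0;
}
\end{verbatim}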
{ "alphanum_fraction": 0.6190955583, "avg_line_length": 36.6268436578, "ext": "tex", "hexsha": "27bdfdf6a1890c2ac116c57d749b2883f8b28bfd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fd606fdb6a04685d9bb0bee61a5641e4623b10be", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "chicago-relaxed-memory/memory-model", "max_forks_repo_path": "corrigendum/litmus.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fd606fdb6a04685d9bb0bee61a5641e4623b10be", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "chicago-relaxed-memory/memory-model", "max_issues_repo_path": "corrigendum/litmus.tex", "max_line_length": 120, "max_stars_count": 3, "max_stars_repo_head_hexsha": "fd606fdb6a04685d9bb0bee61a5641e4623b10be", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "chicago-relaxed-memory/memory-model", "max_stars_repo_path": "corrigendum/litmus.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-25T12:46:13.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-13T02:36:22.000Z", "num_tokens": 9424, "size": 24833 }
\documentclass[onecolumn,a4paper,10pt]{IEEEtran} %\documentclass{article} \usepackage{kblocks} %\tikzexternalize[prefix=figs/] % activate tikzpicture exports \usepackage{bm} \usepackage{listings} \usepackage{verbatim} \usepackage{fancyvrb} % extended verbatim environments \usepackage{fancyvrb-ex} \usepackage{units} \usepackage{zi4} %\usepackage{garamondx} %\usepackage[lig=true]{baskervald} %\usepackage{palatino} %\def\rmdefault{zi4} %\def\sfdefault{zi4} \usepackage[defaultsans]{lato} \usepackage{csquotes} \newcommand*{\kblocks}{\relax~\textit{k}\textsc{blocks}} \newcommand*{\Tikz}{ Ti\textit{k}Z } \newcommand*{\TikzPGF}{\relax~{Ti\textit{k}Z/\textsc{pgf}}} \newcommand*{\spacetweak}{\medskip\medskip} \DefineVerbatimEnvironment {cvl}{Verbatim} {formatcom=\color{blue!10!black!90}, numbers=left,numbersep=2mm,gobble=0, frame=lines,rulecolor=\color{gray},framesep=1mm, fontseries=,labelposition=none,fontsize=\normalsize, xrightmargin=1cm, samepage=false} % \newenvironment{apilist}{ \vspace{1ex}}% {\vspace{1em}} % \newcommand{\cvhd}[1]{{ {\subsection{#1}}{\mbox{}\break}\vspace{-5ex} }} % \fvset{formatcom=\color{darkgray}, fontfamily=tt,fontsize=\footnotesize, fontseries=b, frame=single,rulecolor=\color{olive},label=\fbox{A}, numbers=left,numbersep=5pt} %red,green,blue,cyan,magenta,yellow,black,gray,white, %darkgray,lightgray,brown,lime,olive,orange,pink,purple,teal,violet \usepackage[open]{bookmark} \newcommand*{\urlink}[1]{{ {\texttt{\url{#1}}} }} \begin{document} \title{\kblocks~Package} \author{ \IEEEauthorblockN{ \textsc{Oluwasegun~Somefun}~(\textbf{[email protected]}) \\ \footnotesize{\textsc{Department of Computer Engineering, Federal University of Technology, Akure, Nigeria}} } } \markboth{01,~February~2021.~\kblocks~Documentation.~Version~2.0}; \maketitle \section{Introduction} Welcome to the demo documentation of\kblocks. Desiring to typeset control block diagrams in \LaTeX~and dissatisfied with the other \LaTeX~macro packages that can be found online, I thought: \textit{why not write my own macro package for this purpose}. I wish to start with the question, \enquote{What is\kblocks?} The\kblocks~macro package is the product of using\TikzPGF~to directly typeset beautiful control block diagrams and signal flow graphs in my Masters' dissertation and papers directly with \LaTeX. Basically, it just defines a dedicated \enquote{kblock} environment and a number of macro commands to make drawing control block diagrams with\TikzPGF{} more structured and easier. In a sense, when you use\kblocks~you \textit{program} or typeset graphics for control block diagrams, just as you “program” graphics in your document when you use \LaTeX~using\TikzPGF. The powerful options offered by\TikzPGF~often intimidates beginner users not ready to spend careful time learning about\TikzPGF. Like all \LaTeX~packages,\TikzPGF~inherits the steep learning curve of \LaTeX, that is, no \textit{what you see is what you get}. The\kblocks~macro reduces the length of this learning curve, by focusing the graphics theme on control block diagrams only. Fortunately this documentation as it grows and gets to be improved, will come with a number of demos and proper documentation of the\kblocks{} API, which will guide you on creating control block diagrams with\kblocks{} without your having to read the\TikzPGF~manual. My wish is that you do find it useful and helpful. Please, don't forget to share and star the Github repo: \urlink{https://github.com/somefunAgba/kblocks}, if you did. 
I will readily welcome any issues or emails for improvement or suggestion with respect to using\kblocks{} and making it useful for researchers, students and others involved in the applications and field of control theory and signal processing. \pagebreak \section{Demos} \centering \spacetweak \subsection{Ex:A}\spacetweak \begin{SideBySideExample}[label=\fbox{A},xrightmargin=10cm] \begin{kblock} % global ref point \kJumpCS{init} %% blocks \kMarkNodeRight{0.2cm}{0cm}{$r$}{init}{rin} \kPlusPlusMinus{rin}{sb1}{0.2cm} \kTFRight[0.2cm]{sb1}{tfb1}{$\frac{1}{s}$} \kMarkNodeRight{0.2cm}{0cm}{}{tfb1}{ny} \kOutRight[0.2cm]{ny}{yout}{$y$}{0cm} %% links \kLink[]{rin}{sb1} \kLink[$e$]{sb1}{tfb1} \kLinkn[]{tfb1}{ny} \kLinkVHHVBelow[0cm]{$1$}{ny}{sb1}{0cm}{0cm} \kLinkVHHVAbove[0cm]{$1$}{ny}{sb1}{0cm}{0cm} \end{kblock} \end{SideBySideExample} \subsection{Ex:B} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{B},xrightmargin=10cm] \begin{kblock} % global ref point \kJumpCS{init} %% blocks \kMarkNodeRight{0.2cm}{0cm}{$r$}{init}{rin} \kPlusMinusDown{rin}{sb1}{0.2cm} \kTFRight[0.25cm]{sb1}{tfb1}{$G\left( s \right)$} \kTFBelow[0.2cm]{tfb1}{tfb2}{$H\left( s \right)$} \kMarkNodeRight{0.2cm}{0cm}{}{tfb1}{ny} \kOutRight[0.2cm]{ny}{yout}{$y$}{0cm} %% links \kLinkVH[$y$]{ny}{tfb2}{0cm}{0cm}{0cm}{} \kLinkHV[$\hat{y}$]{tfb2}{sb1}{0cm}{0cm}{9}{} \kLink[]{rin}{sb1} \kLink[$e$]{sb1}{tfb1} \kLinkn[]{tfb1}{ny} %% coverings \kCoverRect[blue]{sb1}{1cm}{2cm}{0.5cm}{3cm} \kCoverTextLeft{2cm}{1cm}{covtx}{Closed-loop system}; \end{kblock} \end{SideBySideExample} \subsection{Ex:C} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{C},xrightmargin=10cm] \begin{kblock} % ref \kJumpCS{spt} % top blocks \kMarkNodeRight{0cm}{0cm}{$I^\star$}{spt}{inI} \kPlusMinusDown{inI}{sb1}{1.cm} \kTFRight[0.2cm]{sb1}{tfb1}{$s$} \kTFBelow[0.5cm]{sb1}{tfb2}{$\frac{1}{2}$} \kPlusDownPlusUpL{tfb2}{sb2}{0cm} \kMinusPlusUp{tfb1}{sb3}{0cm} \kTFRight[0cm]{sb3}{tfb3}{$0.2$} \kTFRight[0cm]{tfb3}{tfb4}{$K_3$} \kMarkNodeAbove{0cm}{0cm}{$V_{dc}$}{inI}{inVdc} \kMarkNodeBelow{0cm}{0.3cm}{$V_2$}{inI}{inV2} \kMarkNodeBelow{0cm}{-0.4cm}{$V_3$}{inV2}{inV3} % bottom blocks \kMarkNodeBelow{3cm}{0cm}{$V_1$}{inI}{inV1} \kPlusPlusUpB{tfb4}{sb4}{3cm} \kPlusMinusDown{inV1}{sb5}{0cm} \kTFRight[0cm]{sb5}{tfb5}{$\lambda$} \kOutRight[0cm]{sb4}{outV}{$V_{out}^{\star}$}{0cm} \kMarkNodeBelow{0cm}{0cm}{$V_4$}{inV1}{inV4} % top links \kLinkHV[]{inVdc}{sb3}{0cm}{0cm}{0}{} \kLinkHV[]{inV2}{sb2}{0cm}{0cm}{0}{} \kLinkHV[]{inV3}{sb2}{0cm}{0cm}{0}{} \kLink[]{inI}{sb1} \kLink[]{sb1}{tfb1} \kLink[$I_{\beta}$]{tfb2}{sb1} \kLink[]{tfb1}{sb3} \kLink[]{sb2}{tfb2} \kLink[]{sb3}{tfb3} \kLink[]{tfb3}{tfb4} \kLink[$V_{\alpha}$]{tfb4}{sb4} % bottom links \kLink[]{inV1}{sb5} \kLink[]{sb5}{tfb5} \kLink[]{tfb5}{sb4} \kLinkHV[]{inV4}{sb5}{0cm}{0cm}{0}{} \end{kblock} \end{SideBySideExample} \subsection{Ex:D} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{D},xrightmargin=10cm] \begin{kblock} % ref \kJumpCS{ioref} % blks \kTFRight[0cm]{ioref}{tfb1}{ $\bm{\hat{m}$}\\\textbf{PID} \\\textbf{model}} \kTFRight[1cm]{tfb1}{tfb2}{$\bm{K\left(y_m,y\right)}$ \\\textbf{PID}\\\textbf{controller}} \kTFBelowRight{0.25cm}{0.5cm}{tfb2}{tfb3} {$\bm{P\left(s\right)}$\\\textbf{process}} % links \kInLeft[0cm]{tfb1}{inR}{$r$}{0cm} \kOutDown[0cm]{tfb1}{outU}{$u_m$}{0cm} \kLink[$y_m$]{tfb1}{tfb2} \kLinkHV[$u$]{tfb2}{tfb3}{0cm}{0cm}{1}{} \kLinkHV[$y$]{tfb3}{tfb2}{0cm}{0cm}{4}{} \end{kblock} \end{SideBySideExample} \subsection{Ex:E} \spacetweak\spacetweak 
\begin{SideBySideExample}[label=\fbox{E},xrightmargin=10cm] \begin{kblock} % ref \kJumpCS{refpt} % blks \kTFRight[0cm]{refpt}{tfb1}{ $\bm{\hat{m}$}\\\textbf{closed PID-loop} \\\textbf{model}} \kTFRight[2cm]{tfb1}{tfb2}{\textbf{PID}$\bm{(y_m,y)}$} \kTFBelow[0.25cm]{tfb2}{tfb3} {$\bm{P\left(s\right)}$\\\textbf{process}} % links \kInLeft[0cm]{tfb1}{inR}{$r$}{0cm} \kOutDown[0cm]{tfb1}{outU}{$u_m$}{0cm} \kLink[$y_m$]{tfb1}{tfb2} \kLinkHVHRight[0]{$u$}{tfb2}{tfb3}{0cm}{0cm}{1cm} \kLinkHVHLeft[0.8cm]{$y$}{tfb3}{tfb2}{0cm}{-0.2cm} \end{kblock} \end{SideBySideExample} \subsection{Ex:F} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{F},xrightmargin=10cm] \begin{kblock} % generic coordinate reference points %\kJumpCS[$(0,0)$]{i} \kJumpCS{i} \kJumpCSRight[-0.5cm]{i}{iR}{0cm}{3} \kJumpCSLeft[-0.5cm]{i}{iL}{0cm}{9} \kJumpCSAbove[-0.5cm]{i}{iA}{0cm}{12} \kJumpCSBelow[-0.5cm]{i}{iB}{0cm}{6} % blks \kTFBelow[]{iB}{tfb1}{\kmT{\mathcal{K}(\cdot)}} \kTFBelow[]{tfb1}{tfb2}{\kmT{\mathcal{P}(s)}} % links \kInLeftM[0cm]{tfb1}{inR}{$r$}{0.05cm}{6} \kMarkNodeLeft{0cm}{0cm}{}{tfb2}{ny} \kOutLeft[-0.5cm]{ny}{outY}{$y$}{0cm} \kLinkn[]{ny}{tfb2} \kLinkVH[$y$]{ny}{tfb1}{-0.1cm}{0cm}{2}{} \kLinkHVHRight[0.6cm]{$u$}{tfb1}{tfb2}{0cm}{0cm} % covers \kCoverRect[magenta!5!red]{tfb2} {0.1cm}{0.1cm}{0.3cm}{0.3cm} \kCoverTextBelow{0cm}{0cm}{txt1} {physical system (e.g: a dc motor)}; % \kCoverRect[green!75!blue!80!]{tfb1} {0.1cm}{0.1cm}{0.2cm}{0.2cm} \kCoverTextAbove{0cm}{0cm}{txt2} {computing system (embedded control algorithm)}; \end{kblock} \end{SideBySideExample} \subsection{Ex:G} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{G},xrightmargin=10cm] \centering \begin{kblock} \kJumpCS{R} % blks \kTFRight{R}{tfb1}{$\bm{P}$} \kTFBelow{tfb1}{tfb2}{$\bm{C}$} % links \kInLeftM[0cm]{tfb1}{inW}{$w$}{0.05cm}{2} \kInLeftM[0cm]{tfb2}{inR}{$r$}{-0.05cm}{5} \kOutRight[0cm]{tfb1}{outZ}{$z$}{0.05cm} \kOutRight[0cm]{tfb2}{outV}{$v$}{-0.05cm} \kLinkHVHRight[0.6cm]{$y$}{tfb1}{tfb2}{-0.1cm}{0.1cm} \kLinkHVHLeft[0.6cm]{$u$}{tfb2}{tfb1}{0.1cm}{-0.1cm} % covers \kCoverRect[blue!50!]{tfb1} {0.1cm}{0.1cm}{0.3cm}{0.3cm} \kCoverTextAbove{0cm}{0cm}{txt1}{Physical System}; % \kCoverRect[red]{tfb2} {0.1cm}{0.1cm}{0.3cm}{0.3cm} \kCoverTextBelow{0cm}{0cm}{txt2}{Computing System}; \end{kblock} \end{SideBySideExample} \subsection{Ex:H} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{H},xrightmargin=10cm] % Description: Closed PID-loop \centering \begin{kblock} % global ref point \kJumpCS{SRef} %% blocks \kTFAbove[5cm]{SRef}{tfD}{\kmT{D}} \kMarkNodeLeft{0.1cm}{0cm}{}{tfD}{yin} \kPlusMinusDownPlaceAbove{yin}{S1}{0cm} \kTFAbove[]{tfD}{tfA}{\kmT{A}} \kTFAbove[]{tfA}{tfB}{\kmT{B}} \kPlusPlusMinus{tfA}{S2}{0cm} \kTFRight[0.5cm]{S2}{tfP}{\kmT{\mathcal{P}(s)}} %% other nodes-paths \kMarkNodeRight{0cm}{0cm}{}{tfP}{ycut} \kInLeft[0.1cm]{S1}{rin}{$r$}{0cm} \kOutRight[0.1cm]{ycut}{yout}{$y^*$}{0cm} \kMarkNodeRight{-0.4cm}{0cm}{}{rin}{rcut} %% links \kLink[$e$]{S1}{tfA} \kLinkVH[]{rcut}{tfB}{0cm}{0cm}{0}{} \kLink[]{tfA}{S2} \kLinkHV[]{tfB}{S2}{0cm}{0cm}{0}{} \kLink[$u^*$]{S2}{tfP} \kLinkHV[]{tfD}{S2}{0cm}{0cm}{0}{} \kLinkn[]{tfP}{ycut} \kLinknVHHVBelow[1.2cm]{}{ycut}{yin}{0cm}{0cm} \kLink[]{yin}{tfD} \kLink[]{yin}{S1} \end{kblock} \end{SideBySideExample} \subsection{Ex:I} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{I},xrightmargin=10cm] % Description: Closed PID-loop \centering \begin{kblock} % global ref point \kJumpCS{SRef} %% blocks \kTFAbove[]{SRef}{tfA}{\kmT{A}} \kPlusMinusDownL{tfA}{S1}{0cm} 
\kTFAbove[]{tfA}{tfB}{\kmT{B}} \kPlusPlusMinus{tfA}{S2}{0cm} \kTFRight[0.5cm]{S2}{tfP}{\kmT{\mathcal{P}(s)}} \kTFBelow[0cm]{tfP}{tfD}{\kmT{D}} \kMarkNodeRight{0cm}{0cm}{}{tfP}{ycut} \kMarkNodeBelow{0cm}{0cm}{}{ycut}{yin} \kInLeftM[0.1cm]{S1}{rin}{$r$}{0cm}{0} \kOutRight[0.1cm]{ycut}{yout}{$y^*$}{0cm} \kMarkNodeRight{-0.4cm}{0cm}{}{rin}{rcut} %% links \kLink[$e$]{S1}{tfA} \kLinkVH[]{rcut}{tfB}{0cm}{0cm}{0}{} \kLink[]{tfA}{S2} \kLinkHV[]{tfB}{S2}{0cm}{0cm}{0}{} \kLink[$u^*$]{S2}{tfP} \kLinkHV[]{tfD}{S2}{0cm}{0cm}{0}{} \kLinkn[]{tfP}{ycut} \kLinkn[]{ycut}{yin} \kLinknVHHVBelow[0.5cm]{}{yin}{S1}{0cm}{0cm} \kLink[]{yin}{tfD} \end{kblock} \end{SideBySideExample} \subsection{Ex:J} \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{J},xrightmargin=10cm] % Description: Anon \centering \begin{kblock} % global ref point \kJumpCS{o} %% blocks \kTFAbove[0cm]{o}{tfi}{\kmT{f_i(\cdot)}} \kTFAbove[0.5cm]{tfi}{tfd}{\kmT{f_d(\cdot)}} \kTFAbove[0.5cm]{tfd}{tfp}{\kmT{f_p(\cdot)}} %% links \kInUpM[0cm]{tfp}{inu}{$u$}{0cm}{0} \kOutRight[0.3cm]{tfp}{kp}{$K_p$}{0cm} \kOutRight[0.3cm]{tfi}{ki}{$K_i$}{0cm} \kOutRight[0.3cm]{tfd}{kd}{$K_d$}{0cm} \kInLeftM[0cm]{tfd}{inwn}{$\omega_n$}{0cm}{6} \kInLeftM[0cm]{tfp}{iny}{$y$}{0.15cm}{6} \kInLeftM[0cm]{tfp}{inym}{$y_m$}{-0.15cm}{4} \kMarkNodeLeft{-0.35cm}{0cm}{}{kp}{kpcut} \kLinkVH[$\lambda$]{kpcut}{tfd}{0.2cm}{0cm}{2}{kpcutb} \kLinkVH[]{kpcutb}{tfi}{0.2cm}{0cm}{0}{} \kMarkNodeRight{-0.8cm}{0cm}{}{inwn}{wncut} \kLinkVH[]{wncut}{tfi}{0cm}{0cm}{0}{} \end{kblock} \end{SideBySideExample} \subsection{Ex:K} \spacetweak \begin{SideBySideExample}[label=\fbox{K},xrightmargin=10cm] % DESCRIPTION: CPLMFC-Algorithm \begin{kblock} % global ref point \kJumpCS{SRef} %% blocks % place TF_fts right of global ref. \kTFRight[4cm]{SRef}{TF_fts}{ \kmTw{f_\mathrm{t_s}} } % place TF_mfc at h cm above TF_fts \kTFAbove[0.3cm]{TF_fts}{TF_mfc}{ \kmT{f_\mathrm{MFC}} } \kTFAbove[0.3cm]{TF_mfc}{TF_pid}{ \kmT{f_\mathrm{PID}} } \kTFRight[3cm]{TF_mfc}{TF_sys}{ \kmTw{ \mathcal{P} } } %% nodes and links % mark visible node N1 right of TF_mfc \kMarkNodeRight{0.6cm}{0cm}{}{TF_mfc}{N1} \kMarkNodeBelow{-0.8cm}{0cm}{}{N1}{N2} \kMarkNodeRight{-0.6cm}{0cm}{}{N1}{N3} \kMarkNodeRight{0cm}{}{}{TF_sys}{N4} % extend node-path outwards \kOutRight[0.1cm]{N4}{Y1}{$y^*$}{0cm} \kMarkNodeLeft{0.3cm}{0cm}{}{TF_fts}{N5} \kMarkNodeLeft{0.3cm}{-0.12cm}{}{TF_mfc}{N6} \kInLeft[0.12cm]{TF_pid}{R1}{$r$}{-0.12cm} \kMarkNodeLeft{0cm}{-0.12cm}{}{TF_pid}{N7} % link TF_sys to N4 \kLinkn[]{TF_sys}{N4} \kLink[$u^*$]{N3}{TF_sys} \kLink[]{N1}{TF_mfc} \kLinkHV[]{TF_fts}{N2}{0cm}{0cm}{0}{} % HV link from TF_pid to N1 %\kLinkHV[]{$(TF_pid.east) + (0,0cm)$}{N1}{0cm}{0cm}{0}{} \kLinkHV[]{TF_pid}{N1}{0cm}{0cm}{0}{} % link N1 to N3 %\kLink[]{N1}{N3} % link N2 to N3 \kLink[]{N2}{N3} % VHHV feedback link from N4 to N5 \kLinknVHHVBelow[1.5cm]{}{N4}{N5}{0cm}{0cm} \kLink[]{N5}{TF_fts} \kLink[]{N6}{$(TF_mfc.west) + (0,-0.12cm)$} % arrowless link N5 and N6 \kLinkn[]{N5}{N6} \kLinkVH[]{N6}{TF_pid}{0.12cm}{0cm}{0}{} \kLinkVH[]{N7}{TF_mfc}{0.12cm}{0cm}{0}{} %% vector links % link from inside TF_fts to TF_mfc \kVecLink[$$]{TF_fts}{TF_mfc} % link from inside TF_mfc to TF_pid %\kVecLink[$$]{TF_mfc}{TF_pid} \kVecLink[$$]{$(TF_mfc.north) + (-0.15cm,0cm)$} {$(TF_pid.south) + (-0.15cm,0cm)$} % link from inside TF_pid to TF_mfc \kVecLink[$$]{$(TF_pid.south) + (0.15cm,0cm)$} {$(TF_mfc.north) + (0.15cm,0cm)$} %% cover-sectioning \kCoverRect[gray]{TF_mfc} {1.8cm}{1.8cm}{1.5cm}{1.6cm} \kCoverTextAbove{0cm}{0cm}{CT1}{CPLMFC Algorithm};% \end{kblock} 
\end{SideBySideExample} % \kLinkVH[$\bm{\omega_n}$]{T3}{T1}{0cm}{0cm}{0}{} % % \kVecLinkVH[$\bm{\kappa_{pid}}$]{T2}{T3}{0.1cm}{-0.1cm}{8} % \kVecLinkHV[$\bm{\kappa_{pid,\lambda_{id}}}$]{T3}{T2}{-0.1cm}{0.1cm}{1} % \kVecInUp[-0.3cm]{T3}{TS2}{$\bm{t_s,\tau_l}$}{0.5cm} \subsection{Ex:L}\spacetweak\spacetweak \spacetweak\spacetweak \begin{SideBySideExample}[label=\fbox{L},xrightmargin=10cm] \begin{kblock} \kJumpCS{fspt} % blks \kTFBelow[0.5cm]{fspt}{plt}{\kmT{P(s)}} \kTFBelow[0cm]{plt}{pidcm}{ \textbf{PID closed-loop model}\\ $ \begin{array}{c} \bm{\dot{x}_m=\mathcal{S}({x_m},{r})} \end{array} $ } \kTFBelow[0.33cm]{pidcm}{tscalc} {\kmT{f_{\omega_n}(\cdot)}} \kTFBelow[0.33cm]{tscalc}{fis}{\kmT{f_{x_s}(\cdot)}} \kTFBelow[0.2cm]{fis}{pid}{ \textbf{PID Control Law}\\ \kmT{u = f_{pid}(\cdot)} } \kTFRight[2.5cm]{tscalc}{obs}{ \textbf{State Observer}\\ $ \begin{array}{c} \bm{\hat{\dot{x}}=\mathcal{S}(\hat{x},{r})}\\ \end{array} $ } % links \kInDown[-0.4cm]{pid}{cp}{\kmT{\lambda_p}}{-0.6cm} \kInDown[-0.4cm]{pid}{ci}{\kmT{\lambda_i}}{0.6cm} \kInDown[-0.2cm]{pid}{cd}{\kmT{\lambda_d}}{0cm} \kLinkHVHLeft[2cm]{\kmT{u}}{pid}{plt}{0cm}{0cm}{} \kLinkHV[\kmT{y}]{plt}{obs}{0cm}{0cm}{1}{} \kLinkVH[\kmT{\hat{x}}]{obs}{pid}{-0.1cm}{0cm}{3}{} \kLinkHVHRight[]{\kmT{{x}_m}}{pidcm}{pid}{0cm}{0.1cm} \kLink[$\bm{\omega_n}$]{tscalc}{pidcm} \kLink[$\bm{x_s}$]{fis}{tscalc} \kLink[$\bm{b,c}$]{pid}{fis} \kMarkNodeAbove{-0.5cm}{0cm}{}{tscalc}{mkwn} \kLinkHV[]{mkwn}{pid}{0cm}{1cm}{0cm}{} \kMarkNodeAbove{-0.5cm}{0cm}{}{fis}{mkxts} \kLinkHV[]{mkxts}{pid}{0cm}{0.8cm}{0}{} \kInLeftM[0cm]{pidcm}{rin}{$\bm{r}$}{-0.15cm}{3} \kOutLeft[0cm]{pidcm}{umout}{$\bm{u_m}$}{0.15cm} \kInLeftM[0.2cm]{tscalc}{tsl}{$\bm{t_s,t_l}$}{0cm}{6} \kMarkNodeRight{-0.84cm}{0cm}{}{tsl}{mktsl} \kLinkVH[]{mktsl}{pid}{0.15cm}{0cm}{0}{} \kMarkNodeBelow{0cm}{0cm}{}{obs}{mkxhat} \kLinkHV[]{mkxhat}{obs}{0cm}{-0.5cm}{0}{} \end{kblock} \end{SideBySideExample} \pagebreak %% API \spacetweak\spacetweak \section{\kblocks{} API} \bfseries TODO ... 
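Until the full API reference is written, the following minimal skeleton, distilled from the demos above (compare Ex:A and Ex:B), sketches the typical call sequence of a\kblocks~diagram; the argument meanings are inferred from those examples rather than from a formal specification.
\begin{cvl}
\begin{kblock}
% global reference point
\kJumpCS{init}
% input node, summer and transfer-function block
\kMarkNodeRight{0.2cm}{0cm}{$r$}{init}{rin}
\kPlusPlusMinus{rin}{sb1}{0.2cm}
\kTFRight[0.2cm]{sb1}{tfb1}{$G(s)$}
% output arrow leaving the diagram
\kOutRight[0.2cm]{tfb1}{yout}{$y$}{0cm}
% links between the elements placed above
\kLink[]{rin}{sb1}
\kLink[$e$]{sb1}{tfb1}
\end{kblock}
\end{cvl}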
%\begin{apilist} %\cvhd{Place an invisible Node at origin, as reference point} %\begin{cvl} %\kJumpCS{current coordinate label} %\end{cvl} % %\cvhd{Place Node with variable x-y coordinate shift} %\begin{cvl} %\kMarkNodeLeft{node x distance shift}{node y distance shift} %{node text-label}{from node label}{to node or current node label} % %\kMarkNodeRight{node x distance shift}{node y distance shift} %{node text-label}{from node label}{to node or current node label} % %\kMarkNodeAbove{node x distance shift}{node y distance shift} %{node text-label}{from node label}{to node or current node label} % %\kMarkNodeBelow{node x distance shift}{node y distance shift} %{node text-label}{from node label}{to node or current node label} %\end{cvl} % %\cvhd{Place a Node at a specific coordinate} %\begin{cvl} %\kMarkNode{optional x distance shift}{optional y distance shift} %{node label}{current node coordinate} %\end{cvl} % %\cvhd{Arithmetic Summer Blocks} %\begin{cvl} %\kPlusPlusMinus{from node label}{to current sum node label} %{optional horizontal position shift} %\end{cvl} % %\cvhd{Transfer-Function block} %\begin{cvl} %\kTFRight[optional shift dimension]{from node label} %{to current tf node label}{tf text content} %\end{cvl} % %\cvhd{Scalar Link (arrowed) and Linkn (no arrow)} %\begin{cvl} %\kLink[optional signal label]{from node label}{to node label} % %\kLinkn[optional signal label]{from node label}{to node label} %\end{cvl} % % %\cvhd{Output Link from a node point} %\begin{cvl} %\kOutRight[optional distance shift]{from node label} %{to current node label}{out signal label}{direction shift} %\end{cvl} % % %\cvhd{Scalar Link Full Feedback/FeedForward Vertical (Up or Down) to Horizontal (Right or Left) to Vertical (Up or Down)} %\begin{cvl} %\kLinkVHHVBelow[optional link shift]{unity link label} %{from node{to node}{from node direction shift}{to node direction shift} % %\kLinkVHHVAbove[optional link shift]{unity link label} %{from node}{to node}{from node direction shift}{to node direction shift} %\end{cvl} % % % % %\end{apilist} \spacetweak\spacetweak \end{document}
{ "alphanum_fraction": 0.6911636045, "avg_line_length": 29.214057508, "ext": "tex", "hexsha": "fb99dabb115e626843b6f5b211c390890effe101", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-07-02T13:40:34.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-17T03:12:47.000Z", "max_forks_repo_head_hexsha": "c5f8d6e7484c9dbcb5edce7e72e8746a96dc8a97", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "somefunAgba/kblocks", "max_forks_repo_path": "kblocks-doc.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c5f8d6e7484c9dbcb5edce7e72e8746a96dc8a97", "max_issues_repo_issues_event_max_datetime": "2019-10-13T18:18:16.000Z", "max_issues_repo_issues_event_min_datetime": "2019-10-13T18:17:44.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "somefunAgba/kblocks", "max_issues_repo_path": "kblocks-doc.tex", "max_line_length": 266, "max_stars_count": 5, "max_stars_repo_head_hexsha": "c5f8d6e7484c9dbcb5edce7e72e8746a96dc8a97", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "somefunAgba/kblocks", "max_stars_repo_path": "kblocks-doc.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-24T09:23:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-13T18:21:37.000Z", "num_tokens": 7808, "size": 18288 }
\documentclass[UKenglish, aspectratio = 169]{beamer} \usetheme{OsloMet} \usepackage{style} \author[Hansen \& Helsø] {Nikolai Bjørnestøl Hansen \texorpdfstring{\\}{} Martin Helsø} \title{Beamer example} \subtitle{Usage of the theme \texttt{OsloMet}} \begin{document} \section{Overview} % Use % % \begin{frame}[allowframebreaks] % % if the TOC does not fit one frame. \begin{frame}{Table of contents} \tableofcontents \end{frame} \section{Mathematics} \subsection{Theorem} %% Disable the logo in the lower right corner: \hidelogo \begin{frame}{Mathematics} \begin{theorem}[Fermat's little theorem] For a prime~\(p\) and \(a \in \mathbb{Z}\) it holds that \(a^p \equiv a \pmod{p}\). \end{theorem} \begin{proof} The invertible elements in a field form a group under multiplication. In particular, the elements \begin{equation*} 1, 2, \ldots, p - 1 \in \mathbb{Z}_p \end{equation*} form a group under multiplication modulo~\(p\). This is a group of order \(p - 1\). For \(a \in \mathbb{Z}_p\) and \(a \neq 0\) we thus get \(a^{p-1} = 1 \in \mathbb{Z}_p\). The claim follows. \end{proof} \end{frame} %% Enable the logo in the lower right corner: \showlogo \subsection{Example} \begin{frame}{Mathematics} \begin{example} The function \(\phi \colon \mathbb{R} \to \mathbb{R}\) given by \(\phi(x) = 2x\) is continuous at the point \(x = \alpha\), because if \(\epsilon > 0\) and \(x \in \mathbb{R}\) is such that \(\lvert x - \alpha \rvert < \delta = \frac{\epsilon}{2}\), then \begin{equation*} \lvert \phi(x) - \phi(\alpha)\rvert = 2\lvert x - \alpha \rvert < 2\delta = \epsilon. \end{equation*} \end{example} \end{frame} \section{Highlighting} \SectionPage \begin{frame}{Highlighting} Some times it is useful to \alert{highlight} certain words in the text. \begin{alertblock}{Important message} If a lot of text should be \alert{highlighted}, it is a good idea to put it in a box. \end{alertblock} You can also highlight with the \structure{structure} colour. \end{frame} \section{Lists} \begin{frame}{Lists} \begin{itemize} \item Bullet lists are marked with a yellow box. \end{itemize} \begin{enumerate} \item \label{enum:item} Numbered lists are marked with a black number inside a yellow box. \end{enumerate} \begin{description} \item[Description] highlights important words with blue text. \end{description} Items in numbered lists like \enumref{enum:item} can be referenced with a yellow box. \begin{example} \begin{itemize} \item Lists change colour after the environment. \end{itemize} \end{example} \end{frame} \section{Effects} \begin{frame}{Effects} \begin{columns}[onlytextwidth] \begin{column}{0.49\textwidth} \begin{enumerate}[<+-|alert@+>] \item Effects that control \item when text is displayed \item are specified with <> and a list of slides. \end{enumerate} \begin{theorem}<2> This theorem is only visible on slide number 2. \end{theorem} \end{column} \begin{column}{0.49\textwidth} Use \textbf<2->{textblock} for arbitrary placement of objects. \pause \medskip It creates a box with the specified width (here in a percentage of the slide's width) and upper left corner at the specified coordinate (x, y) (here x is a percentage of width and y a percentage of height). \end{column} \end{columns} \only<1, 3> { \begin{textblock}{0.3}(0.45, 0.55) \includegraphics[width = \textwidth]{example-image-a} \end{textblock} } \end{frame} \section{References} \begin{frame}[allowframebreaks]{References} \begin{thebibliography}{} % Article is the default. \setbeamertemplate{bibliography item}[book] \bibitem{Hartshorne1977} R.~Hartshorne. 
\newblock \emph{Algebraic Geometry}. \newblock Springer-Verlag, 1977. \setbeamertemplate{bibliography item}[article] \bibitem{Artin1966} M.~Artin. \newblock On isolated rational singularities of surfaces. \newblock \emph{Amer. J. Math.}, 80(1):129--136, 1966. \setbeamertemplate{bibliography item}[online] \bibitem{Vakil2006} R.~Vakil. \newblock \emph{The moduli space of curves and Gromov--Witten theory}, 2006. \newblock \url{http://arxiv.org/abs/math/0602347} \setbeamertemplate{bibliography item}[triangle] \bibitem{AM1969} M.~Atiyah and I.~Macdonald. \newblock \emph{Introduction to commutative algebra}. \newblock Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1969 \setbeamertemplate{bibliography item}[text] \bibitem{Fraleigh1967} J.~Fraleigh. \newblock \emph{A first course in abstract algebra}. \newblock Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1967 \end{thebibliography} \end{frame} \end{document}
{ "alphanum_fraction": 0.6152995392, "avg_line_length": 27.125, "ext": "tex", "hexsha": "a613de3e5aebc26f7747e79af3270b39623eff20", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-01-26T10:17:51.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-11T10:47:32.000Z", "max_forks_repo_head_hexsha": "59dbf6c7bf70c8b29a156e9ee026faff4db3c5cb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "myke-oliveira/Desenvolvimento-Orientado-a-Comportamento", "max_forks_repo_path": "main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "59dbf6c7bf70c8b29a156e9ee026faff4db3c5cb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "myke-oliveira/Desenvolvimento-Orientado-a-Comportamento", "max_issues_repo_path": "main.tex", "max_line_length": 133, "max_stars_count": 6, "max_stars_repo_head_hexsha": "9946fdf0e6220e8596a24825088208615b4f15a1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "martinhelso/OsloMet", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-09T01:25:34.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-04T23:47:33.000Z", "num_tokens": 1560, "size": 5425 }
\documentclass[12pt,a4paper]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[toc,page]{appendix} \usepackage[english]{babel} \usepackage{color} \usepackage{etoolbox} \usepackage{fancyhdr} \usepackage{float} \usepackage{graphicx} \usepackage{makeidx} \usepackage{imakeidx} \usepackage{listings} \usepackage{tocloft} % \usepackage[margin=10pt,font=small,labelfont=bf,labelsep=endash]{caption} % \usepackage[showframe]{geometry} % \usepackage{afterpage} % \usepackage{amsmath,amssymb} % \usepackage{array} % \usepackage{fncychap} % \usepackage{hhline} % \usepackage{latexsym} % \usepackage{longtable} % \usepackage{marvosym} % \usepackage{pdflscape} % \usepackage{setspace} % \usepackage{stmaryrd} % \usepackage{tabularx} % \usepackage{wasysym} % Definition of page style with fancy header \fancypagestyle{lscape}{ \fancyhf{} \fancyfoot[LE]{ \begin{textblock}{20} (1,5){\rotatebox{90}{\leftmark}}\end{textblock} \begin{textblock}{1} (13,10.5){\rotatebox{90}{\thepage}}\end{textblock}} \fancyfoot[LO] { \begin{textblock}{1} (13,10.5){\rotatebox{90}{\thepage}}\end{textblock} \begin{textblock}{20} (1,13.25){\rotatebox{90}{\rightmark}}\end{textblock}} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt}} % Settings for dotted line in table of content \pagestyle{fancy} \renewcommand{\cftpartleader}{\cftdotfill{\cftdotsep}} \renewcommand{\cftsecleader}{\cftdotfill{\cftdotsep}} \setlength{\headheight}{15pt} % \makeatletter % \def\verbatim{ % \scriptsize, % \@verbatim, % \frenchspacing, % \@vobeyspaces, % \@xverbatim, % } % \makeatother % \makeatletter % \preto{\@verbatim}{\topsep=0pt \partopsep=0pt } % \makeatother \author{ThirtySomething} \title{BoomBox} \date{\today} \usepackage[ pdftex, pdfauthor={ThirtySomething -- https://www.derpaul.net}, pdftitle={BoomBox}, pdfsubject={A portable music player}, colorlinks=true, linkcolor=blue, urlcolor=blue ]{hyperref} \usepackage[all]{hypcap} \pagestyle{fancy} \fancyhf{} \renewcommand{\sectionmark}[1]{\markright{#1}} \renewcommand{\subsectionmark}[1]{\markright{#1}} \renewcommand{\subsubsectionmark}[1]{\markright{#1}} \fancyhead[R]{BoomBox} \fancyhead[L]{\nouppercase{\rightmark}} \fancyfoot[C]{Page \thepage} \renewcommand{\headrulewidth}{0.4pt} \renewcommand{\footrulewidth}{0.4pt} \def\labelitemi{--} \newcommand{\bb}{\textit{\href{https://github.com/ThirtySomething/BoomBox}{BoomBox}}} \newcommand{\code}[1]{\texttt{#1}} \newcommand{\jpaimg}[2]{\begin{figure}[H]\centering\fbox{\includegraphics[width=380px]{#1}}\caption{#2}\label{fig:#2}\end{figure}} \newcommand{\rpi}{\href{https://www.raspberrypi.org/}{Raspberry Pi}\index{Raspberry Pi}} \newcommand{\vol}{\href{https://volumio.org/}{Volumio}\index{Volumio}} % Settings for bash commands (lstlisting) \definecolor{dkgreen}{rgb}{0,0.6,0} \definecolor{gray}{rgb}{0.5,0.5,0.5} \definecolor{mauve}{rgb}{0.58,0,0.82} \lstset{frame=tb, language=sh, aboveskip=3mm, belowskip=3mm, showstringspaces=false, columns=flexible, basicstyle={\scriptsize\ttfamily}, numbers=none, numberstyle=\tiny\color{gray}, keywordstyle=\color{blue}, commentstyle=\color{dkgreen}, stringstyle=\color{mauve}, breaklines=true, breakatwhitespace=true, tabsize=3 } \makeindex \begin{document} \clearpage\maketitle \thispagestyle{empty} \newpage \tableofcontents \addtocontents{toc}{\protect\thispagestyle{fancy}} \newpage \section{Motivation} There are many ways for parties to play music. One of the modern variants of this is streaming via a smartphone. However, this requires a functioning WLAN or mobile phone reception. 
If you want to use the telephone network, you need a corresponding data connection or the corresponding data volume. But what do you do if neither one nor the other is given? Then you use the \bb{}. This is a kind of modern ghetto blaster. The device is independent of smartphone, WLAN and mobile phone reception.
\section{The device} The device \bb{} consists of two components, the hardware component and the software component.
\subsection{The hardware component} On the one hand, the system should be cost-effective, but on the other hand it should not be old-fashioned. The following components meet these requirements:
\begin{itemize}
\item A \href{https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/}{\rpi{} 3 B+}.
\item An \href{http://iqaudio.co.uk/hats/9-pi-digiamp.html}{Pi-DigiAMP+}\index{iQAudio}\index{Pi-DigiAMP+}.
\item A \href{https://www.raspberrypi.org/products/raspberry-pi-touch-display/}{\rpi{} Touch Display}\index{Touch Display}.
\item A \href{https://www.conrad.de/de/m2-sata-ssd-erweiterungs-platine-fuer-den-raspberry-pi-1487097.html}{M.2 to USB Adapter}\index{M.2}.
\item A \href{https://www.wd.com/de-de/products/internal-ssd/wd-blue-3d-nand-sata-ssd.html}{250 GB M.2 SSD}\index{SSD}.
\item A \href{https://www.amazon.de/gp/product/B002JIGJ4M/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1}{Power supply}.
\item Two \href{https://www.amazon.de/gp/product/B01GJC4WRO/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&psc=1}{USB cables}.
\item A \href{https://www.amazon.de/gp/product/B073S9SFK2/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&psc=1}{MicroSD card}.
\item A \href{https://www.amazon.de/gp/product/B07KFFNBLJ/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1}{Step Down Converter}\index{Step Down Converter}.
\item An \href{https://www.amazon.de/gp/product/B071KVWQKY/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&psc=1}{Adapter with terminal block}.
\item A \href{https://www.amazon.de/gp/product/B00A6QKIEQ/ref=ppx_yo_dt_b_asin_title_o06_s00?ie=UTF8&psc=1}{Cable with plug}.
\end{itemize}
The \rpi{} is an inexpensive single board computer that is perfect for this project. The sound card greatly improves audio output -- the \rpi{} isn't so convincing here by nature. The original touch display is used for operation without additional input devices. Parties can be a bit wild every now and then. So that shocks don't have any influence on the \bb{}, an SSD is used as mass storage instead of a conventional hard disk. This is connected to the system via an adapter.
\subsection{The software component} The software essentially consists of only one component: the music distribution \vol{}. This distribution comes with support for the above hardware. Of course some fine-tuning is necessary to simplify operation and handling. This is described in section~\ref{subsec:Fine-tuning}.
\section{Preparations} Before the entire system can start, a few preparations have to be made.
\subsection{Updating the \rpi{} firmware} We'll connect the \rpi{} with a network cable. Then we download the latest version of \href{https://www.raspberrypi.org/downloads/raspbian/}{Raspbian}. This is the normal operating system for the \rpi{}. We write the image to a MicroSD card with the \href{https://sourceforge.net/projects/win32diskimager/}{Win32 Disk Imager}. To allow access via SSH, we create the empty file \textit{ssh}\index{ssh} in the boot partition. We boot the \rpi{} and connect via \textit{ssh}.
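Such a first connection could look like the following sketch; the IP address is only a placeholder for whatever address your router has assigned, and the credentials are the Raspbian defaults (user \textit{pi}, password \textit{raspberry}), which you should change right away.
\begin{figure}[H]
\begin{lstlisting}
# connect from another machine; replace the IP address with your own
ssh [email protected]
# change the default password on first login
passwd
\end{lstlisting}
\caption{First SSH connection}\label{fig:First SSH connection}
\end{figure}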
Then we enter the following commands:
\begin{figure}[H]
\begin{lstlisting}
sudo apt-get update
sudo apt-get install git
sudo wget https://raw.github.com/Hexxeh/rpi-update/master/rpi-update -O /usr/bin/rpi-update && sudo chmod +x /usr/bin/rpi-update
sudo rpi-update
sudo reboot
\end{lstlisting}
\caption{Firmware Update}\label{fig:Firmware Update}
\end{figure}
\subsection{SSD mounting}
The SSD must be mounted on the adapter. This is very simple -- insert the SSD into the slot, tighten the screw and you are done. The result looks like this:
\jpaimg{./../images/ssd-prepared.png}{SSD with adapter}
\subsection{Prepare SSD}
We format the SSD with \href{https://en.wikipedia.org/wiki/Ext4}{ext4}. This means that the SSD can no longer be used directly on a Windows PC.\@ However, the file system is more robust than \href{https://en.wikipedia.org/wiki/File_Allocation_Table#FAT32}{FAT32}. This is especially true in the event of a sudden power loss. If you later include the \bb{} in your own network, you still get access to the SSD.\@
\subsection{Filling up the SSD}\label{subsec:Filling up the SSD}
So that \vol{} can offer a music selection right from the start, the SSD is filled with music beforehand. For this, an SMB share is mounted and the files are copied. This step only needs to be performed once.
\begin{figure}[H]
\begin{lstlisting}
# Utilities to mount the SMB-Shares
sudo apt-get install cifs-utils
# Mount SMB-Share
sudo mount -t cifs -o user=<smbuser>,domain=<domain|workgroup> //<IP of the share>/<sharename> /mnt
# Create mountpoint /music for the SSD
sudo mkdir /music
# Mount the SSD to /music
sudo mount /dev/sda1 /music
# Start filling up the SSD ("sudo cd" would not change the directory)
cd /mnt
sudo cp -R * /music
\end{lstlisting}
\caption{Filling up the SSD}\label{fig:Filling up the SSD}
\end{figure}
After the copy process is finished, the \rpi{} can be shut down. We remove the MicroSD card. For its next use, we will write \vol{} to it.
\section{The hardware component}
This is about the mechanical assembly of the \bb{}. Since everything is stacked on top of each other, I also refer to it as the \textit{Hardwarestack}.
\subsection{The display and the \rpi{}}
We start with the display. When unpacking it, it is noticeable that the control board is already connected and mounted.
\jpaimg{./../images/display.png}{Display}
This simplifies assembly for us. How the assembly is done is explained in this \href{https://www.youtube.com/watch?v=tK-w-wDvRTg}{YouTube Video}. \textbf{Caution:} For the \bb{} we only connect the ribbon cable. In the video, the \rpi{} is fastened with screws. Instead of these screws, we use M2.5~x~11~mm spacer bolts. After the \rpi{}, the sound card and the converter board for the SSD are added.
\jpaimg{./../images/dsp-pi.png}{Display with Pi}
\subsection{The sound card}
There is not much to explain here. The sound card is placed on the GPIO header of the \rpi{}. Then the spacer bolts, which were supplied with the SSD adapter board (!), are screwed on to fix it in place.
\jpaimg{./../images/dsp-pi-iq.png}{Display, Pi and iQAudio}
\subsection{The converter board}
At the end comes the SSD with the converter board. We already connected both during the preparations. This board is fixed with screws on the spacer bolts of the sound card. The \rpi{} is powered by the sound card. However, this is not enough to also supply the converter board with the SSD via USB.\@ Therefore we have to set the jumper \textit{PWR\_U} so that the middle pin and the pin closest to the board edge are bridged.
This ensures that the converter board is not supplied with power via USB, but via the extra input.
\jpaimg{./../images/dsp-pi-iq-ssd.png}{Display, Pi, iQAudio and SSD}
\subsection{Adapter cable}
The power supply has only one output, providing 19~V. The display and the converter board, however, require 5~V. For this we need an adapter cable. The cable has a socket into which the plug of the power supply fits. This socket has screw terminals on the other side. We connect two cables to these screw terminals. One has a plug for the sound card. The other is connected via screw terminals to a so-called \href{https://en.wikipedia.org/wiki/Buck_converter}{Step Down Converter}. This Step Down Converter has two USB ports, which we use to power the display and the converter board.
\jpaimg{./../images/adaptercable.png}{Adapter cable}
\subsection{The result}
If everything has been assembled correctly, it looks like the following picture.
\jpaimg{./../images/bbwopwr.png}{Hardwarestack}
And once the cables are connected, it looks like this:
\jpaimg{./../images/cableconnected.png}{Hardwarestack with cables}
\section{The software component}
This is about the installation and configuration of \vol{}.
\subsection{First installation}
Prerequisites for the installation are the \nameref{subsec:Filling up the SSD} and, of course, the assembly of the hardware stack. For this we download the \vol{} image. Then we use the Win32 Disk Imager again and write the image to the MicroSD card. After the image has been written, the card is inserted into the \rpi{} and we start the system. Please make sure that the \rpi{} is connected to the network with a network cable.
\subsection{The plugins}
We can find out the IP address of the \bb{} via our router. Then we call up the IP address in the browser. The start screen will look like this.
\jpaimg{./../images/vol-main.png}{Initial screen}
By the time we see this image, we have already made a great deal of progress. In order for the touch screen to work, an appropriate plugin is required. To install it, we go to the settings -- the gear in the upper left corner.
\jpaimg{./../images/vol-setup.png}{Settings}
For the plugins we select the corresponding menu item. The plugin for the touchscreen can be found under \textit{Miscellanea}; it is called \textit{Touch Display Plugin}.
\jpaimg{./../images/vol-touch.png}{Touchscreen}
Another plugin is a simple equalizer. We install it as well. It can be found under \textit{Audio Interface}.
\jpaimg{./../images/vol-equal.png}{Equalizer}
After the plugins have been installed, you have to activate them. This is done on the second tab, \textit{Installed Plugins}. After they have been activated, it looks like this.
\jpaimg{./../images/vol-plug-active.png}{Plugins}
\subsection{Fine-tuning}\label{subsec:Fine-tuning}
Now it is time for some fine-tuning. How to do this is explained \href{https://volumio.org/forum/guide-for-setting-touchscreen-backlight-control-t11425.html}{on this page}. \textbf{Note:} After performing one or more of these configuration steps, a reboot is necessary. Only then will the changes take effect.
\subsubsection{The mouse pointer}\label{subsubsec:SSH}
We start by hiding the mouse pointer. First we have to activate SSH.\@ This can be done via the browser with the following URL:\@ \\ \textit{http://<IP-of-the-BoomBox>/dev} -- in my case, for example, \\ \textit{http://192.168.2.17/dev}. On this page we find buttons to activate and de\-activate SSH access.\@ For our purposes, SSH must be active.
\jpaimg{./../images/vol-dev.png}{SSH}
Then we log on to the system via SSH. The username is \textit{volumio}; the password is identical to the username.
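Using the example IP address from above (yours will differ), the login could, for instance, look like this:
\begin{figure}[H]
\begin{lstlisting}
# Log on to the BoomBox (password: volumio)
ssh volumio@192.168.2.17
\end{lstlisting}
\caption{SSH login (example)}\label{fig:SSH login (example)}
\end{figure}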
Then we edit the configuration file for the kiosk mode. We append \code{-{}- -nocursor} to the \code{ExecStart} line.
\begin{figure}[H]
\begin{lstlisting}
sudo nano /lib/systemd/system/volumio-kiosk.service
# Original line
# ExecStart=/usr/bin/startx /etc/X11/Xsession /opt/volumiokiosk.sh
# Modified line
ExecStart=/usr/bin/startx /etc/X11/Xsession /opt/volumiokiosk.sh -- -nocursor
# Leave the editor by pressing CTRL+X
\end{lstlisting}
\caption{Kiosk mode}\label{fig:Kiosk mode}
\end{figure}
\subsubsection{Screensaver}
Every now and then, the display simply \textit{switches off}; that is, it turns completely black. To prevent this, the following steps are necessary:
\begin{figure}[H]
\begin{lstlisting}
sudo nano /opt/volumiokiosk.sh
# Original lines
# xset +dpms
# xset s blank
# xset 0 0 120
# Adjusted lines
xset -dpms
xset s off
#xset 0 0 120
# Leave the editor by pressing CTRL+X
\end{lstlisting}
\caption{Screensaver}\label{fig:Screensaver}
\end{figure}
\subsubsection{Access from Windows}
A Samba server is already installed by default with \vol{}. This allows easy access to the storage. However, the device offers several storage locations, which could cause confusion. That is why we make sure that only the storage connected via USB can be accessed. This way we can access the SSD from Windows without guesswork.
\begin{figure}[H]
\begin{lstlisting}
sudo nano /etc/samba/smb.conf
# Original lines
[Internal Storage]
comment = Boombox Internal Music Folder
path = /data/INTERNAL
read only = no
guest ok = yes
[USB]
comment = Boombox USB Music Folder
path = /mnt/USB
read only = no
guest ok = yes
[NAS]
comment = Boombox NAS Music Folder
path = /mnt/NAS
read only = no
guest ok = yes
# Adjusted lines
#[Internal Storage]
# comment = Boombox Internal Music Folder
# path = /data/INTERNAL
# read only = no
# guest ok = yes
[SSD]
comment = Boombox SSD Music Folder
path = /mnt/USB
read only = no
guest ok = yes
#[NAS]
# comment = Boombox NAS Music Folder
# path = /mnt/NAS
# read only = no
# guest ok = yes
# Leave the editor by pressing CTRL+X
\end{lstlisting}
\caption{Share}\label{fig:Share}
\end{figure}
Under Windows, the device can then be accessed under the name \code{\textbackslash{}\textbackslash{}boombox} in Windows Explorer.
\jpaimg{./../images/win-bb.png}{Access from Windows}
Finally, we switch the \textit{SSH} access off again. To do this we call up the corresponding page again. See also chapter \nameref{subsubsec:SSH}.
\clearpage{}
\phantomsection{}
\addcontentsline{toc}{section}{List of figures}
\listoffigures\thispagestyle{fancy}
\newpage
% \clearpage{}
% \phantomsection{}
% \addcontentsline{toc}{section}{List of tables}
% \listoftables\thispagestyle{fancy}
% \newpage
\renewcommand{\indexname}{Index}
\clearpage{}
\phantomsection{}
\addcontentsline{toc}{section}{Index}
\printindex\thispagestyle{fancy}
\newpage
\end{document}
{ "alphanum_fraction": 0.7490345265, "avg_line_length": 40.9174528302, "ext": "tex", "hexsha": "1114bce1ade68006f69448bc758b6d73d418af52", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "082912f415715f13a3778bd8dcc1a3ecd1a91bb7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ThirtySomething/BoomBox", "max_forks_repo_path": "EN/BoomBox.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "082912f415715f13a3778bd8dcc1a3ecd1a91bb7", "max_issues_repo_issues_event_max_datetime": "2019-10-15T18:24:43.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-05T13:54:15.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ThirtySomething/BoomBox", "max_issues_repo_path": "EN/BoomBox.tex", "max_line_length": 160, "max_stars_count": null, "max_stars_repo_head_hexsha": "082912f415715f13a3778bd8dcc1a3ecd1a91bb7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ThirtySomething/BoomBox", "max_stars_repo_path": "EN/BoomBox.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4963, "size": 17349 }
\section{Evaluation}
Our testbed consists of four machines, in which a single server plays the role of both Clover memory server (MS) and metadata server (DN). Two machines are configured as Clover clients, and the last hosts our DPDK-based TOR. Physically, the machines are identical: each is equipped with two Intel Xeon E5-2640 CPUs and 256~GB of main memory evenly spread across the NUMA domains. Each server communicates using a Mellanox ConnectX-5 100-Gbps NIC installed in an x16 PCIe slot; the machines are interconnected via a 100-Gbps Mellanox Onyx Switch. All Clover servers are configured with default routing settings: clients send directly to the metadata and data server. We install OpenFlow rules on the Onyx switch to redirect the Clover RDMA traffic to the DPDK ``TOR''.
\subsection{Conflict resolution}
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig/throughput.pdf}
\caption{Default Clover throughput vs. Clover with write conflict detection and correction turned on}
\label{fig:throughput}
\vskip -1em
\end{figure}
We test the performance gains of resolving write conflicts using our caching TOR. Clover clients are configured to run a YCSB-A workload (50\% reads, 50\% writes) for 1 million requests. Requests for keys are based on a Zipf distribution generated with an \textit{s} value of 0.75.
%%
%%\todo{show exactly what this means}.
%%
In each experiment, the number of client threads is increased, which in turn increases the load on the system. Clover requests are blocking; thus, the throughput is a function of both the request latency and the number of client threads. Figure~\ref{fig:throughput} compares the performance of native Clover (plotted in red) against our in-network conflict resolution (hatched blue). As the number of clients increases, so too does the probability that two client threads will make concurrent writes to the same key. The number of conflicts resolved in flight correlates directly with the throughput improvement, as each request corrected in flight avoids the multiple round trips otherwise necessary to resolve a write conflict. Our current implementation provides a $1.42\times$ throughput improvement at 64 client threads. Throughput is limited by the scale of our experimental setup, i.e., more client machines can produce higher throughputs.
%
%\todo{This is the
% speculation part we should cut}
%
The number of in-flight conflicts is also impacted by the Zipf distribution. We use a Zipf parameter of 0.75; a Zipf of 1.0 would result in a distribution skewed towards fewer keys, which in turn results in more conflicts. Moreover, we find that Clover's current design leads to hardware contention on the servers themselves. In particular, ConnectX-5 NIC performance degrades as the number of RDMA \texttt{c\&s} operations to the same memory region across different queue pairs increases~\cite{design-guidelines}. As our design eliminates the need for \texttt{c\&s} operations on cached keys, future work will seek to reduce or eliminate \texttt{c\&s} operations by replacing them at the TOR with RDMA writes.
\subsection{Memory consumption}
Resources on networking hardware are scarce. High-end SoC SmartNICs have just a few gigabytes of RAM, and programmable switches have only megabytes of SRAM. Moreover, the use of this memory is not free: using memory for any purpose other than buffering packets has a direct performance cost.
%as the number of packets which can be successfully
%buffered drops. Our design takes the preciousness of memory in network
%into account.
The metadata we cache in-network is minimal:
%necessary to resolve write conflicts. While Clover's meta data
%consists of many MB of garbage collection and version data
we only cache the virtual address of the last write per key,
%In addition we track
as well as the last key written per client. Clients are not explicitly known to our middlebox and are identified at runtime by their QP. Tracking clients in this way is necessary to detect write conflicts in Clover. This overhead could be eliminated by explicitly adding key information to \texttt{c\&s} requests.
%
Figure~\ref{fig:memory} shows the memory overhead as a function of the number of keys. Note that 100K keys can be supported using 2.5\% of the available memory (64~MB) on a Barefoot Tofino 2 programmable switch~\cite{tofino2}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig/memory.pdf}
\caption{Cost of caching metadata in-network vs. key space size}
\label{fig:memory}
\vskip -1em
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig/cache.pdf}
\caption{Performance as a function of keys cached. Caching a few of the top-$N$ keys provides the greatest marginal throughput benefits.}
\label{fig:cache}
\end{figure}
%\subsection{Caching top \textit{N} keys}
%%
Hot keys are the most likely to contribute to conflicts.
%Caching only hot keys results in relatively
%large performance gains while requiring only a small portion of the
%memory required to cache the entire keyspace.
We test the effect of caching only hot keys by restricting our in-network cache to track and resolve conflicts on only the top-\textit{N} keys. In this experiment, RDMA requests for keys that are not cached pass through our DPDK TOR without modification; conflicts on those keys are resolved using Clover's existing reconciliation protocol. Figure~\ref{fig:cache} shows the throughput for 64 client threads when caching a varying number of keys out of a total key space of 1024 keys. The request distribution is Zipf(0.75); therefore, the vast majority of conflicts occur on the top eight keys. The in-network memory requirement is 128 bytes, which results in a $1.3\times$ throughput improvement.
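These figures imply roughly 16 bytes of in-network state per cached key; this per-key estimate is our own back-of-the-envelope reading of the numbers above rather than a separately measured quantity:
\begin{align*}
8~\text{keys} \times 16~\text{B} &= 128~\text{B},\\
10^{5}~\text{keys} \times 16~\text{B} &\approx 1.6~\text{MB} \approx 2.5\%~\text{of}~64~\text{MB}.
\end{align*}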
{ "alphanum_fraction": 0.7897866387, "avg_line_length": 45.0236220472, "ext": "tex", "hexsha": "e6ffd6dfc0dc150df0b8163fc4f8dc5b845075d1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7ab78d0cf6e34dd60fa8a10a92543d3b9b7ab09d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wantonsolutions/warm", "max_forks_repo_path": "papers/WORDS21/eval.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7ab78d0cf6e34dd60fa8a10a92543d3b9b7ab09d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wantonsolutions/warm", "max_issues_repo_path": "papers/WORDS21/eval.tex", "max_line_length": 130, "max_stars_count": 2, "max_stars_repo_head_hexsha": "7ab78d0cf6e34dd60fa8a10a92543d3b9b7ab09d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wantonsolutions/warm", "max_stars_repo_path": "papers/WORDS21/eval.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-28T19:58:38.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-18T09:04:39.000Z", "num_tokens": 1372, "size": 5718 }
\documentclass[a4paper,12pt]{article}
%\documentclass[a4paper,12pt]{scrartcl}
\usepackage{xltxtra}
\input{../preamble.tex}
% \usepackage[spanish]{babel}
% \setromanfont[Mapping=tex-text]{Linux Libertine O}
% \setsansfont[Mapping=tex-text]{DejaVu Sans}
% \setmonofont[Mapping=tex-text]{DejaVu Sans Mono}
\title{Homework \#12}
\author{Isaac Ayala Lozano}
\date{\today}
\begin{document}
\maketitle
\begin{enumerate}
\item Propose a random process (Figure \ref{fig: samples}) and prove that it is ergodic.
Let $y(\zeta, t) = \sum_{n=1}^N a_n \cos (\omega_n t + \Psi(\zeta))$ be the proposed random process, where
\begin{itemize}
\item $a_n$ is the amplitude of each cosine function.
\item $\omega_n$ is the frequency corresponding to each $n$.
\item $\Psi(\zeta) \in \mathbb{R}$ is the random noise added to the process.
\end{itemize}
\begin{figure}[htb!] \centering \import{./img/}{hw12_samples.tex} \caption{Random functions from the ensemble.} \label{fig: samples} \end{figure}
We prove that the process is ergodic by verifying that for the ensemble of sample functions $y_k$, the mean value $\mu_y (k)$ and the autocorrelation function $R_{yy}(\tau, k)$ do not differ over different sample functions \cite{bendat2011random}. The mean value of the sample function is obtained as follows:
\begin{align*} \mu_y(k) &= \lim _{T \rightarrow \infty} \frac{1}{T} \int_0^T y_k (t) dt \\ &= \lim _{T \rightarrow \infty} \frac{1}{T} \int_0^T \sum_{n=1}^N a_n \cos(\omega_n t + \Psi (\zeta)) dt\\ &= \lim _{T \rightarrow \infty} \frac{1}{T} \sum_{n=1}^N \int_0^T a_n \cos(\omega_n t + \Psi (\zeta)) dt\\ &= \lim _{T \rightarrow \infty} \frac{1}{T} \sum_{n=1}^N \left. \frac{a_n}{\omega_n} \sin(\omega_n t + \Psi (\zeta)) \right\rvert_{0}^{T} \\ &= 0 \end{align*}
Given that $N < \infty$, the sum of all terms is also a finite number. Thus, as $T$ tends towards infinity, the mean value of the random process approaches zero. This holds true for all values of $\Psi(\zeta)$.
\begin{figure}[htb!] \centering \import{./img/}{hw12_ergodic.tex} \caption{Mean value for different numbers of samples.} \label{fig: ergodic} \end{figure}
For the autocorrelation function, a similar process is followed.
\begin{align*} R_{yy} (\tau, k) & = \lim _{T \rightarrow \infty} \frac{1}{T} \int_0^T y_k (t) y_k (t + \tau) dt\\ &= \lim _{T \rightarrow \infty} \frac{1}{T} \int_0^T \sum_{n=1}^N a_n \cos(\omega_n t + \Psi (\zeta)) \sum_{n=1}^N a_n \cos(\omega_n (t+ \tau) + \Psi (\zeta)) dt \end{align*}
We present a simplified version of the proof for $N$ equal to one, though the proof holds for all values of $N$. We begin by applying the trigonometric identity $\cos(u\pm v) = \cos(u)\cos(v) \mp \sin(u) \sin(v)$, with $u = \omega_n t + \Psi(\zeta)$ and $v = \omega_n \tau$.
\begin{align*} R_{yy} (\tau, k) &= \lim _{T \rightarrow \infty} \frac{1}{T} \int_0^T (a \cos(u))(a\cos(u)\cos(v) - a\sin(u)\sin(v)) dt\\ &= \lim _{T \rightarrow \infty} \frac{1}{T} \int_0^T (a^2 (\cos (u))^2 \cos(v) - a^2 \cos(u)\sin(u)\sin(v)) dt\\ &= \lim _{T \rightarrow \infty} \frac{a^2}{T} ( \cos(v)\int_0^T (\cos(u))^2 dt - \sin(v) \int_0^T \cos(u) \sin(u) dt ) \end{align*}
Evaluating each integral yields the following results.
\begin{align*} \int_0^T (\cos(u))^2 dt &= \int_0^T (\cos(\omega t + \Psi(\zeta)))^2 dt \\ &= \left.
\frac{2 (\omega t + \Psi(\zeta)) + \sin (2(\omega t + \Psi(\zeta)))}{4 \omega} \right\rvert_{0}^{T}\\ &= \frac{T}{2} + \frac{\sin(2(\omega T + \Psi(\zeta)))}{4\omega} - \frac{\sin(2\Psi(\zeta))}{4\omega} \\
%
\int_0^T \cos(u) \sin(u) dt &= \int_0^T \cos(\omega t + \Psi(\zeta)) \sin(\omega t + \Psi(\zeta)) dt\\ &= \left. - \frac{\cos (2 (\omega t + \Psi(\zeta)))}{4 \omega} \right\rvert_{0}^{T}\\ &= \frac{\cos(2\Psi (\zeta))}{4\omega} - \frac{\cos (2 (\omega T + \Psi(\zeta)))}{4 \omega} \end{align*}
Substituting these results into the original equation and evaluating the limit yields
\begin{equation*} R_{yy} (\tau, k) = \frac{a^2}{2} \cos(\omega \tau) \end{equation*}
This is because every other term in the integrals lacks a $T$ in its numerator; consequently, as $T$ approaches infinity, the value of all those terms approaches zero. This result can be generalized to any value of $N$. The original equation is a sum of integrals; hence, for values of $N$ different from one, it is only necessary to add the results of the other integrals.
\begin{equation*} R_{yy} (\tau, k) = \sum_{n=1}^N \frac{a_n^2}{2} \cos(\omega_n \tau) \end{equation*}
Observe that the autocorrelation function does not depend on time, but on the lag between measurements. It does not vary across sample functions because they all exhibit the same behaviour described by the equation of the random process. Given that both $\mu_y (k)$ and $R_{yy}(\tau, k)$ do not vary between sample functions, as has been proven, the proposed random process is indeed ergodic.
\item Present plots of the probability density functions for a sine wave, a sine wave plus random noise, and a sample of white\footnote{Also called Gaussian} noise.
\item Present the autocorrelation plots for the three previous functions.
\begin{figure}[htb!] \centering \import{./img/}{hw12_sin.tex} \caption{Sine function.} \label{fig: sin} \end{figure}
\newpage
\pagebreak
\begin{figure}[htb!] \centering \import{./img/}{hw12_noise.tex} \caption{Sine function with noise.} \label{fig: noise} \end{figure}
\newpage
\pagebreak
\begin{figure}[htb!] \centering \import{./img/}{hw12_white.tex} \caption{White noise.} \label{fig: white} \end{figure}
\end{enumerate}
\printbibliography
\newpage
\pagebreak
\appendix
\section{Octave Code}
\lstinputlisting[language=Matlab]{hw08_plots.m}
% https://ocw.mit.edu/courses/mechanical-engineering/2-22-design-principles-for-ocean-vehicles-13-42-spring-2005/readings/r6_spectrarandom.pdf
\end{document}
{ "alphanum_fraction": 0.681147266, "avg_line_length": 36.3536585366, "ext": "tex", "hexsha": "7ac16893b4736be4fbac820d45316d31d401176b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ccd3364818c673f7a6bf13d495004034d2c6ecc0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "der-coder/CINVESTAV-Mathematics-II-2020", "max_forks_repo_path": "hw12_IsaacAyala.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ccd3364818c673f7a6bf13d495004034d2c6ecc0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "der-coder/CINVESTAV-Mathematics-II-2020", "max_issues_repo_path": "hw12_IsaacAyala.tex", "max_line_length": 248, "max_stars_count": null, "max_stars_repo_head_hexsha": "ccd3364818c673f7a6bf13d495004034d2c6ecc0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "der-coder/CINVESTAV-Mathematics-II-2020", "max_stars_repo_path": "hw12_IsaacAyala.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2051, "size": 5962 }
\documentclass{article} \usepackage[margin=1in]{geometry} \usepackage{mathpazo,color} \StataweaveOpts{echo=false, outfmt="frame=single, fontfamily=courier, fontsize=\small"} \begin{document} \begin{Statacode}{hide} set more off include ../../fem_env.do *set up work directory global estdir ../Estimates *load estimates **Disease estimates use $estdir/hearte.ster estimates store Heart_disease estimates use $estdir/stroke.ster estimates store Stroke estimates use $estdir/hibpe.ster estimates store HBP estimates use $estdir/diabe.ster estimates store Diabetes estimates use $estdir/lunge.ster estimates store Lung estimates use $estdir/memrye.ster estimates store Memory_disorder **Functional Status estimates use $estdir/died.ster estimates store Motality estimates use $estdir/nhmliv.ster estimates store Nursing_home estimates use $estdir/iadlstat.ster estimates store IADL estimates use $estdir/adlstat.ster estimates store ADL **Predicted Medical Costs estimates use $estdir/totmd_mcbs.ster estimates store Total_MCBS estimates use $estdir/totmd_meps.ster estimates store Total_MEPS estimates use $estdir/oopmd_meps.ster estimates store OOP_MEPS estimates use $estdir/mcare_pta.ster estimates store Medicare_A estimates use $estdir/mcare_ptb.ster estimates store Medicare_B **Program Participation estimates use $estdir/ssclaim.ster estimates store SS_claim estimates use $estdir/ssiclaim.ster estimates store SSI_claim estimates use $estdir/diclaim.ster estimates store DI_claim estimates use $estdir/mcareb_takeup_newenroll.ster estimates store MedicareB_new estimates use $estdir/mcareb_takeup_curenroll.ster estimates store MedicareB_current estimates use $estdir/mcare_ptd.ster estimates store Medicare_D **Labor Outcomes estimates use $estdir/work.ster estimates store Working estimates use $estdir/dbclaim.ster estimates store DB_pension_claiming \end{Statacode} \section*{\centering Type of Estimators} \begin{Statacode} estimates dir \end{Statacode} \pagebreak \section*{\centering Summary Characteristics} \begin{Statacode} estimates stat _all \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Diseases} \begin{Statacode} estimates table Heart_disease Stroke HBP, star(.05 .01 .001) keep(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Diseases(continued)} \begin{Statacode} estimates table Diabetes Lung Memory_disorder, star(.05 .01 .001) keep(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Functional Status} \begin{Statacode} estimates table Motality Nursing_home IADL ADL, star(.05 .01 .001) keep(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Predicted Medical Costs} \begin{Statacode} estimates table Total_MCBS Total_MEPS , star(.05 .01 .001) keep(black hispan hsless college male hearte stroke cancre hibpe diabe lunge adl3p widowed single) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} 
\subsection*{\centering Predicted Medical Costs(Continued)} \begin{Statacode} estimates table OOP_MEPS Medicare_A Medicare_B, star(.05 .01 .001) keep(black hispan hsless college male hearte stroke cancre hibpe diabe lunge adl3p widowed single) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Program Participation} \begin{Statacode} estimates table SS_claim SSI_claim DI_claim, star(.05 .01 .001) keep(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fwidowed fsingle fwork) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Program Participation(Continued)} \begin{Statacode} estimates table MedicareB_new MedicareB_current Medicare_D, star(.05 .01 .001) keep(black hispan hsless college male hearte stroke cancre hibpe diabe lunge adl1 adl2 adl3p widowed work) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Labor Outcomes} \begin{Statacode} estimates table Working DB_pension_claiming, star(.05 .01 .001) keep(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle fwork) sty(columns) \end{Statacode} \pagebreak \section*{\centering Coeffient Point Estimates} \subsection*{\centering Labor Outcomes} \begin{Statacode} estimates table Working DB_pension_claiming, star(.05 .01 .001) keep(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle fwork) sty(columns) \end{Statacode} \begin{Statacode} qui set mem 300m qui use $outdata/hrs17_transition.dta *logdeltaage qui gen logdeltaage = log(age_iwe - lage_iwe) *llogbmi_l30 llogbmi_30p flogbmi_l30 flogbmi_30p qui { local log_30 = log(30) mkspline llogbmi_l30 `log_30' llogbmi_30p = llogbmi mkspline flogbmi_l30 `log_30' flogbmi_30p = flogbmi } *flogq flogaime qui gen flogq=0 qui gen flogaime=0 local age_var age_iwe gen lage62e = floor(l`age_var') == 60 if l`age_var' < . gen lage63e = floor(l`age_var') == 61 if l`age_var' < . 
mkspline la6 58 la7 73 la7p = l`age_var' *** GENERATE WEAVE DUMMIES gen w3 = wave == 3 gen w4 = wave == 4 gen w5 = wave == 5 gen w6 = wave == 6 gen w7 = wave == 7 \end{Statacode} \pagebreak \section*{\centering Marginal Effects} \subsection*{\centering Diseases} \begin{Statacode} **marginal effects *desease qui estimates restore Heart_disease margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fstroke fcancre fhibpe fdiabe flunge fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix heart=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\.\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]) matrix drop temp temp1 qui estimates restore Stroke margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fcancre fhibpe fdiabe flunge fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix stroke=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\.\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]) matrix drop temp temp1 qui estimates restore HBP margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fdiabe flunge fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix hbp=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\.\temp1[1,9]\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]) matrix drop temp temp1 qui estimates restore Diabetes margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe flunge fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix diab=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\.\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]) matrix drop temp temp1 qui estimates restore Lung margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix lung=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\.\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]) matrix drop temp temp1 qui estimates restore Memory_disorder margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix mem=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\.\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]) matrix drop temp temp1 matrix disease=(heart,stroke,hbp,diab,lung,mem) matrix colnames disease = "heart disease""stroke""hypertension""diabetes""lung disease""memory disorder" matrix rownames disease = "mean""black" "hispanic" "<high school" "college" "male" "heart disease" "stroke" "cancer" "hypertension" "diabetes" "lung disease" "smoke" "widowed" "single" "bmi<30" "bim>30" "work" matlist disease, lines(oneline) twidth(15) \end{Statacode} \pagebreak \section*{\centering Marginal 
Effects} \subsection*{\centering Functional Health} \begin{Statacode} qui estimates restore Motality margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle fwork) noesample matrix temp1=r(b) matrix mortality=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]\temp1[1,17]\.\.\temp1[1,18]) matrix drop temp temp1 qui estimates restore Nursing_home margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle fwork) noesample matrix temp1=r(b) matrix nursingh=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]\temp1[1,17]\.\.\temp1[1,18]) matrix drop temp temp1 qui estimates restore IADL margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix iadl=(temp,temp1) matrix iadl=iadl' *matrix iadl=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]\temp1[1,17]\.\.\temp1[1,18]) matrix drop temp temp1 qui estimates restore ADL margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fsmoken fwidowed fsingle flogbmi_l30 flogbmi_30p fwork) noesample matrix temp1=r(b) matrix adl=(temp,temp1) matrix adl=adl' *matrix iadl=(temp\temp1[1,1]\temp1[1,2]\temp1[1,3]\temp1[1,4]\temp1[1,5]\temp1[1,6]\temp1[1,7]\temp1[1,8]\temp1[1,9]\temp1[1,10]\temp1[1,11]\temp1[1,12]\temp1[1,13]\temp1[1,14]\temp1[1,15]\temp1[1,16]\temp1[1,17]\.\.\temp1[1,18]) matrix drop temp temp1 matrix functional=(mortality, nursingh, iadl, adl) matrix colnames functional = "mortality""nursing home""iadl""adl" matrix rownames functional = "mean""black" "hispanic" "<high school" "college" "male" "heart disease" "stroke" "cancer" "hypertension" "diabetes" "lung disease" "adl=1" "adl=2" "adl>=3" "smoke" "widowed" "single" "work" matlist functional, lines(oneline) twidth(15) \end{Statacode} \pagebreak \section*{\centering Marginal Effects} \subsection*{\centering Predicted Medical Costs} \begin{Statacode} estimates table Total_MCBS Total_MEPS , star(.05 .01 .001) keep(black hispan hsless college male hearte stroke cancre hibpe diabe lunge adl3p widowed single) sty(columns) \end{Statacode} \pagebreak \section*{\centering Marginal Effects} \subsection*{\centering Program Participation} \begin{Statacode} qui estimates restore SS_claim margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fwidowed fsingle fwork) noesample matrix temp1=r(b) matrix ss=(temp,temp1) matrix ss=ss' matrix drop temp temp1 qui estimates restore SSI_claim margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fwidowed fsingle fwork) noesample matrix temp1=r(b) matrix ssi=(temp,temp1) matrix ssi=ssi' qui 
estimates restore DI_claim margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fwidowed fsingle fwork) noesample matrix temp1=r(b) matrix diclaim=(temp,temp1) matrix diclaim=diclaim' matrix drop temp temp1 matrix programp=(ss, ssi, diclaim) matrix colnames programp = "ss claim""ssi claim""di claim" matrix rownames programp = "mean""black" "hispanic" "<high school" "college" "male" "heart disease" "stroke" "cancer" "hypertension" "diabetes" "lung disease" "adl=1" "adl=2" "adl>=3" "widowed" "single" "work" matlist programp, lines(oneline) twidth(15) \end{Statacode} \pagebreak \section*{\centering Marginal Effects} \subsection*{\centering Labor Outcomes} \begin{Statacode} qui estimates restore Working margins, noesample matrix temp=r(b) margins, dydx(black hispan hsless college male fhearte fstroke fcancre fhibpe fdiabe flunge fadl1 fadl2 fadl3p fwidowed fsingle fwork) noesample matrix temp1=r(b) matrix working=(temp,temp1) matrix working=working' matrix labor=(working) matrix colnames labor = "working" matrix rownames labor = "mean""black" "hispanic" "<high school" "college" "male" "heart disease" "cancer" "hypertension" "diabetes" "lung disease" "adl=1" "adl=2" "adl>=3" "smoke" "widowed" "single" "work" matlist labor, lines(oneline) twidth(15) \end{Statacode} \end{document}
{ "alphanum_fraction": 0.7780112045, "avg_line_length": 38.8043478261, "ext": "tex", "hexsha": "870c23090cd7e23ea805f7076f349abeb1923bf2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7db846a17f3c57e98b619d7a9c5860d3a71ccc1c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ld-archer/E_FEM", "max_forks_repo_path": "FEM_Stata/Estimation/estimate_summary-swv.tex", "max_issues_count": 39, "max_issues_repo_head_hexsha": "7db846a17f3c57e98b619d7a9c5860d3a71ccc1c", "max_issues_repo_issues_event_max_datetime": "2022-03-21T15:32:18.000Z", "max_issues_repo_issues_event_min_datetime": "2019-11-22T10:39:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ld-archer/E_FEM", "max_issues_repo_path": "FEM_Stata/Estimation/estimate_summary-swv.tex", "max_line_length": 237, "max_stars_count": 2, "max_stars_repo_head_hexsha": "7db846a17f3c57e98b619d7a9c5860d3a71ccc1c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ld-archer/E_FEM", "max_stars_repo_path": "FEM_Stata/Estimation/estimate_summary-swv.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-10T10:32:02.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-22T10:59:33.000Z", "num_tokens": 5167, "size": 14280 }
\setfolder{math01}
\section{Math Formulas}
\example{Math Formulas in Normal Text}{simple_inline.tex}
\example{Math Formulas as Separate Figures}{simple_figure.tex}
{ "alphanum_fraction": 0.8154761905, "avg_line_length": 33.6, "ext": "tex", "hexsha": "8362adf334f986eb1a00f8761303acc993211183", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "62fff38de6846d3cd513b3d8f14d60c3e80961ee", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "thors/latex", "max_forks_repo_path": "template/manual/example/math01/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "62fff38de6846d3cd513b3d8f14d60c3e80961ee", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "thors/latex", "max_issues_repo_path": "template/manual/example/math01/main.tex", "max_line_length": 64, "max_stars_count": null, "max_stars_repo_head_hexsha": "62fff38de6846d3cd513b3d8f14d60c3e80961ee", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "thors/latex", "max_stars_repo_path": "template/manual/example/math01/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 46, "size": 168 }
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[paper=a4paper, left=25mm, right=25mm, top=30mm, bottom=30mm]{geometry}
\usepackage{setspace}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{fancyhdr}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{mathtools}
\pagestyle{fancy}
\renewcommand{\headrulewidth}{0pt}
\addtolength{\headheight}{11pt}
\title{\flushleft{\textbf{Problem Sheet I}}}
\date{}
\setcounter{secnumdepth}{0}
\newcommand\balancedeq{\stackrel{\mathclap{\tiny\mbox{balanced}}}{=}}
\begin{document}
\maketitle
\thispagestyle{fancy}
\subsubsection{3.1 LDA Derivation from the Least Squares Error}
We are looking for the global minimum of
\begin{equation} \Delta: \mathbb{R}^{d+1} \rightarrow \mathbb{R} \quad (\mathbf{w},b) \mapsto \sum_{i=1}^N (\mathbf{w^T x_i} + b - y_i)^2 = \sum_{i=1}^N (\mathbf{x_i^T w} + b - y_i)^2 \end{equation}
First, we take a closer look at the summands. Let $i \in \{1,...,N\}$.\\ The function defined by $f(x):=x^2$ is in $C^\infty(\mathbb{R})$ with derivative $f'(x) = 2x$. For the function
\begin{equation} g_i: \mathbb{R}^{d+1} \rightarrow \mathbb{R} \quad (\mathbf{w},b) \mapsto \mathbf{w^T x_i} + b - y_i \end{equation}
the following holds for $k \in \{1,...,d\}$, $\mathbf{w} \in \mathbb{R}^d$, $b \in \mathbb{R}$:
\begin{equation} \partial_{w_k} g_i(\mathbf{w}, b) = \partial_{w_k} \left( \sum_{j=1}^d x_{ij} w_j + b - y_i \right) = \sum_{j=1}^d x_{ij} \delta_{jk} = x_{ik} \end{equation}
\begin{equation} \partial_b g_i(\mathbf{w}, b) = 1 \end{equation}
The partial derivatives are continuous, thus $g_i \in C^1(\mathbb{R}^{d+1})$. As a composition/sum of $C^1$ functions, $\Delta$ is a $C^1$ function as well, and
\begin{align*} D\Delta(\mathbf{w},b) = D\left(\sum_{i=1}^N f \circ g_i \right)(\mathbf{w},b) & = \sum_{i=1}^N Df(g_i(\mathbf{w}, b)) \cdot Dg_i(\mathbf{w}, b) \\ & = \sum_{i=1}^N 2g_i(\mathbf{w}, b) \cdot (\nabla_{\mathbf{w}} g_i(\mathbf{w}, b)^T, \partial_b g_i(\mathbf{w},b)) \\ & = \sum_{i=1}^N 2(\mathbf{x_i^Tw} + b - y_i) (\mathbf{x_i^T}, 1)\\ \end{align*}
\begin{align*} \Rightarrow \nabla_{(\mathbf{w},b)} \Delta(\mathbf{w},b) = 2 \sum_{i=1}^N (\mathbf{x_i^T w}+b-y_i) \left( \begin{array}{c} \mathbf{x_i}\\ 1\\ \end{array} \right) = \left( \begin{array}{c} 2 \sum_{i=1}^N (\mathbf{x_i^T w} + b - y_i) \mathbf{x_i}\\ 2 \sum_{i=1}^N (\mathbf{x_i^T w} + b - y_i)\\ \end{array} \right) \end{align*}
Because $\Delta \in C^1(\mathbb{R}^{d+1})$ and a global minimum on an open set is also a local minimum, it holds for the minimizer $(\mathbf{\hat{w}}, \hat{b})$:
\begin{align*} \nabla_{(\mathbf{w}, b)} \Delta(\mathbf{\hat{w}, \hat{b}}) = 0 \end{align*}
This implies
\begin{align*} \partial_b \Delta(\mathbf{\hat{w}, \hat{b}}) = 0 & \Rightarrow 0 = \sum_{i=1}^N(\mathbf{x_i^T \hat{w}} + \hat{b} - y_i)\\ & \Rightarrow 0 = N\hat{b} + \sum_{i=1}^N(\mathbf{x_i^T \hat{w}} - y_i) \\ & \Rightarrow \hat{b} = \frac{1}{N} \sum_{i=1}^N(-\mathbf{x_i^T \hat{w}} + y_i) = \frac{-1}{N} \sum_{i=1}^N \mathbf{x_i^T \hat{w}} + \frac{1}{N}\left(\sum_{i: y_i=1} 1 - \sum_{i: y_i=-1} 1\right) \quad \balancedeq \quad -\frac{1}{N} \sum_{i=1}^N \mathbf{x_i^T \hat{w}} \end{align*}
Furthermore, $\nabla_{\mathbf{w}} \Delta(\mathbf{\hat{w}, \hat{b}}) = 0$ implies
\begin{align*} 0 = \sum_{i=1}^N(\mathbf{x_i^T\hat{w}}+\hat{b}-y_i)\mathbf{x_i} \end{align*}
We insert our result for $\hat{b}$ into this equation:
\begin{align*} 0 & = \sum_{i=1}^N \left[ \mathbf{x_i^T \hat{w}} - \frac{1}{N} \sum_{j=1}^N \mathbf{x_j^T \hat{w}} - y_i \right] \mathbf{x_i}\\ \Rightarrow & \underbrace{\frac{1}{N} \sum_{i=1}^N y_i \mathbf{x_i}}_\text{a)} = \underbrace{-\frac{1}{N} \sum_{i=1}^N
\frac{1}{N} \sum_{j=1}^N (\mathbf{x_j^T\hat{w}}) \mathbf{x_i}}_\text{b)} + \underbrace{\frac{1}{N} \sum_{i=1}^N (\mathbf{x_i^T \hat{w}})\mathbf{x_i}}_\text{c)}\\ \end{align*} We will separately discuss the three terms a), b) and c):\\ \ \\ \textbf{a)} \begin{align*} \frac{1}{N} \sum_{i=1}^N y_i \mathbf{x_i} & = \frac{1}{N} \sum_{i:y_i=1}\mathbf{x_i} - \frac{1}{N} \sum_{i:y_i=-1}\mathbf{x_i}\\ & = \frac{1}{2} \left( \frac{1}{N/2} \sum_{i:y_i=1} \mathbf{x_i} - \frac{1}{N/2} \sum_{i:y_i=-1} \mathbf{x_i} \right)\\ & \balancedeq \quad \frac{1}{2} \left( \frac{1}{N_1} \sum_{i:y_i=1} \mathbf{x_i} - \frac{1}{N_2} \sum_{i:y_i=-1} \mathbf{x_i} \right)\\ & = (\mathbf{\mu_1} - \mathbf{\mu_ {-1}})/2\\ \end{align*} \ \\ \textbf{b)} \begin{align*} -\frac{1}{N} \sum_{i=1} \frac{1}{N} \sum_{j=1}^N (\mathbf{x_j^T \hat{w}}) \mathbf{x_i} & = \left[ - \frac{1}{N} \sum_{i=1}^N \mathbf{x_i}\right] \left[ \left( \frac{1}{N} \sum_{j=1}^N \mathbf{x_j^T} \right) \mathbf{\hat{w}} \right] \\ & = -\left[ \left( \frac{1}{N} \sum_{i=1}^N \mathbf{x_i} \right) \left( \frac{1}{N} \sum_{j=1}^N \mathbf{x_j^T} \right) \right] \mathbf{\hat{w}}\\ & = -\left( \left[ \left( \frac{1}{N} \sum_{i=1}^N \mathbf{x_i}y_i \right) + \left( \frac{2}{N} \sum_{i:y_i=1} \mathbf{x_i}y_i \right) \right] \left[ \left( \frac{1}{N} \sum_{j=1}^N \mathbf{x_j^T}y_j \right) + \left( \frac{2}{N} \sum_{j:y_j=1} \mathbf{x_j^T}y_j \right) \right] \right)\\ & = - (\frac{1}{2} (\mathbf{\mu_1} - \mathbf{\mu_{-1}}) + \mathbf{\mu_{-1}}) (\frac{1}{2} (\mathbf{\mu_1} - \mathbf{\mu_{-1}})^T + \mathbf{\mu_{-1}}^T) \mathbf{\hat{w}}\\ & = - \left[ \frac{1}{4} (\mathbf{\mu_1} - \mathbf{\mu_{-1}})(\mathbf{\mu_1} - \mathbf{\mu_{-1}})^T + (\mathbf{\mu_1} - \mathbf{\mu_{-1}}) \mathbf{\mu_{-1}}^T \right] \mathbf{\hat{w}}\\ & = - \left[ \frac{S_B}{4} + (\mathbf{\mu_1} - \mathbf{\mu_{-1}}) \mathbf{\mu_{-1}}^T \right] \mathbf{\hat{w}} \end{align*} \ \\ \textbf{c)} \begin{align*} \frac{1}{N} \sum_{i=1}^N (\mathbf{x_i^T \hat{w}}) \mathbf{x_i} &= \frac{1}{N} \sum_{i=1}^N (\mathbf{x_i x_i^T}) \mathbf{\hat{w}}\\ &= \frac{1}{N} \sum_{i=1}^N (\mathbf{x_i -\mu_{y_i} + \mu_{y_i}}) (\mathbf{x_i -\mu_{y_i} + \mu_{y_i}})^T \mathbf{\hat{w}}\\ & = \left[ \frac{1}{N} \sum_{i=1}^N (\mathbf{x_i -\mu_{y_i}}) (\mathbf{x_i -\mu_{y_i}})^T + \frac{2}{N} \sum_{i=1}^N (\mathbf{x_i -\mu_{y_i}}) \mathbf{\mu_{y_i}}^T + \frac{1}{N} \sum_{i=1}^N \mathbf{ \mu_{y_i}} \mathbf{\mu_{y_i}}^T \right] \mathbf{\hat{w}}\\ & = \left[ S_W + \frac{1}{N/2} \sum_{i=1}^N \mathbf{x_i \mu_{y_i}^T} - \frac{2}{N} \sum_{i=1}^N \mathbf{\mu_{y_i} \mu_{y_i}^T} + \frac{1}{N} \sum_{i=1}^N \mathbf{\mu_{y_i} \mu_{y_i}^T} \right]\mathbf{\hat{w}}\\ & = \left[ S_W + \underbrace{\frac{1}{N/2} \sum_{i:y_i=1} \mathbf{x_i \mu_{y_i}^T}}_{=\mathbf{\mu_1 \mu_{1}^T}} + \underbrace{\frac{1}{N/2} \sum_{i:y_i=-1} \mathbf{x_i \mu_{y_{-1}}^T}}_{=\mathbf{\mu_{-1} \mu_{-1}^T}} - \mathbf{\mu_1 \mu_1^T} - \mathbf{\mu_{-1} \mu_{-1}^T} + \frac{1}{2}\mathbf{\mu_1 \mu_1^T} + \frac{1}{2} \mathbf{\mu_{-1} \mu_{-1}^T} \right]\mathbf{\hat{w}}\\ & = \left[ S_W + \frac{1}{2} (\mathbf{\mu_1 - \mu_{-1}})(\mathbf{\mu_1 - \mu_{-1}})^T + (\mathbf{\mu_{1}-\mu_{-1}})\mathbf{\mu_{1}}^T \right] \mathbf{\hat{w}}\\ & = \left[ S_W + \frac{S_B}{2} + (\mathbf{\mu_{1}-\mu_{-1}})\mathbf{\mu_{1}}^T \right] \mathbf{\hat{w}}\\ \end{align*} Now we insert these results into the equation from last page. 
\begin{align*} (\mathbf{\mu_1 - \mu_{-1}})/2 = \left[ - \frac{S_B}{4} - (\mathbf{\mu_{1}-\mu_{-1}})\mathbf{\mu_{1}}^T + S_W + \frac{S_B}{2} + (\mathbf{\mu_{1}-\mu_{-1}})\mathbf{\mu_{1}}^T \right] \mathbf{\hat{w}} = \left[ S_W + \frac{S_B}{4} \right] \mathbf{\hat{w}} \end{align*}
This is equivalent to
\begin{equation} S_W \mathbf{\hat{w}} = \frac{\mathbf{\mu_1 - \mu_{-1}}}{2} - \frac{S_B}{4} \mathbf{\hat{w}} \end{equation}
Because $\mathbb{R}^d$ is a finite dimensional vector space, we can choose $v_2, ...,v_d \in \mathbb{R}^d$ such that $\{(\mu_1-\mu_{-1}),v_2, ..., v_d\}$ is an orthogonal basis of $\mathbb{R}^d$. Thus, we can write $\mathbf{\hat{w}} = \lambda_1 (\mu_1-\mu_{-1}) + \sum_{i=2}^d \lambda_i v_i$ for some $\lambda_1, ...,\lambda_d \in \mathbb{R}$. This way we can show:
\begin{align*} \frac{S_B}{4} \mathbf{\hat{w}} &= \frac{1}{4} (\mu_1-\mu_{-1})(\mu_1-\mu_{-1})^T \left( \lambda_1 (\mu_1-\mu_{-1}) + \sum_{i=2}^d \lambda_i v_i \right)\\ &= \frac{1}{4} \lambda_1 (\mu_1-\mu_{-1})(\mu_1-\mu_{-1})^T(\mu_1-\mu_{-1})\\ &= \frac{1}{4} \lambda_1 (\mu_1-\mu_{-1})||\mu_1-\mu_{-1}||^2 \end{align*}
The second equality holds because the scalar product of $\mu_1-\mu_{-1}$ and $v_i$ vanishes for all $i \in \{2, ...,d\}$ (orthogonality). Thus, with the equality from above and $\tau := \frac{1}{2} - \frac{1}{4}\lambda_1 ||\mu_1-\mu_{-1}||^2$, we obtain:
\begin{align*} \exists \tau \in \mathbb{R}: S_W\mathbf{\hat{w}} = \tau (\mu_1-\mu_{-1}) \end{align*}
Under the assumption that $S_W$ is invertible (which is true if the $(x_i)$ are not located on a common ($d-1$)-dimensional hyperplane) we get:
\begin{align*} \exists \tau \in \mathbb{R}: \mathbf{\hat{w}} = \tau S_W^{-1} (\mu_1-\mu_{-1}) \end{align*}
\end{document}
{ "alphanum_fraction": 0.5962360005, "avg_line_length": 63.6838235294, "ext": "tex", "hexsha": "716ce14ec361ca54449239fb2ff41fec41b27853", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cd34194464d3b06cc23b4b91523684f0f01a92f0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Osburg/Fundamentals-of-Machine-Learning", "max_forks_repo_path": "ex03/task3/Aufgabe3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cd34194464d3b06cc23b4b91523684f0f01a92f0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Osburg/Fundamentals-of-Machine-Learning", "max_issues_repo_path": "ex03/task3/Aufgabe3.tex", "max_line_length": 378, "max_stars_count": null, "max_stars_repo_head_hexsha": "cd34194464d3b06cc23b4b91523684f0f01a92f0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Osburg/Fundamentals-of-Machine-Learning", "max_stars_repo_path": "ex03/task3/Aufgabe3.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4115, "size": 8661 }
%!TEX TS-program = pdflatexmk \documentclass[cleanfoot]{asme2ej} \usepackage[]{graphicx} \graphicspath{ {images/} } \usepackage{enumitem} \usepackage{amsmath,amssymb} \usepackage{textcomp} \usepackage[urlcolor=blue,linkcolor=red,colorlinks=true]{hyperref} \usepackage[autostyle, english=american]{csquotes} \MakeOuterQuote{"} \title{Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) \\ Interface Control Document} %%% first author \author{Daniel Garrett, Christian Delacroix, and Dmitry Savransky \affiliation{ Sibley School of Mechanical and Aerospace Engineering\\ Cornell University\\ Ithaca, NY 14853 } } \def\mf{\mathbf} \def\mb{\mathbb} \def\mc{\mathcal} \newcommand{\R}{\mathbf{r}} \newcommand{\bc}{\mathbf{b}} \newcommand{\mfbar}[1]{\mf{\bar{#1}}} \newcommand{\mfhat}[1]{\mf{\hat{#1}}} \newcommand{\bmu}{\boldsymbol{\mu}} \newcommand{\blam}{\boldsymbol{\Lambda}} \newcommand{\refeq}[1]{Equation (\ref{#1})} \newcommand{\reftable}[1]{Table \ref{#1}} \newcommand{\refch}[1]{Chapter \ref{#1}} \newcommand{\reffig}[1]{Figure \ref{#1}} \newcommand{\refcode}[1]{Listing \ref{#1}} \newcommand{\intd}[1]{\ensuremath{\,\mathrm{d}#1}} \newcommand{\leftexp}[2]{{\vphantom{#2}}^{#1}\!{#2}} \newcommand{\leftsub}[2]{{\vphantom{#2}}_{#1}\!{#2}} \newcommand{\fddt}[1]{\ensuremath{\leftexp{\mathcal{#1}}{\frac{\mathrm{d}}{\mathrm{d}t}}}} \newcommand{\fdddt}[1]{\ensuremath{\leftexp{\mathcal{#1}}{\frac{\mathrm{d}^2}{\mathrm{d}t^2}}}} \newcommand{\omegarot}[2]{\ensuremath{\leftexp{\mathcal{#1}}{\boldsymbol{\omega}}^{\mathcal{#2}}}} \begin{document} \maketitle %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{abstract} {\it This document describes the extensible, modular, open source software framework EXOSIMS. EXOSIMS creates end-to-end simulations of space-based exoplanet imaging missions using stand-alone software modules. The input/output interfaces of each module and interactions of modules with each other are presented to give guidance on mission specific modifications to the EXOSIMS framework. Last Update: \today} \end{abstract} \tableofcontents %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{nomenclature} \entry{EXOSIMS}{Exoplanet Open-Source Imaging Mission Simulator} \entry{ICD}{Interface Control Document} \entry{MJD}{Modified Julian Day} \end{nomenclature} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % INTRODUCTION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} Building confidence in a mission concept's ability to achieve its science goals is always desirable. Unfortunately, accurately modeling the science yield of an exoplanet imager can be almost as complicated as designing the mission. It is challenging to compare science simulation results and systematically test the effects of changing one aspect of the instrument or mission design. EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) addresses this problem by generating ensembles of mission simulations for exoplanet direct imaging missions to estimate science yields. It is designed to allow systematic exploration of exoplanet imaging mission science yields. It consists of stand-alone modules written in Python which may be modified without requiring modifications to other portions of the code. This allows EXOSIMS to be easily used to investigate new designs for instruments, observatories, or overall mission designs independently. 
This document describes the required input/output interfaces for the stand-alone modules to enable this flexibility. \subsection{Purpose and Scope} % Rework this section This Interface Control Document (ICD) provides an overview of the software framework of EXOSIMS and some details on its component parts. As the software is intended to be highly reconfigurable, operational aspects of the code are emphasized over implementational details. Specific examples are taken from the coronagraphic instrument under development for WFIRST-AFTA. The data inputs and outputs of each module are described. Following these guidelines will allow the code to be updated to accommodate new mission designs. This ICD defines the input/output of each module and the interfaces between modules of the code. This document is intended to guide mission planners and instrument designers in the development of specific modules for new mission designs. %\subsection{Glossary} %This section will contain definition of terms used throughout the document if needed. \section{Overview} The terminology used to describe the software implementation is loosely based upon object-oriented programing (OOP) terminology, as implemented by the Python language, in which EXOSIMS is built. The term module can refer to the object class prototype representing the abstracted functionality of one piece of the software, an implementation of this object class which inherits the attributes and methods of the prototype, or an instance of this class. Input/output definitions of modules refer to the class prototype. Implemented modules refer to the inherited class definition. Passing modules (or their outputs) means the instantiation of the inherited object class being used in a given simulation. Relying on strict inheritance for all implemented module classes provides an automated error and consistency-checking mechanism. The outputs of a given object instance may be compared to the outputs of the prototype. It is trivial to pre-check whether a given module implementation will work within the larger framework, and this approach allows for flexibility and adaptability. % \begin{figure}[ht] % \begin{center} % \begin{tabular}{c} % \includegraphics[width=0.9\textwidth]{codeflow5} % \end{tabular} % \end{center} % \caption{EXOSIMS modules. Each box represents a component software module that interacts with other modules as indicated by the arrows. The simulation modules pass all input modules along with their own output. Thus, the Survey Ensemble module has access to all of the input modules and all of the upstream simulation modules.} % \label{figure_framework} % \end{figure} The overall framework of EXOSIMS is depicted in \reffig{fig:instantiation_tree} which shows all of the component software modules in the order in which they are instantiated in normal operation. The modules include the Optical System, Star Catalog, Planet Population, Observatory, Planet Physical Model, Time Keeping, Zodiacal Light, Background Sources, and Post-Processing modules and Target List, Simulated Universe, Survey Simulation, and Survey Ensemble modules. Objects of all module classes can be instantiated independently, although most modules require the instantiation of other modules during their construction. Different implementations of the modules contain specific mission design parameters and physical descriptions of the universe, and will change according to mission and planet population of interest. 
The upstream modules (including Target List, Simulated Universe, Survey Simulation, and Survey Ensemble modules) take information contained in the downstream modules and perform mission simulation tasks. The instantiation of an object of any of these modules requires the instantiation of one or more downstream module objects. Any module may perform any number or kind of calculations using any or all of the input parameters provided. The specific implementations are only constrained by their input and output specification contained in this document. \begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=1\textwidth]{instantiation_tree} \end{tabular} \end{center} \caption{Schematic depiction of the instantiation path of all EXOSIMS modules. The entry point to the backbone is the construction of a MissionSimulation object, which causes the instantiation of all other module objects. All objects are instantiated in the order shown here, with SurveySimulation and SurveyEnsemble constructed last. The arrows indicate calls to the object constructor, and object references to each module are always passed up directly to the top calling module, so that at the end of construction, the MissionSimulation object has direct access to all other modules as its attributes.} \label{fig:instantiation_tree} \end{figure} \begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=0.75\textwidth]{starcatalog_flowdown} \end{tabular} \end{center} \caption{Schematic of a sample implementation for the three module layers for the Star Catalog module. The Star Catalog prototype (top row) is immutable, specifies the input/output structure of the module along with all common functionality, and is inherited by all Star Catalog class implementations (middle row). In this case, two different catalog classes are shown: one that reads in data from a SIMBAD catalog dump, and one which contains only information about a subset of known radial velocity targets. The object used in the simulation (bottom row) is an instance of one of these classes, and can be used in exactly the same way in the rest of the code due to the common input/output scheme.} \label{fig:starcatalog_flowdown} \end{figure} \begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=0.75\textwidth]{observatory_flowdown} \end{tabular} \end{center} \caption{Schematic of a sample implementation for the three module layers for the Observatory module. The Observatory prototype (top row) is immutable, specifies the input/output structure of the module along with all common functionality, and is inherited by all Observatory class implementations (middle row). In this case, two different observatory classes are shown that differ only in the definition of the observatory orbit. Therefore, the second implementation inherits the first (rather than directly inheriting the prototype) and overloads only the orbit method. The object used in the simulation (bottom row) is an instance of one of these classes, and can be used in exactly the same way in the rest of the code due to the common input/output scheme.} \label{fig:observatory_flowdown} \end{figure} Figures \ref{fig:starcatalog_flowdown} and \ref{fig:observatory_flowdown} show schematic representations of the three different aspects of a module, using the Star Catalog and Observatory modules as examples, respectively. 
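To make the layering shown in \reffig{fig:observatory_flowdown} concrete, the following sketch illustrates an implementation that inherits another implementation and overloads only the orbit method. The class names and method bodies here are illustrative only and are not part of the EXOSIMS distribution; the single time argument follows the orbit method description in \S\ref{sec:observatory}.
\begin{verbatim}
from EXOSIMS.Prototypes.Observatory import Observatory

class GeoObservatory(Observatory):
    # First implementation: inherits the prototype directly.
    def orbit(self, currentTime):
        # return the observatory position for a geosynchronous orbit
        ...

class HaloObservatory(GeoObservatory):
    # Second implementation: inherits the first implementation and
    # overloads only the orbit method for an L2 halo orbit.
    def orbit(self, currentTime):
        ...
\end{verbatim}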
Every module has a specific prototype that sets the input/output structure of the module and encodes any common functionality for all module class implementations. The various implementations inherit the prototype and add/overload any attributes and methods required for their particular tasks, limited only by the preset input/output scheme. Finally, in the course of running a simulation, an object is generated for each module class selected for that simulation. The generated objects can be used in exactly the same way in the downstream code, regardless of what implementation they are instances of, due to the strict interface defined in the class prototypes.

For lower level (downstream) modules, the input specification is much more loosely defined than the output specification, as different implementations may draw data from a wide variety of sources. For example, the star catalog may be implemented as reading values from a static file on disk, or may represent an active connection to a local or remote database. The output specification for these modules, however, as well as both the input and output for the upstream modules, is entirely fixed so as to allow for generic use of all module objects in the simulation.

\section{Global Specifications}
Common references (units, frames of reference, etc.) are required to ensure interoperability between the modules of EXOSIMS. All of the references listed below must be followed.
\begin{description}
\item[Common Epoch] \hfill \\ J2000
\item[Common Reference Frame] \hfill \\ Heliocentric Equatorial (HE)
\end{description}

\subsection{Python Packages}
EXOSIMS is an open source platform. As such, packages and modules may be imported and used for calculations within any of the stand-alone modules. The WFIRST-specific implementation of EXOSIMS makes use of the following commonly used Python packages:
\texttt{
\begin{itemize}
\item astropy
\begin{itemize}
\item astropy.constants
\item astropy.coordinates
\item astropy.time
\item astropy.units
\end{itemize}
\item copy
\item importlib
\item numpy
\begin{itemize}
\item numpy.linalg
\end{itemize}
\item os
\begin{itemize}
\item os.path
\end{itemize}
\item pickle/cPickle
\item scipy
\begin{itemize}
\item scipy.io
\item scipy.special
\item scipy.interpolate
\end{itemize}
\item jplephem (\emph{optional})
\end{itemize}
}
Additionally, while not required for running the survey simulation, \verb+matplotlib+ is used for visualization of the results.

\subsection{Coding Conventions}
In order to allow for flexibility in using alternate or user-generated module implementations, the only requirement on any module is that it inherits (either directly or by inheriting another module implementation that inherits the prototype) the appropriate prototype. It is similarly expected (although not required) that the prototype constructor will be called from the constructor of the newly implemented class. An example of an Optical System module implementation follows:
\begin{verbatim}
from EXOSIMS.Prototypes.OpticalSystem import OpticalSystem

class ExampleOpticalSystem(OpticalSystem):
    def __init__(self, **specs):
        OpticalSystem.__init__(self, **specs)
        ...
\end{verbatim} \emph{Note that the filename must match the class name for all modules.} \subsubsection{Module Type} It is always possible to check whether a module is an instance of a given prototype, for example: \begin{verbatim} isinstance(obj,EXOSIMS.Prototypes.Observatory.Observatory) \end{verbatim} However, it can be tedious to look up all of a given object's base classes so, for convenience, every prototype will provide a private variable \verb+_modtype+, which will always return the name of the prototype and should not be overwritten by any module code. Thus, if the above example evaluates as \verb+True+, \verb+obj._modtype+ will return \verb+Observatory+. \subsubsection{Callable Attributes} Certain module attributes must be represented in a way that allows them to be parametrized by other values. For example, the instrument throughput and contrast are functions of both the wavelength and the angular separation, and so must be encodable as such in the optical system module. To accommodate this, as well as simpler descriptions where these parameters may be treated as static values, these and other attributes are defined as `callable'. This means that they must be set as objects that can be called in the normal Python fashion, i.e., \verb+object(arg1,arg2,...)+. These objects can be function definitions defined in the code, or imported from other modules. They can be \href{https://docs.python.org/2/reference/expressions.html#lambda}{lambda expressions} defined inline in the code. Or they can be callable object instances, such as the various \href{http://docs.scipy.org/doc/scipy/reference/interpolate.html}{scipy interpolants}. In cases where the description is just a single value, these attributes can be defined as dummy functions that always return the same value, for example: \begin{verbatim} def throughput(wavelength,angle): return 0.5 \end{verbatim} or even more simply: \begin{verbatim} throughput = lambda wavelength,angle: 0.5 \end{verbatim} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % BACKBONE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Backbone} By default, the simulation execution will be performed via the backbone. This will consist of a limited set of functions that will primarily be tasked with parsing the input specification described below, and then creating the specified instances of each of the framework modules, detailed in \S\ref{sec:modules}. The backbone functionality will primarily be implemented in the MissionSimulation class, whose constructor will take the input script file (\S\ref{sec:inputspec}) and generate instances of all module objects, including the SurveySimulation (\S\ref{sec:surveysim}) and SurveyEnsemble modules, which will contain the functions to run the survey simulations. Any mission-specific execution variations will be introduced by method overloading in the inherited survey simulation implementation. \reffig{fig:instantiation_tree} provides a graphical description of the instantiation order of all module objects. A simulation specification is a single JSON-formatted (\url{http://json.org/}) file that encodes user-settable parameters and module names. The backbone will contain a reference specification with \emph{all} parameters and modules set via defaults in the constructors of each of the modules. 
In the initial parsing of the user-supplied specification, it will be merged with the reference specification such that any fields not set by the user will be assigned to their reference (default) values. Each instantiated module object will contain a dictionary called \verb+_outspec+; taken together, these dictionaries form the full specification for the current run (as defined by the loaded modules). This specification will be written out to a JSON file associated with the output of every run. \emph{Any specification added by a user implementation of any module must also be added to the \_outspec dictionary}. The assembly of the full output specification is provided by the MissionSimulation method \verb+genOutSpec+.

The backbone will also contain a specification parser that will check specification files for internal consistency. For example, if modules carry mutual dependencies, the specification parser will return an error if these are not met for a given specification. Similarly, if modules are selected with optional top level inputs, warnings will be generated if these are not set in the same specification file. In addition to the specification parser, the backbone will contain a method for comparing two specification files and returning the difference between them. Assuming that the files specify all user-settable values, this will be equivalent to simply performing a \verb+diff+ operation on any POSIX system. The backbone diff function will add the capability to automatically fill in unset values with their defaults. For every simulation (or ensemble), an output specification will be written to disk along with the simulation results, with all of the defaults that were used filled in.
%The backbone will also contain an interactive function to help users generate specification files via a series of questions.

\subsection{Specification Format}\label{sec:inputspec}
The JSON specification file will contain a series of objects with members enumerating various user-settable parameters, top-level members for universal settings (such as the mission lifetime), and arrays of objects for multiple related specifications, such as starlight suppression systems and science instruments. The specification file must contain a \verb+modules+ dictionary listing the module names (or paths on disk to user-implemented classes) for all modules.
\begin{verbatim}
{
  "universalParam1": value,
  "universalParam2": value,
  ...
"starlightSuppressionSystems": [ { "starlightSuppressionSystemNumber": 1, "type": "external", "detectionTimeMultiplier": value, "characterizationTimeMultiplier": value, "occulterDiameter": value, "NocculterDistances": 2, "occulterDistances: [ { "occulterDistanceNumber": 1, "occulterDistance": value, "occulterBlueEdge": value, "occulterRedEdge": value, "IWA": value, "OWA": value, "PSF": "/data/mdo1_psf.fits", "throughput": "/data/mdo1_thru.fits", "contrast": "/data1/mdo1_contrast.fits" }, { "occulterDistanceNumber": 2, "occulterDistance": value, "occulterBlueEdge": value, "occulterRedEdge": value, "IWA": value, "OWA": value, "PSF": "/data/mdo1_psf.fits", "throughput": "/data/mdo1_thru.fits", "contrast": "/data1/mdo1_contrast.fits" } ], "occulterWetMass": value, "occulterDryMass": value, }, { "starlightSuppressionSystemNumber": 2, "type": "internal", "IWA": value, "OWA": value, "PSF": "/data/coron1_psf.fits", "throughput": "/data/coron1_thru.fits", "contrast": "/data1/coron1_contrast.fits", "detectionTimeMultiplier": value, "characterizationTimeMultiplier": value, "opticaloh": value } ], "scienceInstruments": [ { "scienceInstrumentNumber": 1, "type": "imager-EMCCD", "QE": 0.88, "darkCurrent": 9e-5, "CIC": 0.0013, "readNoise": 16, "texp": 1000, "pixelPitch": 13e-6, "focalLength": 240, "ENF": 1.414, "G_EM": 500 } { "scienceInstrumentNumber": 2, "type": "IFS-CCD", "QE": 0.88, "darkCurrent": 9e-5, "CIC": 0.0013, "readNoise": 3, "texp": 1000, "Rspec": 70.0, } ], modules: { "PlanetPopulation": "HZEarthTwins", "StarCatalog": "exocat3", "OpticalSystem": "hybridOpticalSystem1", "ZodiacalLight": "10xSolZodi", "BackgroundSources": "besanconModel", "PlanetPhysicalModel": "fortneyPlanets", "Observatory": "WFIRSTGeo", "TimeKeeping": "UTCtime", "PostProcessing": "KLIPpost", "Completeness": "BrownCompleteness", "TargetList": "WFIRSTtargets", "SimulatedUniverse": "simUniverse1", "SurveySimulation": "backbone1", "SurveyEnsemble": "localIpythonEnsemble" } } \end{verbatim} \subsection{Modules Specification} The final array in the input specification (\verb+modules+) is a list of all the modules that define a particular simulation. This is the only part of the specification that will not be filled in by default if a value is missing - each module must be explicitly specified. The order of the modules in the list is arbitrary, so long as they are all present. If the module implementations are in the appropriate subfolder in the EXOSIMS tree, then they can be specified by the module name. However, if you wish to use an implemented module outside of the EXOSIMS directory, then you need to specify it via its full path in the input specification. \emph{All modules, regardless of where they are stored on disk must inherit the appropriate prototype.} \subsection{Universal Parameters} These parameters apply to all simulations, and are described in detail in their specific module definitions: \begin{itemize}[leftmargin=1in,font={\ttfamily}] \item[missionLife] (float) The total mission lifetime in $ years $. When the mission time is equal or greater to this value, the mission simulation stops. \item[missionPortion] (float) The portion of the mission dedicated to exoplanet science, given as a value between 0 and 1. The mission simulation stops when the total integration time plus observation overhead time is equal to the missionLife $\times$ missionPortion. \item[keepStarCatalog] (boolean) Boolean representing whether to delete the star catalog after assembling the target list. 
If true, object reference will be available from TargetList object. \item[minComp] (float) Minimum completeness value for inclusion in target list. \item[lam] (float) Detection central wavelength in $ nm $. \item[deltaLam] (float) Detection bandwidth in $ nm $. \item[BW] (float) Detection bandwidth fraction = $\Delta\lambda/\lambda$. \item[specLam] (float) Spectrograph central wavelength in $ nm $. \item[specDeltaLam] (float) Spectrograph bandwidth in $ nm $. \item[specBW] (float) Spectrograph bandwidth fraction = $\Delta\lambda_s/\lambda_s$. \item[obscurFac] (float) Obscuration factor due to secondary mirror and spiders. \item[shapeFac] (float) Telescope aperture shape factor. \item[pupilDiam] (float) Entrance pupil diameter in $m$. \item[pupilArea] (float) Entrance pupil area in $m^2$. \item[IWA] (float) Fundamental Inner Working Angle in $ arcsec $. No planets can ever be observed at smaller separations. \item[OWA] (float) Fundamental Outer Working Angle in $ arcsec $. Set to $ Inf $ for no OWA. JSON values of 0 will be interpreted as $ Inf $. \item[dMagLim] (float) Fundamental limiting $\Delta$mag (difference in magnitude between star and planet). \item[telescopeKeepout] (float) Telescope keepout angle in $ deg $ \item[attenuation] (float) Non-coronagraph attenuation, equal to the throughput of the optical system without the coronagraph elements. \item[intCutoff] (float) Maximum allowed integration time in $ day $. No integrations will be started that would take longer than this value. \item[FAP] (float) Detection false alarm probability \item[MDP] (float) Missed detection probability \item[SNimag] (float) Signal to Noise Ratio for imaging/detection. \item[SNchar] (float) Signal to Noise Ratio for characterization. \item[arange] (float) 1$\times$2 list of semi-major axis range in $ AU $. \item[erange] (float) 1$\times$2 list of eccentricity range. \item[Irange] (float) 1$\times$2 list of inclination range in $ deg $. \item[Orange] (float) 1$\times$2 list of ascension of the ascending node range in $ deg $. \item[wrange] (float) 1$\times$2 list of argument of perigee range in $ deg $. \item[prange] (float) 1$\times$2 list of planetary geometric albedo range. \item[Rprange] (float) 1$\times$2 list of planetary radius range in Earth radii. \item[Mprange] (float) 1$\times$2 list of planetary mass range in Earth masses. \item [scaleOrbits] (boolean) True means planetary orbits are scaled by the square root of stellar luminosity. \item[constrainOrbits] (boolean) True means planetary orbits are constrained to never leave the semi-major axis range (arange). \item[missionStart] (float) Mission start time in $ MJD $. \item[extendedLife] (float) Extended mission time in $ years $. Extended life typically differs from the primary mission in some way---most typically only revisits are allowed \item[settlingTime] (float) Amount of time needed for observatory to settle after a repointing in $ day $. \item[thrust] (float) Occulter slew thrust in $ mN $. \item[slewIsp] (float) Occulter slew specific impulse in $ s $. \item[scMass] (float) Occulter (maneuvering spacecraft) initial wet mass in $ kg $. \item[dryMass] (float) Occulter (maneuvering spacecraft) dry mass in $ kg $. \item[coMass] (float) Telescope (or non-maneuvering spacecraft) mass in $ kg $. \item[skIsp] (float) Specific impulse for station keeping in $ s $. \item[defburnPortion] (float) Default burn portion for slewing. \item[spkpath] (string) Full path to SPK kernel file. 
\item[forceStaticEphem] (boolean) Force use of the static solar system ephemeris if set to True, even if the jplephem module is present.
\end{itemize}

\section{Module Specifications}\label{sec:modules}
The lower level modules include Planet Population, Star Catalog, Optical System, Zodiacal Light, Background Sources, Planet Physical Model, Observatory, Time Keeping, and Post-Processing. These modules encode and/or generate all of the information necessary to perform mission simulations. The specific mission design determines the functionality of each module, while inputs and outputs of these modules remain the same (in terms of data type and variable representations). The upstream modules include Completeness, Target List, Simulated Universe, Survey Simulation and Survey Ensemble. These modules implement methods that require inputs from one or more downstream modules, as well as calls to method implementations in other upstream modules. This section defines the functionality, major tasks, input, output, and interface of each of these modules.

Every module constructor must always accept a keyword dictionary (\verb+**specs+) representing the contents of the specification JSON file organized into a Python dictionary. The descriptions below list the specific keywords that are extracted by the prototype constructors of each of the modules, but implemented constructors may include additional keywords (so long as they correctly call the prototype constructor). In all cases, if a given \verb+key:value+ pair is missing from the dictionary, the appropriate object attributes will be assigned the default values listed.

% PLANET POPULATION
\subsection{Planet Population}
The Planet Population module encodes the density functions of all required planetary parameters, both physical and orbital. These include semi-major axis, eccentricity, orbital orientation, radius, mass, and geometric albedo (see \S\ref{sec:pdfs}). Certain parameter models may be empirically derived while others may come from analyses of observational surveys. This module also encodes the limits on all parameters to be used for sampling the distributions and determining derived cutoff values such as the maximum target distance for a given instrument's IWA.

\begin{figure}[ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.6\textwidth]{orbit_diagram}
\end{tabular}
\end{center}
\caption{\label{fig:orbit_diagram} Definition of reference frames and coordinates of simulated exosystems. The observer lies along the negative $\mf s_3$ axis so that the observer-star unit vector is $+\mf s_3$.}
\end{figure}

The coordinate system of the simulated exosystems is defined as in \reffig{fig:orbit_diagram}. The observer looks at the target star along the $\mf s_3$ axis, from a position $-d\,\mf s_3$ relative to the target at the time of observation. The argument of periapse, inclination, and longitude of the ascending node ($\omega, I, \Omega$) are defined as a 3-1-3 rotation about the unit vectors defining the $\mathcal{S}$ reference frame. This rotation defines the standard Equinoctial reference frame ($\mfhat{e}, \mfhat{q}, \mfhat{h}$), with the true anomaly ($\nu$) measured from $\mfhat{e}$. The planet-star orbital radius vector $\mf r_{P/S}$ is projected into the $\mf s_1, \mf s_2$ plane as the projected separation vector $\mf s$, with magnitude $s$, and the phase (star-planet-observer) angle ($\beta$) is closely approximated by the angle between $\mf r_{P/S}$ and the $+\mf s_3$ axis.
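As an illustrative calculation (not part of any module interface), the projected separation and phase angle defined above can be computed directly from the components of $\mf r_{P/S}$ expressed in the $\mathcal{S}$ frame. The variable names below are arbitrary.
\begin{verbatim}
import numpy as np

# r: components of r_P/S in the S frame (any consistent length unit).
# Illustrative values only.
r = np.array([0.8, 0.3, 0.5])

s = np.linalg.norm(r[:2])                    # projected separation s
beta = np.arccos(r[2] / np.linalg.norm(r))   # phase angle (rad),
                                             # observer along -s_3
\end{verbatim}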
The Planet Population module does not model the physics of planetary orbits or the amount of light reflected or emitted by a given planet, but rather encodes the statistics of planetary occurrence and properties. \label{sec:planetpopulation}

\subsubsection{Planet Population Object Attribute Initialization}
\subsubsection*{Inputs}
The following are all entries in the passed specs dictionary (derived from the JSON script file or another dictionary). Values not specified will be replaced with defaults, as listed. It is important to note that many of these (in particular mass and radius) may be mutually dependent, and so some implementations may choose to use only a subset of them as inputs and set the rest via the physical models.
\begin{itemize}
\item
\begin{description}
\item[arange] \hfill \\ 1$\times$2 list of semi-major axis range in $ AU $. Default value is [0.01, 100].
\item[erange] \hfill \\ 1$\times$2 list of eccentricity range. Default value is [0.01,0.99].
\item[Irange] \hfill \\ 1$\times$2 list of inclination range in $ deg $. Default value is [0,180].
\item[Orange] \hfill \\ 1$\times$2 list of right ascension of the ascending node range in $ deg $. Default value is [0,360].
\item[wrange] \hfill \\ 1$\times$2 list of argument of perigee range in $ deg $. Default value is [0,360].
\item[prange] \hfill \\ 1$\times$2 list of planetary geometric albedo range. Default value is [0.1,0.6].
\item[Rprange] \hfill \\ 1$\times$2 list of planetary radius range in Earth radii. Default value is [1, 30].
\item[Mprange] \hfill \\ 1$\times$2 list of planetary mass range in Earth masses. Default value is [1, 4131].
\item[scaleOrbits] \hfill \\ Boolean where True means planetary orbits are scaled by the square root of stellar luminosity. Default value is False.
\item[constrainOrbits] \hfill \\ Boolean where True means planetary orbits are constrained to never leave the semi-major axis range (arange). Default value is False.
\item[eta] \hfill \\ The average occurrence rate of planets per star for the entire population. The expected number of planets generated per simulation is equal to the product of eta with the total number of targets. Note that this is the expectation value \emph{only}---the actual number of planets generated in a given simulation may vary depending on the specific method of sampling the population.
\end{description}
\end{itemize}

\subsubsection*{Attributes}
\begin{itemize}
\item
\begin{description}
\item[arange (astropy Quantity 1$\times$2 array)] \hfill \\ Semi-major axis range defined as [a\_min, a\_max] in units of $ AU $
\item[erange (1$\times$2 ndarray)] \hfill \\ Eccentricity range defined as [e\_min, e\_max]
\item[Irange (astropy Quantity 1$\times$2 array)] \hfill \\ Planetary orbital inclination range defined as [I\_min, I\_max] in units of $ deg $
\item[Orange (astropy Quantity 1$\times$2 array)] \hfill \\ Right ascension of the ascending node range defined as [O\_min, O\_max] in units of $ deg $
\item[wrange (astropy Quantity 1$\times$2 array)] \hfill \\ Argument of perigee range defined as [w\_min, w\_max] in units of $ deg $
\item[prange (1$\times$2 ndarray)] \hfill \\ Planetary geometric albedo range defined as [p\_min, p\_max]
\item[Rprange (astropy Quantity 1$\times$2 array)] \hfill \\ Planetary radius range defined as [R\_min, R\_max] in units of $ km $
\item[Mprange (astropy Quantity 1$\times$2 array)] \hfill \\ Planetary mass range defined as [Mp\_min, Mp\_max] in units of $ kg $
\item[rrange (astropy Quantity 1$\times$2 array)] \hfill \\ Planetary orbital radius range defined as [r\_min, r\_max] derived from PlanetPopulation.arange and PlanetPopulation.erange, in units of $ AU $
\item[scaleOrbits (boolean)] \hfill \\ Boolean where True means planetary orbits are scaled by the square root of stellar luminosity.
\item[constrainOrbits (boolean)] \hfill \\ Boolean where True means planetary orbits are constrained to never leave the semi-major axis range (arange). If set to True, an additional method (\verb+gen_eccen_from_sma+) must be provided by the implementation---see below.
\item[eta (float)] \hfill \\ The average occurrence rate of planets per star for the entire population.
\item[PlanetPhysicalModel (object)] \hfill \\ PlanetPhysicalModel class object
\end{description}
\end{itemize}

\subsubsection{Planet Population Value Generators} \label{sec:pdfs}
For each of the parameters represented by the input attributes, the planet population object will provide a method that returns random values for the attribute, within the ranges specified by the attribute (so that, for example, there will be a \verb+gen_sma+ method corresponding to \verb+arange+, etc.). Each of these methods will take a single input of the number of values to generate. These methods will encode the probability density functions representing each parameter, and use either a rejection sampler or another (numpy or scipy) provided sampling method to generate random values. All returned values will have the same type/default units as the attributes. In cases where values need to be sampled jointly (for example, from a joint distribution of semi-major axis and planetary radius), the sampling will be done by a helper function that stores the last sampled values in memory, and the individual methods (i.e., \verb+gen_sma+ and \verb+gen_radius+) will act as getters for those values. In cases where there is a deterministic calculation of one parameter from another (as in mass calculated from radius), this will be provided separately in the Planet Physical Model module. Any non-standard distribution functions being sampled by one of these methods should be created as object attributes in the implementation constructor so that they are available to other modules.
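The following sketch illustrates the expected form of one such generator method. The log-uniform density used here is purely illustrative; the actual density functions are set by each Planet Population implementation.
\begin{verbatim}
import numpy as np
import astropy.units as u

# Illustrative sketch of a gen_sma method (would live in a Planet
# Population implementation class). The log-uniform density is an
# assumption made only for this example.
def gen_sma(self, n):
    amin, amax = self.arange.to(u.AU).value
    a = np.exp(np.random.uniform(np.log(amin), np.log(amax), int(n)))
    return a * u.AU
\end{verbatim}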
The methods are:
\begin{itemize}
\item
\begin{description}
\item[gen\_sma] \hfill \\ Returns semi-major axis values (astropy Quantity initially set in $ AU $)
\item[gen\_eccen] \hfill \\ Returns eccentricity values (numpy ndarray)
\item[gen\_w] \hfill \\ Returns argument of perigee values (astropy Quantity initially set in $ deg $)
\item[gen\_O] \hfill \\ Returns longitude of the ascending node values (astropy Quantity initially set in $ deg $)
\item[gen\_radius] \hfill \\ Returns planetary radius values (astropy Quantity initially set in $ m $)
\item[gen\_mass] \hfill \\ Returns planetary mass values (astropy Quantity initially set in $ kg $)
\item[gen\_albedo] \hfill \\ Returns planetary geometric albedo (numpy ndarray)
\item[gen\_I] \hfill \\ Returns values of orbital inclination (astropy Quantity initially set in $ deg $)
\item[gen\_eccen\_from\_sma] \hfill \\ Required only for populations that can take a \verb+constrainOrbits=True+ input. Takes an additional argument: an array of semi-major axis values (astropy Quantity). Returns eccentricity values (numpy ndarray) such that $a(1-e) \ge a_\textrm{min}$ and $a(1+e) \le a_\textrm{max}$.
\end{description}
\end{itemize}

% PLANET PHYSICAL MODEL
\subsection{Planet Physical Model} \label{sec:planetphysicalmodel}
The Planet Physical Model module contains models of the light emitted or reflected by planets in the wavelength bands under investigation by the current mission simulation. It takes as inputs the physical quantities sampled from the distributions in the Planet Population module and generates synthetic spectra (or band photometry, as appropriate). The specific implementation of this module can vary greatly, and can be based on any of the many available planetary geometric albedo, spectral, and phase curve models. As required, this module also provides physical models relating dependent parameters that cannot be sampled independently (for example, density models relating planet mass and radius). While the specific methods will depend highly on the physical models being used, the prototype provides four stubs that will be commonly useful:
\begin{itemize}
\item
\begin{description}
\item[calc\_albedo\_from\_sma] \hfill \\ Provides a method to calculate planetary geometric albedo as a function of the semi-major axis
\item[calc\_mass\_from\_radius] \hfill \\ Provides a method to calculate planetary masses from their radii
\item[calc\_radius\_from\_mass] \hfill \\ Provides a method to calculate planetary radii from their masses
\item[calc\_Phi] \hfill \\ Provides a method to calculate the value of the planet phase function given the phase angle. The prototype implementation uses the Lambert phase function.
\end{description}
\end{itemize}

% STAR CATALOG
\subsection{Star Catalog} \label{sec:starcatalog}
The Star Catalog module includes detailed information about potential target stars drawn from general databases such as SIMBAD, mission catalogs such as Hipparcos, or from existing curated lists specifically designed for exoplanet imaging missions. Information to be stored or accessed by this module will include target positions and proper motions at the reference epoch, catalog identifiers (for later cross-referencing), bolometric luminosities, stellar masses, and magnitudes in standard observing bands. Where direct measurements of any value are not available, values are synthesized from ancillary data and empirical relationships, such as color relationships and mass-luminosity relations.
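For example, distance, absolute magnitude, and luminosity might be synthesized from parallax, apparent magnitude, and bolometric correction using standard photometric relations, as in the following sketch (illustrative only; the relations actually used depend on the specific catalog implementation):
\begin{verbatim}
import numpy as np

# Illustrative values for a single catalog entry.
parx = 100.0    # parallax (mas)
Vmag = 5.0      # apparent V magnitude
BC   = -0.1     # bolometric correction

dist = 1000.0 / parx                       # distance (pc)
MV   = Vmag - 5.0*np.log10(dist/10.0)      # absolute V magnitude
Mbol = MV + BC                             # bolometric magnitude
L    = 10.0**(0.4*(4.74 - Mbol))           # luminosity (L_sun),
                                           # assuming M_bol,sun = 4.74
\end{verbatim}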
This module does not provide any functionality for picking the specific targets to be observed in any one simulation, nor even for culling targets from the input lists where no observations of a planet could take place. This is done in the Target List module as it requires interactions with the Planet Population (to determine the population of interest), Optical System (to define the capabilities of the instrument), and Observatory (to determine if the view of the target is unobstructed) modules. \subsubsection{Star Catalog Object Attribute Initialization} The Star Catalog prototype creates empty 1D NumPy ndarrays for each of the output quantities listed below. Specific Star Catalog modules must populate the values as appropriate. Note that values that are left unpopulated by the implementation will still get all zero array, which may lead to unexpected behavior. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[star catalog information] \hfill \\ Information from an external star catalog (left deliberately vague as these can be anything). \end{description} \end{itemize} \subsubsection*{Attributes} \begin{itemize} \item \begin{description} \item[Name (string ndarray)] \hfill \\ Star names \item[Spec (string ndarray)] \hfill \\ Spectral types \item[Umag (float ndarray)] \hfill \\ U magnitude \item[Bmag (float ndarray)] \hfill \\ B magnitude \item[Vmag (float ndarray)] \hfill \\ V magnitude \item[Rmag (float ndarray)] \hfill \\ R magnitude \item[Imag (float ndarray)] \hfill \\ I magnitude \item[Jmag (float ndarray)] \hfill \\ J magnitude \item[Hmag (float ndarray)] \hfill \\ H magnitude \item[Kmag (float ndarray)] \hfill \\ K magnitude \item[BV (float ndarray)] \hfill \\ B-V Johnson magnitude \item[MV (float ndarray)] \hfill \\ Absolute V magnitude \item[BC (float ndarray)] \hfill \\ Bolometric correction \item[L (float ndarray)] \hfill \\ Stellar luminosity in Solar luminosities \item[Binary\_Cut (boolean ndarray)] \hfill \\ Booleans where True is a star with a companion closer than $ 10 arcsec $ \item[dist (astropy Quantity array)] \hfill \\ Distance to star in units of $ pc $. Defaults to 1. \item[parx (astropy Quantity array)] \hfill \\ Parallax in units of $ mas $. Defaults to 1000. \item[coords (astropy SkyCoord array)] \hfill \\ \href{http://astropy.readthedocs.org/en/latest/api/astropy.coordinates.SkyCoord.html}{SkyCoord object} containing right ascension, declination, and distance to star in units of $ deg $, $ deg $, and $ pc $. \item[pmra (astropy Quantity array)] \hfill \\ Proper motion in right ascension in units of $ mas/year $ \item[pmdec (astropy Quantity array)] \hfill \\ Proper motion in declination in units of $ mas/year $ \item[rv (astropy Quantity array)] \hfill \\ Radial velocity in units of $ km/s $ \end{description} \end{itemize} \subsection{Optical System} The Optical System module contains all of the necessary information to describe the effects of the telescope and starlight suppression system on the target star and planet wavefronts. This requires encoding the design of both the telescope optics and the specific starlight suppression system, whether it be an internal coronagraph or an external occulter. The encoding can be achieved by specifying Point Spread Functions (PSF) for on- and off-axis sources, along with angular separation and wavelength dependent contrast and throughput definitions. 
At the opposite level of complexity, the encoded portions of this module may be a description of all of the optical elements between the telescope aperture and the imaging detector, along with a method of propagating an input wavefront to the final image plane. Intermediate implementations can include partial propagations, or collections of static PSFs representing the contributions of various system elements. The encoding of the optical train will allow for the extraction of specific bulk parameters including the instrument inner working angle (IWA), outer working angle (OWA), and mean and max contrast and throughput. Finally, the Optical System must also include a description of the science instrument. The baseline instrument is assumed to be an imaging spectrometer. The encoding must provide the spatial and wavelength coverage of the instrument as well as sampling for each, along with detector details such as read noise, dark current, and readout cycle. The Optical System module has four methods used in simulation. \verb+calc_maxintTime+ is called from the Target List module to calculate the maximum integration time for each star in the target list (see \S\ref{sec:calcmaxintTimetask}). \verb+calc_intTime+ and \verb+calc_charTime+ are called from the Survey Simulation module to calculate integration and characterization times for a target system (see \S\ref{sec:calcintTimetask} and \S\ref{sec:calccharTimetask}). \verb+Cp_Cb+ is called by \verb+calc_intTime+ and \verb+calc_charTime+ to calculate the electron count rates for planet signal and background noise (see \S\ref{sec:CpCbtask}). The inputs and outputs for the Optical System methods are depicted in \reffig{fig:opticalsysmodule}. \begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=\textwidth]{OpticalSysTasks2} \end{tabular} \end{center} \caption{\label{fig:opticalsysmodule} Depiction of Optical System module methods including inputs and outputs (see \S\ref{sec:calcmaxintTimetask}, S\ref{sec:calcintTimetask}, S\ref{sec:calccharTimetask} and \S\ref{sec:CpCbtask}).} \end{figure} \label{sec:opticalsystem} \subsubsection{Optical System Object Attribute Initialization} The specific set of inputs to this module will vary based on the simulation approach used. Here we define the specification for the case where static PSF(s), derived from external diffraction modeling, are used to describe the system. Note that some of the inputs are coronagraph or occulter specific, and will be expected based on the "internal" or "external" starlight suppression system keyword, respectively. \subsubsection*{Inputs} Information from simulation specification JSON file organized into a Python dictionary. For multiple systems, there will be an array of dictionaries. If the below \verb+key:value+ pairs are missing from the input specification, the Optical System object attributes will be assigned the default values listed. The following are all entries in the passed specs dictionary. \begin{itemize} \item \begin{description} \item[obscurFac] \hfill \\ Obscuration factor due to secondary mirror and spiders. Default value is 0.1. \item[shapeFac] \hfill \\ Shape factor of the unobscured pupil area, so that $ shapeFac \times pupilDiam^2 \times (1-obscurFac) = pupilArea $. Default value is $ \frac{\pi}{4} $. \item[pupilDiam] \hfill \\ Entrance pupil diameter in $ m $. Default value is 4. \item[telescopeKeepout] \hfill \\ Telescope keepout angle in $ deg $. Default value is 45. 
\item[attenuation] \hfill \\ Non-coronagraph attenuation, equal to the throughput of the optical system without the coronagraph elements. Default value is 0.57. \item[intCutoff] \hfill \\ Maximum allowed integration time in $ day $. No integrations will be started that would take longer than this value. Default value is 50. \item[Npix] \hfill \\ Number of noise pixels. Default value is 14.3. \item[Ndark] \hfill \\ Number of dark frames used. Default value is 10. \item[IWA] \hfill \\ Fundamental Inner Working Angle in $ arcsec $. No planets can ever be observed at smaller separations. If not set, defaults to smallest IWA of all starlightSuppressionSystems. \item[OWA] \hfill \\ Fundamental Outer Working Angle in $ arcsec $. Set to $ Inf $ for no OWA. If not set, defaults to largest OWA of all starlightSuppressionSystems. JSON values of 0 will be interpreted as $ Inf $. \item[dMagLim] \hfill \\ Fundamental limiting $ \Delta$mag (difference in magnitude between star and planet). Default value is 20. \item[scienceInstruments] \hfill\\ List of dictionaries containing specific attributes of all science instruments. For each instrument, if the below attributes are missing from the dictionary, they will be assigned the default values listed, or any value directly passed as input to the class constructor. In case of multiple instruments, specified wavelength values (lam, deltaLam, BW) of the first instrument become the new default values. \begin{description} \item[type] \hfill\\ (Required) String indicating type of system. Standard values are `imaging' and `spectro'. \item[lam] \hfill \\ Central wavelength $\lambda$ in $ nm $. Default value is 500. \item[deltaLam] \hfill \\ Bandwidth $ \Delta\lambda $ in $ nm $. Defaults to lambda $ \times $ BW (defined hereunder). \item[BW] \hfill \\ Bandwidth fraction $(\Delta\lambda/\lambda)$. Only applies when deltaLam is not specified. Default value is 0.2. \item[pitch] \hfill \\ Pixel pitch in $ m $. Default value is 13e-6. \item[focal] \hfill \\ Focal length in $ m $. Default value is 140. \item[idark] \hfill \\ Detector dark-current rate in $ electrons /s /pix $. Default value is 9e-5. \item[texp] \hfill \\ Exposure time in $ s/frame $. Default value is 1e3. \item[sread] \hfill \\ Detector read noise in $ electrons/frame $. Default value is 3. \item[CIC] \hfill \\ (Specific to CCDs) Clock-induced-charge in $ electrons/pix/frame $. Default value is 0.0013. \item[ENF] \hfill \\ (Specific to EM-CCDs) Excess noise factor. Default value is 1. \item[Gem] \hfill \\ (Specific to EM-CCDs) Electron multiplication gain. Default value is 1. \item[Rs] \hfill \\ (Specific to spectrometers) Spectral resolving power defined as $\lambda/d\lambda$. Default value is 70. \item[QE] \hfill \\ Detector quantum efficiency: either a scalar for constant QE, or a two-column array for wavelength-dependent QE, where the first column contains the wavelengths in $ nm $. The ranges on all parameters must be consistent with the values for lam and deltaLam inputs. May be data or FITS filename. Default is scalar 0.9. \end{description} \item[starlightSuppressionSystems] \hfill\\ List of dictionaries containing specific attributes of all starlight suppression systems. For each system, if the below attributes are missing from the dictionary, they will be assigned the default values listed, or any value directly passed as input to the class constructor. \begin{description} \item[type] \hfill\\ (Required) String indicating the system type (e.g. 
internal, external, hybrid), should also contain the type of science instrument it can be used with (e.g. imaging, spectro). \item[throughput] \hfill \\ System throughput: either a scalar for constant throughput, a two-column array for angular separation-dependent throughput, where the first column contains the separations in $ arcsec $, or a 2D array for angular separation- and wavelength- dependent throughput, where the first column contains the angular separation values in $ arcsec $ and the first row contains the wavelengths in $ nm $. The ranges on all parameters must be consistent with the values for the IWA, OWA, lam and deltaLam inputs. May be data or FITS filename. Default is scalar 1e-2. \item[contrast] \hfill \\ System contrast: either a scalar for constant contrast, a two-column array for angular separation-dependent contrast, where the first column contains the separations in $ arcsec $, or a 2D array for angular separation- and wavelength- dependent contrast, where the first column contains the angular separation values in as and the first row contains the wavelengths in $ nm $. The ranges on all parameters must be consistent with the values for the IWA, OWA, lam and deltaLam inputs. May be data or FITS filename. Default is scalar 1e-9. \item[IWA] \hfill \\ Inner Working Angle of this system in $ arcsec $. If not set, or if too small for this system contrast/throughput definitions, defaults to smallest WA of contrast/throughput definitions. \item[OWA] \hfill \\ Specific Outer Working Angle of this system in $ arcsec $. Set to $ Inf $ for no OWA. If not set, or if too large for this system contrast/throughput definitions, defaults to largest WA of contrast/throughput definitions. JSON values of $ 0 $ will be interpreted as $ Inf $. \item[PSF] \hfill \\ Instrument point spread function. Either a 2D array of a single-PSF, or a 3D array of wavelength-dependent PSFs. May be data or FITS filename. Default is numpy.ones((3,3)). \item[samp] \hfill \\ Sampling of the PSF in $ arcsec $ per pixel. Default value is 10. \item[ohTime] \hfill \\ Optical system overhead time in $ day $. Default value is $ 1 $ day. This is the (assumed constant) amount of time required to set up the optical system (i.e., dig the dark hole or do fine alignment with the occulter). It is added to every observation, and is separate from the observatory overhead defined in the observatory module, which represents the observatory's settling time. Both overheads are added to the integration time to determine the full duration of each detection observation. \item[imagTimeMult]\hfill \\ Duty cycle of a detection observation. If only a single integration is required for the initial detection observation, then this value is 1. Otherwise, it is equal to the number of discrete integrations needed to cover the full field of view (i.e., if a shaped pupil with a dark hole that covers 1/3 of the field of view is used for detection, this value would equal 3). Defaults to 1. \item[charTimeMult]\hfill \\ Characterization duty cycle. If only a single integration is required for the initial detection observation, then this value is 1. Otherwise, it is equal to the number of discrete integrations needed to cover the full wavelength band and all required polarization states. For example, if the band is split into three sub-bands, and there are two polarization states that must be measured, and each of these must be done sequentially, then this value would equal 6. 
However, if the three sub-bands could be observed at the same time (e.g., by separate detectors) then the value would by two (for the two polarization states). Defaults to 1. \item[occulterDiameter]\hfill \\ Occulter diameter in $ m $. Measured petal tip-to-tip. \item [NocculterDistances]\hfill \\ Number of telescope separations the occulter operates over (number of occulter bands). If greater than 1, then the occulter description is an array of dicts. \item[occulterDistance]\hfill \\ Telescope-occulter separation in $km$. \item[occulterBlueEdge]\hfill \\ Occulter blue end of wavelength band in $nm$. \item[occulterRedEdge]\hfill \\ Occulter red end of wavelength band in $nm$. \end{description} \end{description} \end{itemize} For all values that may be either scalars or interpolants, in the case where scalar values are given, the optical system module will automatically wrap them in lambda functions so that they become callable (just like the interpolant) but will always return the same value for all arguments. The inputs for interpolants may be filenames (full absolute paths) with tabulated data, or NumPy ndarrays of argument and data (in that order in rows so that input[0] is the argument and input[1] is the data). When the input is derived from a JSON file, these must either be scalars or filenames. The starlight suppression system and science instrument dictionaries can contain any other attributes required by a particular optical system implementation. The only significance of the ones enumerated above is that they are explicitly checked for by the prototype constructor, and cast to their expected values. %In cases where there is only one starlight suppression system and/or one science instrument, all values from those dictionaries are copied directly to the OpticalSystem object and can be accessed as direct attributes (i.e., \verb+OpticalSystem.type+, etc.). \subsubsection*{Attributes} These will always be present in an OpticalSystem object and directly accessible as \verb+OpticalSystem.Attribute+. \begin{itemize} \item \begin{description} \item[obscurFac (float)] \hfill \\ Obscuration factor due to secondary mirror and spiders \item[shapeFac (float)] \hfill \\ Shape factor of the unobscured pupil area, so that $ shapeFac \times pupilDiam^2 \times (1-obscurFac) = pupilArea $ \item[pupilDiam (astropy Quantity)] \hfill \\ Entrance pupil diameter in units of $ m $ \item[pupilArea (astropy Quantity)] \hfill \\ Entrance pupil area in units of $ m^{2} $ \item[telescopeKeepout (astropy Quantity)] \hfill \\ Telescope keepout angle in units of $ deg $ \item[attenuation (float)] \hfill \\ Non-coronagraph attenuation, equal to the throughput of the optical system without the coronagraph elements \item[intCutoff (astropy Quantity)] \hfill \\ Maximum allowed integration time in units of $ day $ \item[Npix (float)] \hfill \\ Number of noise pixels \item[Ndark (float)] \hfill \\ Number of dark frames used \item[haveOcculter (boolean)] \hfill \\ Boolean signifying if the system has an occulter \item[IWA (astropy Quantity)] \hfill \\ Fundamental Inner Working Angle in units of $ arcsec $ \item[OWA (astropy Quantity)] \hfill \\ Fundamental Outer Working Angle in units of $ arcsec $ \item[dMagLim (float)] \hfill \\ Fundamental Limiting $ \Delta$mag (difference in magnitude between star and planet) \item[scienceInstruments (list of dicts)] \hfill \\ List of dictionaries containing all supplied science instrument attributes. 
Typically the first instrument will be the one used for imaging, and the last one for spectroscopy. Only required attribute is `type'. See above for other commonly used attributes. \item[Imager (dict)] \hfill \\ Dictionary containing imaging camera attributes. Default to \verb+scienceInstruments[0]+. \item[Spectro (dict)] \hfill \\ Dictionary containing spectrograph attributes. Default to \verb+scienceInstruments[-1]+. \item[starlightSuppressionSystems (list of dicts)] \hfill \\ List of dictionaries containing all supplied starlight suppression system attributes. Typically the first system will be the one used for imaging, and the second one for spectroscopy. Only required attribute is `type'. See above for other commonly used attributes. \item[ImagerSyst (dict)] \hfill \\ Dictionary containing imaging coronagraph attributes. Default to \verb+starlightSuppressionSystems[0]+. \item[SpectroSyst (dict)] \hfill \\ Dictionary containing spectroscopy coronagraph attributes. Default to \verb+starlightSuppressionSystems[-1]+. \end{description} \end{itemize} %In cases where either of the two attribute dictionary lists (starlight suppression systems and science instruments) only contain one dictionary (i.e., there's only one coronagraph and/or detector), then all attributes will be linked as direct attributes of the object as well as stored in the dictionary. \subsubsection{calc\_maxintTime Method} \label{sec:calcmaxintTimetask} The \verb+calc_maxintTime+ method calculates the maximum integration time for each star in the target list. This method is called from the Target List module. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[TL (object)] \hfill \\ TargetList class object, see \S\ref{sec:targetlist} for definition of available attributes \end{description} \end{itemize} \subsubsection*{Output} \begin{itemize} \item \begin{description} \item[maxintTime (astropy Quantity array)] \hfill \\ Maximum integration time for each target star in units of $ day $ \end{description} \end{itemize} \subsubsection{calc\_intTime Method} \label{sec:calcintTimetask} The \verb+calc_intTime+ method calculates the integration time required for specific planets of interest. This method is called from the SurveySimulation module. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[TL (object)] \hfill \\ TargetList class object, see \S\ref{sec:targetlist} for definition of available attributes \item[sInds (integer ndarray)] \hfill \\ Integer indices of the stars of interest, with the length of the number of planets of interest. For instance, if a star hosts $ n $ planets, the index of this star must be repeated $ n $ times. \item[dMag (float ndarray)] \hfill \\ Differences in magnitude between planets and their host star. \item[WA (astropy Quantity array)] \hfill \\ Working angles of the planets of interest in units of arcsec \item[fEZ (astropy Quantity array)] \hfill \\ Surface brightness of exo-zodiacal light in units of $ 1/arcsec^2 $ \item[fZ (astropy Quantity array)] \hfill \\ Surface brightness of local zodiacal light in units of $ 1/arcsec^2 $ \end{description} \end{itemize} \subsubsection*{Output} \begin{itemize} \item \begin{description} \item[intTime (astropy Quantity array)] \hfill \\ Integration time for each of the planets of interest in units of $ day $ \end{description} \end{itemize} \subsubsection{calc\_charTime Method} \label{sec:calccharTimetask} The \verb+calc_charTime+ method calculates the characterization time required for a specific target system. 
This method is called from the Survey Simulation module. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[TL (object)] \hfill \\ TargetList class object, see \S\ref{sec:targetlist} for definition of available attributes \item[sInds (integer ndarray)] \hfill \\ Integer indices of the stars of interest, with the length of the number of planets of interest. For instance, if a star hosts $ n $ planets, the index of this star must be repeated $ n $ times. \item[dMag (float ndarray)] \hfill \\ Differences in magnitude between planets and their host star. \item[WA (astropy Quantity array)] \hfill \\ Working angles of the planets of interest in units of arcsec \item[fEZ (astropy Quantity array)] \hfill \\ Surface brightness of exo-zodiacal light in units of $ 1/arcsec^2 $ \item[fZ (astropy Quantity array)] \hfill \\ Surface brightness of local zodiacal light in units of $ 1/arcsec^2 $ \end{description} \end{itemize} \subsubsection*{Output} \begin{itemize} \item \begin{description} \item[charTime (astropy Quantity array)] \hfill \\ Characterization time for each of the planets of interest in units of $ day $ \end{description} \end{itemize} \subsubsection{Cp\_Cb Method} \label{sec:CpCbtask} The \verb+Cp_Cb+ method calculates the electron count rates for planet signal and background noise. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[TL (object)] \hfill \\ TargetList class object, see \S\ref{sec:targetlist} for definition of available attributes \item[sInds (integer ndarray)] \hfill \\ Integer indices of the stars of interest, with the length of the number of planets of interest. For instance, if a star hosts $ n $ planets, the index of this star must be repeated $ n $ times. \item[dMag (float ndarray)] \hfill \\ Differences in magnitude between planets and their host star. \item[WA (astropy Quantity array)] \hfill \\ Working angles of the planets of interest in units of arcsec \item[fEZ (astropy Quantity array)] \hfill \\ Surface brightness of exo-zodiacal light in units of $ 1/arcsec^2 $ \item[fZ (astropy Quantity array)] \hfill \\ Surface brightness of local zodiacal light in units of $ 1/arcsec^2 $ \item[inst (dict)] \hfill \\ Selected Science Instrument \item[syst (dict)] \hfill \\ Selected Starlight Suppression System \item[Npix (float)] \hfill \\ Number of noise pixels \end{description} \end{itemize} \subsubsection*{Output} \begin{itemize} \item \begin{description} \item[C\_p (astropy Quantity array)] \hfill \\ Planet signal electron count rate in units of $ 1/s $ \item[C\_b (astropy Quantity array)] \hfill \\ Background noise electron count rate in units of $ 1/s $ \end{description} \end{itemize} % ZODIACAL LIGHT \subsection{Zodiacal Light}\label{sec:zodiacallight} The Zodiacal Light module contains the \verb+fZ+ and \verb+fEZ+ methods. The \verb+fZ+ method calculates the surface brightness of local zodiacal light. The \verb+fEZ+ calculates the surface brightness of exozodiacal light. The inputs and outputs for the Zodiacal Light method are depicted in \reffig{fig:zodiacallightmodule}. 
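As an illustrative convention only (the interface does not mandate a particular normalization), a surface brightness magnitude such as magZ can be converted to the dimensionless surface brightness used downstream as follows:
\begin{verbatim}
import astropy.units as u

# Illustrative only: convert a surface brightness of magZ mag/arcsec^2
# into a flux ratio (relative to a zeroth-magnitude source) per arcsec^2.
magZ = 23.0
fZ = 10.0**(-0.4*magZ) / u.arcsec**2
\end{verbatim}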
\begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=0.8\textwidth]{ZodiTasks} \end{tabular} \end{center} \caption{\label{fig:zodiacallightmodule} Depiction of Zodiacal Light module method including inputs and outputs (see \S\ref{sec:fZtask} and \S\ref{sec:fEZtask}).} \end{figure} \subsubsection{Zodiacal Light Object Attribute Initialization} \subsubsection*{Input} \begin{itemize} \item \begin{description} \item \item[magZ] \hfill \\ Zodi brightness magnitude (per $ arcsec^2 $). Defaults to 23. \item[magEZ] \hfill \\ Exo-zodi brightness magnitude (per $ arcsec^2 $). Defaults to 22. \item[varEZ] \hfill \\ Exo-zodiacal light variation (variance of log-normal distribution). Constant if set to 0. Defaults to 0. \item[nEZ] \hfill \\ Exo-zodiacal light level in zodi. Defaults to 1.5. \end{description} \end{itemize} \subsubsection*{Attributes} \begin{itemize} \item \begin{description} \item[magZ (float)] \hfill \\ Zodi brightness magnitude (per $ arcsec^2 $) \item[magEZ (float)] \hfill \\ Exo-zodi brightness magnitude (per $ arcsec^2 $) \item[varEZ (float)] \hfill \\ Exo-zodiacal light variation (variance of log-normal distribution) \item[nEZ (float)] \hfill \\ Exo-zodiacal light level in zodi \end{description} \end{itemize} \subsubsection{fZ Method} \label{sec:fZtask} The \verb+fZ+ method returns surface brightness of local zodiacal light for planetary systems. This functionality is used by the Simulated Universe module. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[TL (object)] \hfill \\ TargetList class object, see \S\ref{sec:targetlist} for description of functionality and attributes \item[sInds (integer ndarray)] \hfill \\ Integer indices of the stars of interest, with the length of the number of planets of interest \item[lam (astropy Quantity)] \hfill \\ Central wavelength in units of $ nm $ \item[r\_sc (astropy Quantity 1$\times$3 array)] \hfill \\ Observatory position vector in units of $ km $ \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[fZ (astropy Quantity array)] \hfill \\ Surface brightness of zodiacal light in units of $ 1/arcsec^2 $ \end{description} \end{itemize} \subsubsection{fEZ Method} \label{sec:fEZtask} The \verb+fEZ+ method returns surface brightness of exo-zodiacal light for planetary systems. This functionality is used by the Simulated Universe module. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[TL (object)] \hfill \\ TargetList class object, see \S\ref{sec:targetlist} for description of functionality and attributes \item[sInds (integer ndarray)] \hfill \\ Integer indices of the stars of interest, with the length of the number of planets of interest \item[I (astropy Quantity array)] \hfill \\ Inclination of the planets of interest in units of $ deg $ \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[fEZ (astropy Quantity array)] \hfill \\ Surface brightness of exo-zodiacal light in units of $ 1/arcsec^2 $ \end{description} \end{itemize} \subsection{Background Sources}\label{sec:backgroundsources} The Background Sources module will provide density of background sources for a given target based on its coordinates and the integration depth. This will be used in the post-processing module to determine false alarms based on confusion. The prototype module has no inputs and only a single function: \verb+dNbackground+. 
\subsubsection{dNbackground} \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[coords (astropy SkyCoord array)] \hfill \\ \href{http://astropy.readthedocs.org/en/latest/api/astropy.coordinates.SkyCoord.html}{SkyCoord object} containing right ascension, declination, and distance to star of the planets of interest in units of $ deg $, $ deg $ and $ pc $. \item[intDepths (float ndarray)] \hfill \\ Integration depths equal to absolute magnitudes (in the detection band) of the dark hole to be produced for each target. Must be of same length as coords. \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[dN (astropy Quantity array)] \hfill \\ Number densities of background sources for given targets in units of $ 1/arcmin^2 $. Same length as inputs. \end{description} \end{itemize} % OBSERVATORY \subsection{Observatory} The Observatory module contains all of the information specific to the space-based observatory not included in the Optical System module. The module has two main methods: \verb+orbit+ and \verb+keepout+, which are implemented as functions within the module. The observatory orbit plays a key role in determining which of the target stars may be observed for planet finding at a specific time during the mission lifetime. The Observatory module's \verb+orbit+ method takes the current mission time as input and outputs the observatory's position vector. The position vector is standardized throughout the modules to be referenced to a heliocentric equatorial frame at the J2000 epoch. The observatory's position vector is used in the \verb+keepout+ method and Target List module to determine which of the stars are observable at the current mission time. The \verb+keepout+ method determines which target stars are observable at a specific time during the mission simulation and which are unobservable due to bright objects within the field of view such as the sun, moon, and solar system planets. The keepout volume is determined by the specific design of the observatory and, in certain cases, by the starlight suppression system. The \verb+keepout+ method takes the current mission time and Star Catalog or Target List module output as inputs and outputs a list of the target stars which are observable at the current time. It constructs position vectors of the target stars and bright objects which may interfere with observations with respect to the observatory. These position vectors are used to determine if bright objects are in the field of view for each of the potential stars under exoplanet finding observation. If there are no bright objects obstructing the view of the target star, it becomes a candidate for observation in the Survey Simulation module. The solar keepout is typically encoded as allowable angle ranges for the spacecraft-star unit vector as measured from the spacecraft-sun vector. In addition to these methods, the observatory definition can also encode finite resources used by the observatory throughout the mission. The most important of these is the fuel used for stationkeeping and repointing, especially in the case of occulters which must move significant distances between observations. Other considerations could include the use of other volatiles such as cryogens for cooled instruments, which tend to deplete solely as a function of mission time. 
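
Returning to the keepout calculation described above, the following Python sketch evaluates a single solar keepout cone: it computes the angle between the spacecraft-star and spacecraft-Sun unit vectors for each target and flags as observable those targets outside the keepout angle. This is a simplified sketch only (one bright body and one minimum angle); a real implementation would also handle the Moon, the solar system planets, and any occulter-specific constraints.
\begin{verbatim}
import numpy as np

def solar_keepout(r_obs, r_stars, koangle_deg=45.0):
    # r_obs:   (3,) observatory position, heliocentric equatorial frame (km)
    # r_stars: (n,3) target star positions in the same frame (km)
    # Returns a boolean array (kogood): True where the Sun-target angle
    # exceeds the keepout angle, i.e. the target is observable.
    u_star = r_stars - r_obs
    u_star = u_star/np.linalg.norm(u_star, axis=1, keepdims=True)
    u_sun = -r_obs/np.linalg.norm(r_obs)    # the Sun sits at the origin
    ang = np.degrees(np.arccos(np.clip(u_star.dot(u_sun), -1.0, 1.0)))
    return ang > koangle_deg

# Two targets, observatory near 1 AU, 45 degree solar keepout
kogood = solar_keepout(np.array([1.5e8, 0.0, 0.0]),
                       np.array([[9.5e14, 0.0, 0.0],
                                 [0.0, 9.5e14, 0.0]]))
\end{verbatim}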
\label{sec:observatory} This module also allows for detailed investigations of the effects of orbital design on the science yield, e.g., comparing the original baseline geosynchronous 28.5\textdegree{} inclined orbit for WFIRST-AFTA with an L2 halo orbit, which is the new mission baseline. The inputs, outputs, and updated attributes of the required Observatory module methods are depicted in \reffig{fig:observatorymodule}.
\begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\textwidth]{observatory3} \end{tabular} \end{center} \caption{\label{fig:observatorymodule} Depiction of Observatory module methods including inputs, outputs, and updated attributes (see \S\ref{sec:orbittask} and \S\ref{sec:keepouttask}).} \end{figure}
\subsubsection{Observatory Object Attribute Initialization}
\subsubsection*{Inputs}
\begin{itemize}
\item
%   \begin{description}
%   \item[User specification] \hfill \\
%   Information from simulation specification JSON file organized into a Python dictionary. If the below \verb+key:value+ pairs are missing from the dictionary, the Observatory object attributes will be assigned the default values listed.
\begin{description}
\item[settlingTime] \hfill \\ Amount of time needed for observatory to settle after a repointing in $ day $. Default value is 1.
\item[thrust] \hfill \\ Occulter slew thrust in $ mN $. Default value is 450.
\item[slewIsp] \hfill \\ Occulter slew specific impulse in $ s $. Default value is 4160.
\item[scMass] \hfill \\ Occulter (maneuvering spacecraft) initial wet mass in $ kg $. Default value is 6000.
\item[dryMass] \hfill \\ Occulter (maneuvering spacecraft) dry mass in $ kg $. Default value is 3400.
\item[coMass] \hfill \\ Telescope (or non-maneuvering spacecraft) mass in $ kg $. Default value is 5800.
\item[skIsp] \hfill \\ Specific impulse for station keeping in $ s $. Default value is 220.
\item[defburnPortion] \hfill \\ Default burn portion for slewing. Default value is 0.05.
\item[spkpath] \hfill \\ String with full path to SPK kernel file (only used if using jplephem for solar system body propagation - see \ref{sec:ssbPosTask}).
\item[forceStaticEphem] \hfill \\ Boolean, forcing use of static solar system ephemeris if set to True, even if jplephem module is present (see \ref{sec:ssbPosTask}). Default value is False.
\end{description}
%   \end{description}
\end{itemize}
\subsubsection*{Attributes}
\begin{itemize}
\item
\begin{description}
\item[settlingTime] \hfill \\ Amount of time needed for observatory to settle after a repointing (astropy Quantity initially set in $ day $)
\item[thrust] \hfill \\ Occulter slew thrust (astropy Quantity initially set in $ mN $)
\item[slewIsp] \hfill \\ Occulter slew specific impulse (astropy Quantity initially set in $ s $)
\item[scMass] \hfill \\ Occulter (maneuvering spacecraft) initial wet mass (astropy Quantity initially set in $ kg $)
\item[dryMass] \hfill \\ Occulter (maneuvering spacecraft) dry mass (astropy Quantity initially set in $ kg $)
\item[coMass] \hfill \\ Telescope (or non-maneuvering spacecraft) mass (astropy Quantity initially set in $ kg $)
\item[kogood] \hfill \\ 1D NumPy ndarray of Boolean values where True is a target unobstructed and observable in the keepout zone. Initialized to an empty array. This attribute is updated to the current mission time through the keepout method (see \ref{sec:keepouttask}).
\item[r\_sc] \hfill \\ Observatory orbit position in HE reference frame.
Initialized to NumPy ndarray as numpy.array([0., 0., 0.]) and associated with astropy Quantity in $ km $. This attribute is updated to the orbital position of the observatory at the current mission time through the orbit method (see \ref{sec:orbittask}).
\item[skIsp] \hfill \\ Specific impulse for station keeping (astropy Quantity initially set in $ s $)
\item[defburnPortion] \hfill \\ Default burn portion for slewing
\item[currentSep] \hfill \\ Current occulter separation (astropy Quantity initially set in $ km $)
\item[flowRate] \hfill \\ Slew flow rate derived from thrust and slewIsp (astropy Quantity initially set in $ kg/day $)
\end{description}
\end{itemize}
\subsubsection{orbit Method} \label{sec:orbittask}
The \verb+orbit+ method finds the heliocentric equatorial position vector of the observatory spacecraft.
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[time] \hfill \\ astropy \href{http://astropy.readthedocs.org/en/latest/time/index.html}{Time object} which may be \verb+TimeKeeping.currenttimeAbs+ from the Time Keeping module (see \ref{sec:currenttime} for definition)
\end{description}
\end{itemize}
\subsubsection*{Outputs}
\begin{itemize}
\item
\begin{description}
\item[success] \hfill \\ Boolean indicating if orbit was successfully calculated
\end{description}
\end{itemize}
\subsubsection*{Updated Object Attributes}
\begin{itemize}
\item
\begin{description}
\item[Observatory.r\_sc] \hfill \\ Observatory orbit position in HE reference frame at current mission time (astropy Quantity defined in $ km $)
\end{description}
\end{itemize}
\subsubsection{keepout Method} \label{sec:keepouttask}
The \verb+keepout+ method determines which stars in the target list are observable at the given input time.
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[time] \hfill \\ astropy Time object which may be \verb+TimeKeeping.currenttimeAbs+ (see \ref{sec:currenttime} for definition)
\item[targlist] \hfill \\ Instantiated Target List object from Target List module. See \ref{sec:targetlist} for definition of available attributes
\item[koangle] \hfill \\ Telescope keepout angle in $ deg $ - \verb+OpticalSystem.telescopeKeepout+
\end{description}
\end{itemize}
\subsubsection*{Outputs}
\begin{itemize}
\item
\begin{description}
\item[success] \hfill \\ Boolean indicating if the keepout values were successfully calculated
\end{description}
\end{itemize}
\subsubsection*{Updated Object Attributes}
\begin{itemize}
\item
\begin{description}
\item[Observatory.kogood] \hfill \\ 1D NumPy ndarray of Boolean values for each target at given time where True is a target unobstructed in the keepout zone and False is a target unobservable due to obstructions in the keepout zone
\end{description}
\end{itemize}
\subsubsection{solarSystem\_body\_position Method}\label{sec:ssbPosTask}
The \verb+solarSystem_body_position+ method returns the position of any solar system body (Earth, Sun, Moon, etc.) at a given time in the common Heliocentric Equatorial frame. The Observatory prototype will attempt to load the jplephem module, and use a local SPK file for all propagations if available. The SPK file is not packaged with the software but may be downloaded from JPL's website at: \url{http://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/a_old_versions/}. The location of the SPK file is assumed to be in the Observatory directory but can be set by the \verb+spkpath+ input.
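
For concreteness, the sketch below shows how the jplephem package can be used to evaluate a heliocentric position from such an SPK file. The kernel file name and the Julian date are placeholder values, and the segment numbers follow the standard NAIF identifiers (0 = solar system barycenter, 3 = Earth-Moon barycenter, 399 = Earth, 10 = Sun); this is only an illustration of the underlying library call, not the prototype's code.
\begin{verbatim}
from jplephem.spk import SPK

kernel = SPK.open('de430.bsp')   # path would normally come from spkpath
jd = 2460634.5                   # Julian date of interest
# Earth relative to the solar system barycenter: SSB->EMB plus EMB->Earth
r_earth_ssb = kernel[0, 3].compute(jd) + kernel[3, 399].compute(jd)
r_sun_ssb = kernel[0, 10].compute(jd)
r_earth_helio = r_earth_ssb - r_sun_ssb   # heliocentric position in km
kernel.close()
\end{verbatim}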
If jplephem is not present, the Observatory prototype will load static ephemeris derived from Vallado (2004) and use those for propagation. This behavior can be forced even when jplephem is available by setting the \verb+forceStaticEphem+ input to True.
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[time] \hfill \\ astropy Time object which may be \verb+TimeKeeping.currenttimeAbs+ (see \ref{sec:currenttime} for definition)
\item[bodyname] \hfill \\ String containing object name, capitalized by convention.
\end{description}
\end{itemize}
\subsubsection*{Outputs}
\begin{itemize}
\item
\begin{description}
\item[r\_body] \hfill \\ (Quantity) heliocentric equatorial position vector (units of km)
\end{description}
\end{itemize}
% TIME KEEPING
\subsection{Time Keeping} \label{sec:time}
The Time Keeping module is responsible for keeping track of the current mission time. It encodes only the mission start time, the mission duration, and the current time within a simulation. All functions in all modules requiring knowledge of the current time call functions or access parameters implemented within the Time module. Internal encoding of time is implemented as the time from mission start (measured in $ day $). The Time Keeping module also provides functionality for converting between this time measure and standard measures such as Julian Day Number and UTC time. The Time Keeping module contains the \verb+update_times+ and \verb+duty_cycle+ methods. These methods update the mission time during a survey simulation. The duty cycle determines when during the mission timeline the observatory is allowed to perform planet-finding operations. The duty cycle function takes the current mission time as input and outputs the next available time when exoplanet observations may begin or resume, along with the duration of the observational period. The outputs of this method are used in the Survey Simulation module to determine when and how long exoplanet finding and characterization observations occur. The inputs and updated attributes for the Time Keeping methods are depicted in \reffig{fig:timekeepingmodule}.
\begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=\textwidth]{TimeKeepingTasks} \end{tabular} \end{center} \caption{\label{fig:timekeepingmodule} Depiction of Time Keeping module methods including inputs and updated attributes (see \S\ref{sec:updatetimestask} and \S\ref{sec:dutycycletask}).} \end{figure}
\subsubsection{Time Keeping Object Attribute Initialization}
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[missionStart] \hfill \\ Mission start time in $ MJD $. Default value is 60634.
\item[missionLife] \hfill \\ Total length of mission in $ years $. Default value is 6.
\item[extendedLife] \hfill \\ Extended mission time in $ years $. Default value is 0. Extended life typically differs from the primary mission in some way---most typically only revisits are allowed.
\item[missionPortion] \hfill \\ Portion of mission time devoted to planet-finding. Default value is 1/6.
\end{description}
\end{itemize}
\subsubsection*{Attributes}
\begin{itemize}
\item
\begin{description}
\item[missionStart] \hfill \\ Mission start time (astropy Time object initially defined in $ MJD $)
\item[missionLife] \hfill \\ Mission lifetime (astropy Quantity initially set in $ years $)
\item[extendedLife] \hfill \\ Extended mission time (astropy Quantity initially set in $ years $)
\item[missionPortion] \hfill \\ Portion of mission time devoted to planet-finding
\item[duration] \hfill \\ Duration of planet-finding operations (astropy Quantity initially set in $ day $)
\item[nexttimeAvail] \hfill \\ Next time available for planet-finding (astropy Quantity initially set in $ day $)
\item[currenttimeNorm] \hfill \\ Current mission time normalized so that start date is 0 (astropy Quantity initially set in $ day $)
\item[currenttimeAbs] \label{sec:currenttime}\hfill \\ Current absolute mission time (astropy Time object initially defined in $ MJD $)
\item[missionFinishNorm] \hfill \\ Mission finish time (astropy Quantity initially set in $ day $)
\item[missionFinishAbs] \hfill \\ Mission completion date (astropy Time object initially defined in $ MJD $)
\end{description}
\end{itemize}
\subsubsection{update\_times Method} \label{sec:updatetimestask}
The \verb+update_times+ method updates the relevant mission times.
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[dt] \hfill \\ Time increment (astropy Quantity with units of time)
\end{description}
\end{itemize}
\subsubsection*{Updated Object Attributes}
\begin{itemize}
\item
\begin{description}
\item[TimeKeeping.currenttimeNorm] \hfill \\ Current mission time normalized so that start date is 0 (astropy Quantity with units of time)
\item[TimeKeeping.currenttimeAbs] \hfill \\ Current absolute mission time (astropy Time object)
\end{description}
\end{itemize}
\subsubsection{duty\_cycle Method} \label{sec:dutycycletask}
The \verb+duty_cycle+ method calculates the next time that the observatory will be available for exoplanet science and returns this time and the maximum amount of time afterwards during which an exoplanet observation can run (if capped).
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[currenttime] \hfill \\ Current time in mission simulation (astropy Quantity with units of time, often \verb+TimeKeeping.currenttimeNorm+)
\end{description}
\end{itemize}
\subsubsection*{Outputs}
\begin{itemize}
\item
\begin{description}
\item[nexttime] \hfill \\ Next available time for planet-finding (astropy Quantity with units of time)
\end{description}
\end{itemize}
\subsubsection*{Updated Object Attributes}
\begin{itemize}
\item
\begin{description}
\item[TimeKeeping.nexttimeAvail] \hfill \\ Next time available for planet-finding (astropy Quantity with units of time)
\item[TimeKeeping.duration] \hfill \\ Duration of planet-finding operations (astropy Quantity with units of time)
\end{description}
\end{itemize}
% POST-PROCESSING
\subsection{Post-Processing}\label{sec:postprocessing}
The Post-Processing module encodes the effects of post-processing on the data gathered in a simulated observation, and the effects on the final contrast of the simulation. The Post-Processing module is also responsible for determining whether a planet detection has occurred for a given observation, returning one of four possible states---true positive (real detection), false positive (false alarm), true negative (no detection when no planet is present) and false negative (missed detection).
These can be generated based solely on statistical modeling or by processing simulated images. The Post-Processing module contains the \verb+det_occur+ task. This task determines if a planet detection occurs for a given observation. The inputs and outputs for this task are depicted in \reffig{fig:postprocessingmodule}. \begin{figure}[ht] \begin{center} \begin{tabular}{c} \includegraphics[width=0.8\textwidth]{PostTasks} \end{tabular} \end{center} \caption{\label{fig:postprocessingmodule} Depiction of Post-Processing module task including inputs and outputs (see \S\ref{sec:detoccurtask}).} \end{figure} \subsubsection{Post-Processing Object Attribute Initialization} \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[FAP] \hfill \\ Detection false alarm probability. Default value is $3 \times 10^{-5}$. \item[MDP] \hfill \\ Missed detection probability. Default value is $10^{-3}$. \item[ppFact] \hfill \\ Post-processing contrast factor, between 0 and 1. Default value is 1. \item[SNimag] \hfill \\ Signal to Noise Ratio threshold for imaging/detection. Default value is 5. \item[SNchar] \hfill \\ Signal to Noise Ratio threshold for characterization. Default value is 11. \end{description} \end{itemize} \subsubsection*{Attributes} \begin{itemize} \item \begin{description} \item[BackgroundSources (object)] \hfill \\ BackgroundSources class object (see \ref{sec:backgroundsources}) \item[FAP] \hfill \\ Detection false alarm probability \item[MDP] \hfill \\ Missed detection probability \end{description} \end{itemize} \subsubsection{det\_occur Method} \label{sec:detoccurtask} The \verb+det_occur+ method determines if a planet detection has occurred. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[observationPossible] \hfill \\ 1D NumPy ndarray of booleans signifying if a planet in the system being observed is observable \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[FA] \hfill \\ Boolean where True means False Alarm \item[DET] \hfill \\ Boolean where True means DETection \item[MD] \hfill \\ Boolean where True means Missed Detection \item[NULL] \hfill \\ Boolean where True means Null Detection \end{description} \end{itemize} % COMPLETENESS \subsection{Completeness}\label{sec:completeness} The Completeness module takes in information from the Planet Population module to determine initial completeness and update completeness values for target list stars when called upon. The Completeness module contains the following methods: \verb+target_completeness+ and \verb+completeness_update+. \verb+target_completeness+ generates initial completeness values for each star in the target list (see \S\ref{sec:targetcompletenesstask}). \verb+completeness_update+ updates the completeness values following an observation (see \S\ref{sec:completenessupdatetask}). \subsubsection{Completeness Object Attribute Initialization} \subsubsection*{Input} Monte Carlo methods for calculating completeness will require an input of the number of planet samples called \verb+Nplanets+. 
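
To give a sense of what such a Monte Carlo calculation involves, the sketch below draws \verb+Nplanets+ random planets from simple placeholder distributions and counts the fraction that would be both outside a projected inner working angle and brighter than a limiting $ \Delta $mag. The distributions, the assumption of circular orbits, the Lambert phase function, and the cutoff values are illustrative assumptions only, not the prototype's population model.
\begin{verbatim}
import numpy as np

def mc_completeness(dist_pc, IWA_arcsec=0.075, dMagLim=22.5,
                    Nplanets=100000, seed=0):
    rng = np.random.default_rng(seed)
    # Placeholder planet samples: semi-major axis (AU), radius (Earth radii),
    # geometric albedo, and isotropically distributed phase angle beta.
    a = rng.uniform(0.5, 5.0, Nplanets)
    Rp = rng.uniform(0.5, 11.0, Nplanets)
    p = rng.uniform(0.2, 0.5, Nplanets)
    beta = np.arccos(rng.uniform(-1.0, 1.0, Nplanets))
    s = a*np.sin(beta)                       # projected separation (AU)
    # Lambert phase function and flux ratio -> delta magnitude
    Phi = (np.sin(beta) + (np.pi - beta)*np.cos(beta))/np.pi
    Rp_au = Rp*4.26e-5                       # Earth radius in AU
    dMag = -2.5*np.log10(p*(Rp_au/a)**2*Phi)
    # Angular separation in arcsec is s[AU]/d[pc]
    visible = (s/dist_pc > IWA_arcsec) & (dMag < dMagLim)
    return np.count_nonzero(visible)/float(Nplanets)

comp0 = mc_completeness(dist_pc=10.0)   # single-visit completeness at 10 pc
\end{verbatim}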
\subsubsection*{Attributes}
\begin{itemize}
\item
\begin{description}
\item[PlanetPopulation] \hfill \\ Planet Population module object (see \ref{sec:planetpopulation})
\item[PlanetPhysicalModel] \hfill \\ Planet Physical Model module object (see \ref{sec:planetphysicalmodel})
\end{description}
\end{itemize}
\subsubsection{target\_completeness Method} \label{sec:targetcompletenesstask}
The \verb+target_completeness+ method generates completeness values for each star in the target list.
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[targlist] \hfill \\ Instantiated Target List object from the Target List module (see \S\ref{sec:targetlist} for definition of functionality and attributes)
\end{description}
\end{itemize}
\subsubsection*{Outputs}
\begin{itemize}
\item
\begin{description}
\item[comp0] \hfill \\ 1D NumPy ndarray containing completeness values for each star in the target list
\end{description}
\end{itemize}
\subsubsection{gen\_update Method} \label{sec:genupdatetask}
The \verb+gen_update+ method generates dynamic completeness values for successive observations of each star in the target list.
\subsubsection*{Input}
\begin{itemize}
\item
\begin{description}
\item[TL] \hfill \\ Instantiated Target List object from the Target List module (see \S\ref{sec:targetlist} for definition of functionality and attributes)
\end{description}
\end{itemize}
\subsubsection{completeness\_update Method} \label{sec:completenessupdatetask}
The \verb+completeness_update+ method updates the completeness values for each star in the target list following an observation.
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[s\_ind] \hfill \\ Index of the star in the target list just observed
\item[targlist] \hfill \\ Instantiated Target List object from the Target List module (see \S\ref{sec:targetlist} for definition of functionality and attributes)
\item[obsbegin] \hfill \\ Mission time when the observation of \verb+s_ind+ began (astropy Quantity with units of time)
\item[obsend] \hfill \\ Mission time when the observation of \verb+s_ind+ ended (astropy Quantity with units of time)
\item[nexttime] \hfill \\ Mission time of next observational period (astropy Quantity with units of time)
\end{description}
\end{itemize}
\subsubsection*{Output}
\begin{itemize}
\item
\begin{description}
\item[comp0] \hfill \\ 1D NumPy ndarray of updated completeness values for each star in the target list
\end{description}
\end{itemize}
% TARGET LIST
\subsection{Target List}\label{sec:targetlist}
The Target List module takes in information from the Star Catalog, Optical System, Zodiacal Light, Post-Processing, Background Sources, Completeness, Planet Population, and Planet Physical Model modules to generate the target list for the simulated survey. This list can either contain all of the targets where a planet with specified parameter ranges could be observed or a list of pre-determined targets such as in the case of a mission which only seeks to observe stars where planets are known to exist from previous surveys. The final target list encodes all of the same information as is provided by the Star Catalog module.
\subsubsection{Target List Object Attribute Initialization}
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
%   \item[User specification] \hfill \\
%   Information from simulation specification JSON file organized into a Python dictionary. If \verb+key:value+ pairs are missing from the dictionary, the Target List object attributes will be assigned the default values.
\item[keepStarCatalog] \hfill \\ Boolean representing whether to delete the star catalog object after the target list is assembled (defaults to False). If True, the object reference will be available from the TargetList class object.
\item[minComp] \hfill \\ Minimum completeness value for inclusion in target list. Defaults to 0.1.
\end{description}
\end{itemize}
\subsubsection*{Attributes}
\begin{itemize}
\item
\begin{description}
\item[(StarCatalog values)] \hfill \\ Mission specific filtered star catalog values from StarCatalog class object (see \ref{sec:starcatalog})
\item[PlanetPopulation (object)] \hfill \\ PlanetPopulation class object (see \ref{sec:planetpopulation})
\item[PlanetPhysicalModel (object)] \hfill \\ PlanetPhysicalModel class object (see \ref{sec:planetphysicalmodel})
\item[StarCatalog (object)] \hfill \\ StarCatalog class object (only retained if keepStarCatalog is True, see \ref{sec:starcatalog})
\item[OpticalSystem (object)] \hfill \\ OpticalSystem class object (see \ref{sec:opticalsystem})
\item[ZodiacalLight (object)] \hfill \\ ZodiacalLight class object (see \ref{sec:zodiacallight})
\item[BackgroundSources (object)] \hfill \\ BackgroundSources class object (see \ref{sec:backgroundsources})
\item[PostProcessing (object)] \hfill \\ PostProcessing class object (see \ref{sec:postprocessing})
\item[Completeness (object)] \hfill \\ Completeness class object (see \ref{sec:completeness})
\item[maxintTime (astropy Quantity array)] \hfill \\ Maximum integration time for each target star in units of $ day $. Calculated from \verb+OpticalSystem.calc_maxintTime+ (see \S\ref{sec:calcmaxintTimetask})
\item[comp0 (float ndarray)] \hfill \\ Completeness value for each target star. Calculated from \verb+Completeness.target_completeness+ (see \S\ref{sec:targetcompletenesstask})
\item[minComp (float)] \hfill \\ Minimum completeness value for inclusion in target list.
\item[MsEst (float ndarray)] \hfill \\ Approximate stellar mass in $ M_{sun} $
\item[MsTrue (float ndarray)] \hfill \\ Stellar mass with an error component included in $ M_{sun} $
\item[nStars (int)] \hfill \\ Number of target stars
\end{description}
\end{itemize}
\subsubsection{starMag Method} \label{sec:starMagtask}
The \verb+starMag+ method calculates star visual magnitudes with B-V color using an empirical fit to data from Pecaut and Mamajek (2013, Appendix C). The expression for flux is accurate to about $7\%$, in the range of validity $ 400 \; nm < \lambda < 1000 \; nm $ (Traub et al. 2016).
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[sInds (integer ndarray)] \hfill \\ Indices of the stars of interest, with the length of the number of planets of interest
\item[lam (astropy Quantity)] \hfill \\ Wavelength in units of $ nm $
\end{description}
\end{itemize}
\subsubsection*{Output}
\begin{itemize}
\item
\begin{description}
\item[mV (float ndarray)] \hfill \\ Star visual magnitudes with B-V color
\end{description}
\end{itemize}
\subsubsection{populate\_target\_list Method} \label{sec:populatetargetlisttask}
This method is responsible for populating values from the star catalog (or any other source) into the target list attributes. It has no specific inputs or outputs, but is always passed the full specification dictionary, and updates all relevant Target List attributes. This method is called from the prototype constructor, and does not need to be called from the implementation constructor when overloaded in the implementation.
The prototype implementation copies values directly from the star catalog and removes stars with any NaN attributes. It also calls the \verb+target_completeness+ method in the Completeness module (\S\ref{sec:targetcompletenesstask}) and the \verb+calc_maxintTime+ method in the Optical System module (\S\ref{sec:calcmaxintTimetask}) to generate the initial completeness and maximum integration time for all targets. It also generates 'true' and 'approximate' star masses using object method \verb+stellar_mass+ (see below).
\subsubsection{filter\_target\_list Method} \label{sec:filtertargetlisttask}
This method is responsible for filtering the target list assembled by \verb+populate_target_list+, removing targets that do not meet the mission criteria described below. It has no specific inputs or outputs, but is always passed the full specification dictionary, and updates all relevant Target List attributes. This method is called from the prototype constructor, immediately after the \verb+populate_target_list+ call, and does not need to be called from the implementation constructor when overloaded in the implementation. The prototype implementation filters out any targets where the widest separation planet in the modeled population would be inside the system IWA, any targets where the limiting delta mag is above the population maximum delta mag, where the integration time for the brightest planet in the modeled population is above the specified maximum integration time, and all targets where the initial completeness is below the specified threshold.
\subsubsection{Target List Filtering Helper Methods}
The \verb+filter_target_list+ method calls multiple helper functions to perform the actual filtering tasks. Additional filters can be defined in specific implementations and by overloading the \verb+filter_target_list+ method. The filter subtasks (with a few exceptions) take no inputs and operate directly on object attributes. The prototype implementation provides the following methods:
\begin{itemize}
\item \verb+nan_filter+: Filters any target list entries with NaN values
\item \verb+binary_filter+: Filters any targets with attribute \verb+Binary_Cut+ set to True
\item \verb+main_sequence_filter+: Filters any targets that are not on the Main Sequence (estimated from the MV and BV attributes)
\item \verb+fgk_filter+: Filters any targets that are not F, G, or K stars
\item \verb+vis_mag_filter+: Filters out targets with visible magnitudes below input value \verb+Vmagcrit+
\item \verb+outside_IWA_filter+: Filters out targets with all planets inside the IWA
\item \verb+int_cutoff_filter+: Filters out all targets with maximum integration times above the specified (in the input spec) threshold integration time
\item \verb+max_dmag_filter+: Filters out all targets with minimum delta mag above the limiting delta mag (from input spec)
\item \verb+completeness_filter+: Filters out all targets with single visit completeness below the specified (in the input spec) threshold completeness
\item \verb+revise_lists+: General helper function for applying filters.
\end{itemize}
% SIMULATED UNIVERSE
\subsection{Simulated Universe} \label{sec:simulateduniverse}
The Simulated Universe module instantiates the Target List module and creates a synthetic universe by populating planetary systems about some or all of the stars in the target list.
For each target, a planetary system is generated based on the statistics encoded in the Planet Population module, so that the overall planet occurrence and multiplicity rates are consistent with the provided distribution functions. Physical parameters for each planet are similarly sampled from the input density functions (or calculated via the Planet physical model). All planetary orbital and physical parameters are encoded as arrays of values, with an indexing array that maps planets to the stars in the target list. The Simulated Universe module contains the following methods: \begin{itemize} \item[] \verb+gen_planetary_systems+ \item[] \verb+planet_pos_vel+ \item[] \verb+prop_system+ \item[] \verb+get_current_WA+ \end{itemize} \verb+gen_planetary_systems+ populates the orbital elements and physical characteristics of all planets (see \S\ref{sec:genplanetarysystemstask}). \verb+planet_pos_vel+ finds initial position and velocity vectors for each planet (see \S\ref{sec:planetposveltask}). \verb+prop_system+ propagates planet position, velocity, and star separation in time (see \S\ref{sec:propsystemtask}). \verb+get_current_WA+ calculates planet current working angle (see \S\ref{sec:getcurrentWAtask}). All planetary parameters are generated in the constructor via calls to the appropriate value generating functions in the planet population module. %\begin{figure}[ht] % \begin{center} % \begin{tabular}{c} % \includegraphics[width=\textwidth]{SimulatedUniverseTasks} % \end{tabular} % \end{center} % \caption{\label{fig:simulateduniversemodule} Depiction of Simulated Universe module methods including inputs and outputs (see \ref{sec:planettostartask}, \ref{sec:planetatask}, \ref{sec:planetetask}, \ref{sec:planetwtask}, \ref{sec:planetOtask}, \ref{sec:planetmassestask}, \ref{sec:planetradiitask}, \ref{sec:planetposveltask}, \ref{sec:planetalbedostask}, \ref{sec:planetinclinationstask}, and \ref{sec:propsystemtask}).} %\end{figure} \subsubsection{Attributes} \begin{itemize} \item \begin{description} \item[PlanetPopulation (object)] \hfill \\ PlanetPopulation class object (see \ref{sec:planetpopulation}) \item[PlanetPhysicalModel (object)] \hfill \\ PlanetPhysicalModel class object (see \ref{sec:planetphysicalmodel}) \item[OpticalSystem (object)] \hfill \\ OpticalSystem class object (see \ref{sec:opticalsystem}) \item[ZodiacalLight (object)] \hfill \\ ZodiacalLight class object (see \ref{sec:zodiacallight}) \item[BackgroundSources (object)] \hfill \\ BackgroundSources class object (see \ref{sec:backgroundsources}) \item[PostProcessing (object)] \hfill \\ PostProcessing class object (see \ref{sec:postprocessing}) \item[Completeness (object)] \hfill \\ Completeness class object (see \ref{sec:completeness}) \item[TargetList (object)] \hfill \\ TargetList class object (see \ref{sec:targetlist}) \item[nPlans (integer)] \hfill \\ Total number of planets \item[plan2star (integer ndarray)] \hfill \\ Indices mapping planets to target stars in TargetList \item[sInds (integer ndarray)] \hfill \\ Unique indices of stars with planets in TargetList \item[a (astropy Quantity array)] \hfill \\ Planet semi-major axis in units of $ AU $ \item[e (float ndarray)] \hfill \\ Planet eccentricity \item[I (astropy Quantity array)] \hfill \\ Planet inclination in units of $ deg $ \item[O (astropy Quantity array)] \hfill \\ Planet right ascension of the ascending node in units of $ deg $ \item[w (astropy Quantity array)] \hfill \\ Planet argument of perigee in units of $ deg $ \item[p (float ndarray)] \hfill \\ Planet albedo 
\item[Rp (astropy Quantity array)] \hfill \\ Planet radius in units of $ km $
\item[Mp (astropy Quantity array)] \hfill \\ Planet mass in units of $ kg $
\item[r (astropy Quantity n$\times$3 array)] \hfill \\ Planet position vector in units of $ km $
\item[v (astropy Quantity n$\times$3 array)] \hfill \\ Planet velocity vector in units of $ km/s $
\item[s (astropy Quantity array)] \hfill \\ Planet-star apparent separations in units of $ km $
\item[d (astropy Quantity array)] \hfill \\ Planet-star distances in units of $ km $
\item[fEZ (astropy Quantity array)] \hfill \\ Surface brightness of exozodiacal light in units of $ 1/arcsec^2 $, determined from \verb+ZodiacalLight.fEZ+ (see \S\ref{sec:fEZtask})
\end{description}
\end{itemize}
\subsubsection{gen\_planetary\_systems Method} \label{sec:genplanetarysystemstask}
The \verb+gen_planetary_systems+ method generates the planetary systems for the current simulated universe. This routine populates arrays of the orbital elements and physical characteristics of all planets, and generates indices that map from planet to parent star.
\subsubsection*{Inputs}
This method uses the inherited TargetList and PlanetPopulation objects, and takes a single optional input:
\begin{itemize}
\item
\begin{description}
\item[M (ndarray)] \hfill \\ Initial mean anomaly of all planets. Must be an ndarray of the same size as PlanetPopulation.a (and all other orbital parameters). Must be in radians, in the range $[0,2\pi)$. If set to None (the default) this array will be populated with uniformly distributed random values in that range.
\end{description}
\end{itemize}
\subsubsection*{Updated Attributes}
\begin{itemize}
\item
\begin{description}
\item[nPlans (integer)] \hfill \\ Total number of planets
\item[plan2star (integer ndarray)] \hfill \\ Indices mapping planets to target stars in TargetList
\item[sInds (integer ndarray)] \hfill \\ Unique indices of stars with planets in TargetList
\item[a (astropy Quantity array)] \hfill \\ Planet semi-major axis in units of $ AU $
\item[e (float ndarray)] \hfill \\ Planet eccentricity
\item[I (astropy Quantity array)] \hfill \\ Planet inclination in units of $ deg $
\item[O (astropy Quantity array)] \hfill \\ Planet right ascension of the ascending node in units of $ deg $
\item[w (astropy Quantity array)] \hfill \\ Planet argument of perigee in units of $ deg $
\item[r (astropy Quantity n$\times$3 array)] \hfill \\ Planet position vector in units of $ km $
\item[v (astropy Quantity n$\times$3 array)] \hfill \\ Planet velocity vector in units of $ km/s $
\item[s (astropy Quantity array)] \hfill \\ Planet-star apparent separations in units of $ km $
\item[d (astropy Quantity array)] \hfill \\ Planet-star distances in units of $ km $
\item[Mp (astropy Quantity array)] \hfill \\ Planet mass in units of $ kg $
\item[Rp (astropy Quantity array)] \hfill \\ Planet radius in units of $ km $
\item[p (float ndarray)] \hfill \\ Planet albedo
\item[fEZ (astropy Quantity array)] \hfill \\ Surface brightness of exozodiacal light in units of $ 1/arcsec^2 $, determined from \verb+ZodiacalLight.fEZ+ (see \S\ref{sec:fEZtask})
\end{description}
\end{itemize}
\subsubsection{planet\_pos\_vel Method} \label{sec:planetposveltask}
The \verb+planet_pos_vel+ method assigns each planet an initial position and velocity vector with appropriate astropy Quantity units attached.
\subsubsection*{Inputs}
This method does not take any explicit inputs.
It uses the following attributes assigned before calling this method: \begin{itemize} \item \verb+SimulatedUniverse.a+ \item \verb+SimulatedUniverse.e+ \item \verb+SimulatedUniverse.I+ \item \verb+SimulatedUniverse.O+ \item \verb+SimulatedUniverse.w+ \item \verb+SimulatedUniverse.Mp+ \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[r (astropy Quantity n$\times$3 array)] \hfill \\ Planet position vector in units of $ km $ \item[v (astropy Quantity n$\times$3 array)] \hfill \\ Planet velocity vector in units of $ km/s $ \end{description} \end{itemize} \subsubsection{prop\_system Method} \label{sec:propsystemtask} The \verb+prop_system+ method propagates planet state vectors (position and velocity) in time. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[r (astropy Quantity n$\times$3 array)] \hfill \\ Initial position vector of each planet in units of $ km $ \item[v (astropy Quantity n$\times$3 array)] \hfill \\ Initial velocity vector of each planet in units of $ km/s $ \item[Mp (astropy Quantity array)] \hfill \\ Planet masses in units of $ kg $ \item[Ms (float ndarray)] \hfill \\ Target star masses in M\_sun \item[dt (astropy Quantity)] \hfill \\ Time increment to propagate the system in units of $ day $ \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[rnew (astropy Quantity n$\times$3 array)] \hfill \\ Propagated position vectors in units of $ km $ \item[vnew (astropy Quantity n$\times$3 array)] \hfill \\ Propagated velocity vectors in units of $ km/s $ \item[snew (astropy Quantity array)] \hfill \\ Propagated apparent separations in units of $ km $ \item[dnew (astropy Quantity array)] \hfill \\ Propagated planet-star distances in units of $ km $ \end{description} \end{itemize} \subsubsection{get\_current\_WA Method} \label{sec:getcurrentWAtask} The \verb+get_current_WA+ method calculates the current working angles for planets specified by the given indices. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[pInds (integer ndarray)] \hfill \\ Integer indices of the planets of interest \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[WA (astropy Quantity array)] \hfill \\ Working angles in units of $ arcsec $ \end{description} \end{itemize} % SURVEY SIMULATION \subsection{Survey Simulation} \label{sec:surveysim} %The Survey Simulation module takes as input instances of the Simulated Universe module and the Time Keeping, and Post-Processing modules. This is the module that performs a specific simulation based on all of the input parameters and models. This module returns the mission timeline - an ordered list of simulated observations of various targets on the target list along with their outcomes. The output also includes an encoding of the final state of the simulated universe (so that a subsequent simulation can start from where a previous simulation left off) and the final state of the observatory definition (so that post-simulation analysis can determine the percentage of volatiles expended, and other engineering metrics). 
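
As an indication of how the resulting DRM list is intended to be consumed, the short Python sketch below tallies detections from a list of observation dictionaries using the key names produced by the prototype (see \S\ref{sec:runsimtask} for the full set of keys). The two entries shown are made up purely for illustration.
\begin{verbatim}
# Two made-up DRM entries for illustration only
DRM = [
    {'target_ind': 12, 'arrival_time': 30.5, 'det_status': 1,
     'det_int_time': 1.2},
    {'target_ind': 47, 'arrival_time': 42.0, 'det_status': [0, 1],
     'det_int_time': 0.8},
]

def count_detections(drm):
    # det_status is 1 for a detection; it may be a single integer or a
    # list with one entry per planet in the observed system.
    n = 0
    for obs in drm:
        status = obs.get('det_status', 0)
        if isinstance(status, list):
            n += sum(1 for s in status if s == 1)
        elif status == 1:
            n += 1
    return n

print(count_detections(DRM))   # prints 2
\end{verbatim}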
The Survey Simulation module implements the following top-level method:
\begin{itemize}
\item \verb+run_sim()+ - perform survey simulation (\S\ref{sec:runsimtask})
\end{itemize}
which calls the following sub-tasks:
\begin{itemize}
\item \verb+initial_target()+ - find initial target star (\S\ref{sec:initialtargettask})
\item \verb+observation_detection(pInds, s_ind, DRM, planPosTime)+ - finds if planet detections are possible and returns relevant information (\S\ref{sec:observationdetectiontask})
\item \verb+det_data(s, dMag, Ip, DRM, FA, DET, MD, s_ind, pInds, observationPossible, observed)+ - determines detection status (\S\ref{sec:detdatatask})
\item
\begin{verbatim}
observation_characterization(observationPossible, pInds, s_ind, spectra,
                             s, Ip, DRM, FA, t_int)
\end{verbatim}
finds if characterizations are possible and returns relevant information (\S\ref{sec:observationcharacterizationtask})
\item \verb+next_target(s_ind, revisit_list, extended_list, DRM)+ - find next target (scheduler) (\S\ref{sec:nexttargettask})
\end{itemize}
\subsubsection{Survey Simulation Object Attribute Initialization}
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
%   \item[User specification] \hfill \\
%   Information from simulation specification JSON file organized into a Python dictionary. If the below \verb+key:value+ pairs are missing from the dictionary, the Survey Simulation object attributes will be assigned the default values listed.
\item[OpticalSystem] \hfill \\ Instance of Optical System module inherited from Simulated Universe module (see \ref{sec:opticalsystem})
\item[PlanetPopulation] \hfill \\ Instance of Planet Population module inherited from Simulated Universe module (see \ref{sec:planetpopulation})
\item[ZodiacalLight] \hfill \\ Instance of Zodiacal Light module inherited from Simulated Universe module (see \ref{sec:zodiacallight})
\item[Completeness] \hfill \\ Instance of Completeness module inherited from Simulated Universe module (see \ref{sec:completeness})
\item[TargetList] \hfill \\ Instance of Target List module inherited from Simulated Universe module (see \ref{sec:targetlist})
\item[PlanetPhysicalModel] \hfill \\ Instance of Planet Physical Model module inherited from Simulated Universe module (see \ref{sec:planetphysicalmodel})
\item[SimulatedUniverse] \hfill \\ Instance of Simulated Universe module (see \ref{sec:simulateduniverse})
\item[Observatory] \hfill \\ Instance of Observatory module (see \ref{sec:observatory})
\item[TimeKeeping] \hfill \\ Instance of Time Keeping module (see \ref{sec:time})
\item[PostProcessing] \hfill \\ Instance of Post-Processing module (see \ref{sec:postprocessing})
\end{description}
\end{itemize}
\subsubsection*{Attributes}
\begin{itemize}
\item
\begin{description}
\item[OpticalSystem] \hfill \\ Instance of Optical System module (see \ref{sec:opticalsystem})
\item[PlanetPopulation] \hfill \\ Instance of Planet Population module (see \ref{sec:planetpopulation})
\item[ZodiacalLight] \hfill \\ Instance of Zodiacal Light module (see \ref{sec:zodiacallight})
\item[Completeness] \hfill \\ Instance of Completeness module (see \ref{sec:completeness})
\item[TargetList] \hfill \\ Instance of Target List module (see \ref{sec:targetlist})
\item[PlanetPhysicalModel] \hfill \\ Instance of Planet Physical Model module (see \ref{sec:planetphysicalmodel})
\item[SimulatedUniverse] \hfill \\ Instance of Simulated Universe module (see \ref{sec:simulateduniverse})
\item[Observatory] \hfill \\ Instance of Observatory module (see \ref{sec:observatory})
\item[TimeKeeping] \hfill \\ Instance of Time Keeping module (see \ref{sec:time})
\item[PostProcessing] \hfill \\ Instance of Post-Processing module (see
\ref{sec:postprocessing})
\item[DRM] \hfill \\ Contains the results of the survey simulation
\end{description}
\end{itemize}
\subsubsection{run\_sim Method} \label{sec:runsimtask}
The \verb+run_sim+ method performs the survey simulation and populates the results in \verb+SurveySimulation.DRM+.
\subsubsection*{Inputs}
This method does not take any explicit inputs. It uses the inherited modules to generate a survey simulation.
\subsubsection*{Updated Object Attributes}
\begin{itemize}
\item
\begin{description}
\item[SurveySimulation.DRM] \hfill \\ Python list where each entry contains a dictionary of survey simulation results for each observation. The dictionary may include the following key:value pairs (from the prototype):
\begin{description}
\item[target\_ind] \hfill \\ Index of star in target list observed
\item[arrival\_time] \hfill \\ Days since mission start when observation begins
\item[sc\_mass] \hfill \\ Maneuvering spacecraft mass (if simulating an occulter system)
\item[dF\_lateral] \hfill \\ Lateral disturbance force on occulter in $ N $ if simulating an occulter system
\item[dF\_axial] \hfill \\ Axial disturbance force on occulter in $ N $ if simulating an occulter system
\item[det\_dV] \hfill \\ Detection station-keeping $\Delta$V in $ m/s $ if simulating an occulter system
\item[det\_mass\_used] \hfill \\ Detection station-keeping fuel mass used in $ kg $ if simulating an occulter system
\item[det\_int\_time] \hfill \\ Detection integration time in $ day $
\item[det\_status] \hfill \\ Integer or list where
\begin{itemize}
\item 1 = detection
\item 0 = null detection
\item -1 = missed detection
\item -2 = false alarm
\end{itemize}
\item[det\_WA] \hfill \\ Detection WA in $ mas $
\item[det\_dMag] \hfill \\ Detection $ \Delta $mag
\item[char\_1\_time] \hfill \\ Characterization integration time in $ day $
\item[char\_1\_dV] \hfill \\ Characterization station-keeping $\Delta$V in $ m/s $ if simulating an occulter system
\item[char\_1\_mass\_used] \hfill \\ Characterization station-keeping fuel mass used in $ kg $ if simulating an occulter system
\item[char\_1\_success] \hfill \\ Characterization success, where the value may be:
\begin{itemize}
\item 1 - successful characterization
\item the effective wavelength found during characterization in $ nm $
\end{itemize}
\item[slew\_time] \hfill \\ Slew time to next target in $ day $ if simulating an occulter system
\item[slew\_dV] \hfill \\ Slew $\Delta$V in $ m/s $ if simulating an occulter system
\item[slew\_mass\_used] \hfill \\ Slew fuel mass used in $ kg $ if simulating an occulter system
\item[slew\_angle] \hfill \\ Slew angle to next target in $ rad $
\end{description}
\end{description}
\end{itemize}
\subsubsection{initial\_target Sub-task} \label{sec:initialtargettask}
The \verb+initial_target+ sub-task is called from the \verb+run_sim+ method to determine the index of the initial target star in the target list.
\subsubsection*{Inputs}
This sub-task does not take any explicit inputs. It may use any of the inherited modules to generate the initial target star index.
\subsubsection*{Outputs}
\begin{itemize}
\item
\begin{description}
\item[s\_ind] \hfill \\ Index of the initial target star
\end{description}
\end{itemize}
\subsubsection{observation\_detection Sub-task} \label{sec:observationdetectiontask}
The \verb+observation_detection+ sub-task is called from the \verb+run_sim+ task to determine if planets may be detected and calculate information needed later in the simulation.
\subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[pInds] \hfill \\ 1D NumPy ndarray of indices of planets belonging to the target star (used to get relevant attributes from the \verb+SimulatedUniverse+ module) \item[s\_ind] \hfill \\ Index of target star in target list \item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs \item[planPosTime] \hfill \\ 1D NumPy ndarray containing the times at which the planet positions and velocities contained in \verb+SimulatedUniverse.r+ and \verb+SimulatedUniverse.v+ are current (astropy Quantity with units of time) \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[observationPossible] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing boolean values where True is an observable planet \item[t\_int] \hfill \\ Integration time (astropy Quantity with units of time) \item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs \item[s] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing apparent separation of planets (astropy Quantity with units of distance) \item[dMag] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing $ \Delta $mag for each planet \item[Ip] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing irradiance (astropy Quantity with units of $ \frac{1}{m^2 \cdot nm \cdot s} $) \end{description} \end{itemize} \subsubsection{det\_data Sub-task} \label{sec:detdatatask} The \verb+det_data+ sub-task is called from the \verb+run_sim+ task to assign a detection status to the dictionary of current observation results. 
\subsubsection*{Inputs}
\begin{itemize}
\item
\begin{description}
\item[s] \hfill \\ 1D NumPy array (length is number of planets in the system under observation) containing apparent separation of planets (astropy Quantity with units of distance)
\item[dMag] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing $ \Delta $mag for each planet
\item[Ip] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing irradiance (astropy Quantity with units of $ \frac{1}{m^2 \cdot nm \cdot s} $)
\item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs
\item[FA] \hfill \\ Boolean where True is False Alarm
\item[DET] \hfill \\ Boolean where True is DETection
\item[MD] \hfill \\ Boolean where True is Missed Detection
\item[s\_ind] \hfill \\ Index of target star in target list
\item[pInds] \hfill \\ 1D NumPy ndarray of indices of planets belonging to the target star (used to get relevant attributes from the \verb+SimulatedUniverse+ module)
\item[observationPossible] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing boolean values where True is an observable planet
\item[observed] \hfill \\ 1D NumPy ndarray which contains the number of observations for each planet in the simulated universe
\end{description}
\end{itemize}
\subsubsection*{Outputs}
\begin{itemize}
\item
\begin{description}
\item[s] \hfill \\ 1D NumPy array (length is number of planets in the system under observation) containing apparent separation of planets (astropy Quantity with units of distance)
\item[dMag] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing $ \Delta $mag for each planet
\item[Ip] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing irradiance (astropy Quantity with units of $ \frac{1}{m^2 \cdot nm \cdot s} $)
\item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs
\item[observed] \hfill \\ 1D NumPy ndarray which contains the number of observations for each planet in the simulated universe
\end{description}
\end{itemize}
\subsubsection{observation\_characterization Sub-task} \label{sec:observationcharacterizationtask}
The \verb+observation_characterization+ sub-task is called by the \verb+run_sim+ task to determine if characterizations are to be performed and calculate relevant characterization information to be used later in the observation simulation.
\subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[observationPossible] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing boolean values where True is an observable planet \item[pInds] \hfill \\ 1D NumPy ndarray of indices of planets belonging to the target star (used to get relevant attributes from the \verb+SimulatedUniverse+ module) \item[s\_ind] \hfill \\ Index of target star in target list \item[spectra] \hfill \\ NumPy ndarray where 1 denotes spectra for a planet that has been captured, 0 denotes spectra for a planet that has not been captured \item[s] \hfill \\ 1D NumPy array (length is number of planets in the system under observation) containing apparent separation of planets (astropy Quantity with units of distance) \item[Ip] \hfill \\ 1D NumPy ndarray (length is number of planets in the system under observation) containing irradiance (astropy Quantity with units of $ \frac{1}{m^2 \cdot nm \cdot s} $) \item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs \item[FA] \hfill \\ Boolean where True is False Alarm \item[t\_int] \hfill \\ Integration time (astropy Quantity with units of time) \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs \item[FA] \hfill \\ Boolean where True is False Alarm \item[spectra] \hfill \\ NumPy ndarray where 1 denotes spectra for a planet that has been captured, 0 denotes spectra for a planet that has not been captured \end{description} \end{itemize} \subsubsection{next\_target Sub-task} \label{sec:nexttargettask} The \verb+next_target+ sub-task is called from the \verb+run_sim+ task to determine the index of the next star from the target list for observation. \subsubsection*{Inputs} \begin{itemize} \item \begin{description} \item[s\_ind] \hfill \\ Index of current star from the target list \item[targlist] \hfill \\ Target List module (see \ref{sec:targetlist}) \item[revisit\_list] \hfill \\ NumPy ndarray containing index of target star and time in $ day $ of target stars from the target list to revisit \item[extended\_list] \hfill \\ 1D NumPy ndarray containing the indices of stars in the target list to consider if in extended mission time \item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs \end{description} \end{itemize} \subsubsection*{Outputs} \begin{itemize} \item \begin{description} \item[new\_s\_ind] \hfill \\ Index of next target star in the target list \item[DRM] \hfill \\ Python dictionary containing survey simulation results of current observation as key:value pairs \end{description} \end{itemize} % SURVEY ENSEMBLE NEEDS UPDATING \subsection{Survey Ensemble} The Survey Ensemble module's only task is to run multiple simulations. While the implementation of this module is not at all dependent on a particular mission design, it can vary to take advantage of available parallel-processing resources. As the generation of a survey ensemble is an embarrassingly parallel task---every survey simulation is fully independent and can be run as a completely separate process---significant gains in execution time can be achieved with parallelization. 
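
A minimal sketch of this kind of embarrassingly parallel ensemble generation, written here with Python's standard multiprocessing library rather than IPython Parallel, is shown below. The \verb+run_one_sim+ function is a stand-in for whatever builds a survey simulation from the input specification and returns its DRM; everything in the sketch is illustrative.
\begin{verbatim}
import multiprocessing as mp
import random

def run_one_sim(seed):
    # Stand-in for: build a SurveySimulation from the input specification
    # (re-seeding the stochastic modules), call run_sim(), return the DRM.
    random.seed(seed)
    return {'seed': seed, 'n_observations': random.randint(50, 150)}

def run_ensemble(n_sims=8, n_workers=4):
    # Every simulation is fully independent, so a simple process pool suffices.
    with mp.Pool(processes=n_workers) as pool:
        results = pool.map(run_one_sim, range(n_sims))
    return results

if __name__ == '__main__':
    ensemble = run_ensemble()
    print(len(ensemble), 'simulations completed')
\end{verbatim}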
The baseline implementation of this module contains a simple looping function that executes the desired number of simulations sequentially, as well as a locally parallelized version based on IPython Parallel. Depending on the local setup, the Survey Ensemble implementation could also potentially save time by cloning survey module objects and reinitializing only those sub-modules that have stochastic elements (i.e., the simulated universe). Another possible implementation variation is to use the Survey Ensemble module to conduct investigations of the effects of varying any normally static parameter. This could be done, for example, to explore the impact on yield in cases where the non-coronagraph system throughput, or elements of the propulsion system, are mischaracterized prior to launch. Such a Survey Ensemble implementation would overwrite the parameter of interest given in the input specification for every individual survey executed, and save the true value of the parameter used along with the simulation output.
\section*{Acknowledgements}
EXOSIMS development is supported by NASA Grant Nos. NNX14AD99G (GSFC) and NNX15AJ67G (WPS).
\end{document}
{ "alphanum_fraction": 0.7309456826, "avg_line_length": 64.2969283276, "ext": "tex", "hexsha": "6c38b5811b673385db0301522097c0bf54e65e67", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ce41adc8c162b6330eb9cefee83f3a395bcff614", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "douglase/EXOSIMS", "max_forks_repo_path": "ICD/icd.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "ce41adc8c162b6330eb9cefee83f3a395bcff614", "max_issues_repo_issues_event_max_datetime": "2020-06-26T00:18:37.000Z", "max_issues_repo_issues_event_min_datetime": "2016-08-13T18:39:39.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "dgarrett622/EXOSIMS", "max_issues_repo_path": "ICD/icd.tex", "max_line_length": 1381, "max_stars_count": null, "max_stars_repo_head_hexsha": "ce41adc8c162b6330eb9cefee83f3a395bcff614", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "dgarrett622/EXOSIMS", "max_stars_repo_path": "ICD/icd.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 31520, "size": 131873 }
\hypertarget{language_8h}{}\section{jpconj/\+Ui\+Helper/language.h File Reference} \label{language_8h}\index{jpconj/\+Ui\+Helper/language.\+h@{jpconj/\+Ui\+Helper/language.\+h}} {\ttfamily \#include $<$Q\+Application$>$}\\* {\ttfamily \#include $<$Q\+Debug$>$}\\* {\ttfamily \#include $<$Q\+Dir$>$}\\* {\ttfamily \#include $<$Q\+File$>$}\\* {\ttfamily \#include $<$Q\+Hash$>$}\\* {\ttfamily \#include $<$Q\+Main\+Window$>$}\\* {\ttfamily \#include $<$Q\+Pair$>$}\\* {\ttfamily \#include $<$Q\+Settings$>$}\\* {\ttfamily \#include $<$Q\+String$>$}\\* {\ttfamily \#include $<$Q\+Translator$>$}\\* \subsection*{Classes} \begin{DoxyCompactItemize} \item class \hyperlink{class_language}{Language} \end{DoxyCompactItemize} \subsection*{Typedefs} \begin{DoxyCompactItemize} \item typedef Q\+Pair$<$ Q\+Translator $\ast$, Q\+Translator $\ast$ $>$ \hyperlink{language_8h_ae897088620900770712f299b9302ff40}{Pair\+Trans} \end{DoxyCompactItemize} \subsection{Typedef Documentation} \index{language.\+h@{language.\+h}!Pair\+Trans@{Pair\+Trans}} \index{Pair\+Trans@{Pair\+Trans}!language.\+h@{language.\+h}} \subsubsection[{\texorpdfstring{Pair\+Trans}{PairTrans}}]{\setlength{\rightskip}{0pt plus 5cm}typedef Q\+Pair$<$Q\+Translator$\ast$, Q\+Translator$\ast$$>$ {\bf Pair\+Trans}}\hypertarget{language_8h_ae897088620900770712f299b9302ff40}{}\label{language_8h_ae897088620900770712f299b9302ff40}
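A minimal usage sketch (illustrative only, not taken from the project sources): a {\ttfamily PairTrans} can bundle the application translator and the Qt translator for one language, so that both can be installed and later removed together.

\begin{verbatim}
// Illustrative sketch only; the translation file names are assumptions.
PairTrans loadLanguage(QApplication &app, const QString &lang)
{
    QTranslator *appTr = new QTranslator(&app);
    QTranslator *qtTr  = new QTranslator(&app);
    appTr->load("jpconj_" + lang);     // application strings (assumed name)
    qtTr->load("qt_" + lang);          // standard Qt strings (assumed name)
    app.installTranslator(appTr);
    app.installTranslator(qtTr);
    return qMakePair(appTr, qtTr);
}
\end{verbatim}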
\newpage \subsection{Scalar AES Acceleration} \label{sec:scalar:aes} This section details proposals for acceleration of the AES block cipher \cite{nist:fips:197} within a scalar RISC-V core, obeying the two-read-one-write constraint on general purpose register file accesses. Supporting material, including rationale and a design space exploration for these instructions can be found in \cite{cryptoeprint:2020:930}. \subsubsection{RV32 Instructions} \label{sec:scalar:aes:rv32} \begin{bytefield}[bitwidth={1.05em},endianness={big}]{32} \bitheader{0-31} \\ \encaesthreetwoesmi \encaesthreetwoesi \encaesthreetwodsmi \encaesthreetwodsi \end{bytefield} \begin{cryptoisa} aes32esi rt, rs2, bs // Encrypt: SubBytes aes32esmi rt, rs2, bs // Encrypt: SubBytes & MixColumns aes32dsi rt, rs2, bs // Decrypt: SubBytes aes32dsmi rt, rs2, bs // Decrypt: SubBytes & MixColumns \end{cryptoisa} These instructions are a very lightweight proposal, derived from \cite{MJS:LWAES:20}. They are designed to enable a partial T-Table based implementation of AES in hardware, where the SubBytes, ShiftRows and MixColumns transformations are all rolled into a single instruction, with the per-byte results then accumulated. The {\tt bs} immediate operand is a 2-bit {\em Byte Select}, and indicates which byte of the input word is operated on. RISC-V Sail model code for each instruction is found in figure \ref{fig:sail:aes:rv32}. Note that the instructions source their destination register from bits $19:15$ of the encoding, rather than the usual $11:7$. This is because the instructions are designed to be used such that the destination register is always the same as {\tt rs1}. See Appendix \ref{sec:scalar:encodings} for more information. These instructions use the Equivalent Inverse Cipher construction \cite[Section 5.3.5]{nist:fips:197}. This affects the computation of the KeySchedule, as shown in \cite[Figure 15]{nist:fips:197}. \begin{figure}[h] \lstinputlisting[language=sail,firstline=55,lastline=71]{../extern/sail-riscv/model/riscv_insts_kext_rv32.sail} \caption{RISC-V Sail model specification for the lightweight AES instructions targeting the RV32 base architecture.} \label{fig:sail:aes:rv32} \end{figure} % ------------------------------------------------------------ \newpage \subsubsection{RV64 Instructions} \label{sec:scalar:aes:rv64} \begin{bytefield}[bitwidth={1.05em},endianness={big}]{32} \bitheader{0-31} \\ \encaessixfourksonei \encaessixfourkstwo \encaessixfourim \encaessixfouresm \encaessixfoures \encaessixfourdsm \encaessixfourds \end{bytefield} \begin{cryptoisa} aes64ks1i rd, rs1, rcon // KeySchedule: SubBytes, Rotate, Round Const aes64ks2 rd, rs1, rs2 // KeySchedule: XOR summation aes64im rd, rs1 // KeySchedule: InvMixColumns for Decrypt aes64esm rd, rs1, rs2 // Round: ShiftRows, SubBytes, MixColumns aes64es rd, rs1, rs2 // Round: ShiftRows, SubBytes aes64dsm rd, rs1, rs2 // Round: InvShiftRows, InvSubBytes, InvMixColumns aes64ds rd, rs1, rs2 // Round: InvShiftRows, InvSubBytes \end{cryptoisa} These instructions are for RV64 only. They implement the SubBytes, ShiftRows and MixColumns transformations of AES. Each round instruction takes two 64-bit registers as input, representing the 128-bit state of the AES cipher, and outputs one 64-bit result, i.e. half of the next round state. The byte mapping of input register values to AES state and output register values is shown in \figref{aes:rv64:mapping}. RISC-V Sail model code for the instructions is illustrated in \figref{pesudo:aes:rv64}. 
\begin{itemize} \item The \mnemonic{aes64ks1i}/\mnemonic{aes64ks2} instructions are used in the encrypt KeySchedule. \mnemonic{aes64ks1i} implements the rotation, SubBytes and Round Constant addition steps. \mnemonic{aes64ks2} implements the remaining {\tt xor} operations. \item The \mnemonic{aes64im} instruction applies the inverse MixColumns transformation to two columns of the state array, packed into a single 64-bit register. It is used to create the inverse cipher KeySchedule, according to the equivalent inverse cipher construction in \cite[Page 23, Section 5.3.5]{nist:fips:197}. \item The \mnemonic{aes64esm}/\mnemonic{aes64dsm} instructions perform the (Inverse) SubBytes, ShiftRows and MixColumns Transformations. \item The \mnemonic{aes64es}/\mnemonic{aes64ds} instructions perform the (Inverse) SubBytes and ShiftRows Transformations. They are used for the last round only. \item Computing the next round state uses two instructions. The high or low 8 bytes of the next state are selected by swapping the order of the source registers. The following code snippet shows one round of the AES block encryption. {\tt t0} and {\tt t1} hold the current round state. {\tt t2} and {\tt t3} hold the next round state. \begin{lstlisting} aes64esm t2, t0, t1 // ShiftRows, SubBytes, MixColumns bytes 0..7 aes64esm t3, t1, t0 // " " " " 8..15 \end{lstlisting} \end{itemize} This proposal requires $6$ instructions per AES round: $2$ \mnemonic{ld} instructions to load the round key, $2$ \mnemonic{xor} to add the round key to the current state and $2$ of the relevant AES encrypt/decrypt instructions to perform the SubBytes, ShiftRows and MixColumns round functions. An un-rolled AES-128 block encryption with an offline KeySchedule hence requires $69$ instructions in total. These instructions are amenable to macro-op fusion. The recommended sequences are: \begin{lstlisting}[language=pseudo] aes64esm rd1, rs1, rs2 // Different destination registers, aes64esm rd2, rs2, rs1 // identical source registers with swapped order. \end{lstlisting} This is similar to the recommended \mnemonic{mulh}, \mnemonic{mul} sequence in the M extension to compute a full $32*32->64$ bit multiplication result \cite[Section 7.1]{riscv:spec:user}. Unlike the $32$-bit AES instructions, the $64$-bit variants {\em do not} use the Equivalent Inverse Cipher construction \cite[Section 5.3.5]{nist:fips:197}. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{diagrams/aes-rv64-state.png} \caption{ Mapping of AES state between input and output registers for the round instructions. {\tt Rout1} is given by \mnemonic{aes64esm rd, rs1, rs2}, and {\tt Rout2} by \mnemonic{aes64esm rd, rs2, rs1}. The {\tt [Inv]ShiftRows} blocks show how to select the relevant $8$ bytes for further processing from the concatenation {\tt rs2 || \tt rs1}. } \label{fig:aes:rv64:mapping} \end{figure} \begin{figure}[h!] \lstinputlisting[language=sail,firstline=64,lastline=105]{../extern/sail-riscv/model/riscv_insts_kext_rv64.sail} \caption{ RISC-V Sail model specification for the RV64 AES instructions. } \label{fig:pesudo:aes:rv64} \end{figure}
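As an illustration of the six-instruction round pattern described above, a possible sequence for one middle encryption round is sketched below (the register allocation and the round-key layout in memory are arbitrary choices made for this sketch, not part of the specification):

\begin{lstlisting}
    ld       a4,  0(a3)    // load round key, bytes 0..7
    ld       a5,  8(a3)    // load round key, bytes 8..15
    aes64esm t2, t0, t1    // ShiftRows, SubBytes, MixColumns, bytes 0..7
    aes64esm t3, t1, t0    //     "         "          "      bytes 8..15
    xor      t0, t2, a4    // AddRoundKey, bytes 0..7
    xor      t1, t3, a5    // AddRoundKey, bytes 8..15
\end{lstlisting}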
\frame
{
  \frametitle{SaltStack}
  \begin{center}%
    \includegraphics[height=6cm]{images/saltstack_logo.jpg}
  \end{center}%
}

\frame
{
  \frametitle{Buzzwords}
  \begin{itemize}
    \item<1-> ZeroMQ
    \item<2-> Event Based
    \item<3-> Python
    \item<4-> Remote Execution
  \end{itemize}
}

\subsection{The taste of Salt}
\frame
{
  \frametitle{The taste of Salt}
  \begin{center}%
    \Huge The taste of Salt
  \end{center}%
}

\frame
{
  \frametitle{Core Concepts}
  \begin{itemize}
    \item<1-> Master/Minion/Syndic
    \item<2-> PKI
    \item<3-> States
    \item<4-> Grains
    \item<5-> Pillars
    \item<6-> Beacons/Reactors
    \item<7-> Returners/Outputters
  \end{itemize}
}

\frame
{
  \frametitle{Learning Salt}
  \begin{itemize}
    \item<1-> Master: Install 'salt-master'
    \item<2-> Minion: Install 'salt-minion'
    \item<3-> Minion: Point to master IP
    \item<4-> Minion: Restart 'salt-minion'
    \item<5-> Master: Accept minion with 'salt-key'
  \end{itemize}
}

\frame
{
  \frametitle{Ping all}
  \begin{center}%
    \includegraphics[height=3cm]{images/salt_ping.png}
  \end{center}%
}

\subsection{Instant execution}
\frame
{
  \frametitle{Instant execution}
  \begin{center}%
    \Huge Instant execution
  \end{center}%
}

\frame
{
  \frametitle{Instant execution}
  \begin{center}%
    \includegraphics[height=3cm]{images/salt_cmd.png}
  \end{center}%
}

\frame
{
  \frametitle{Instant execution - Dev perspective}
  \begin{center}%
    \includegraphics[height=6cm]{images/unlimited_power.png}
  \end{center}%
}

\frame
{
  \frametitle{Switch perspective}
  \begin{center}%
    Let's look at this from the Ops perspective
  \end{center}%
}

\frame
{
  \frametitle{Instant execution - Ops perspective}
  \begin{itemize}
    \item<1-> sudo salt '*' cmd.run 'rm -rf /'
    \item<2-> sudo salt '*' cmd.run 'poweroff -f'
  \end{itemize}
}

\frame
{
  \frametitle{Instant execution - Ops perspective}
  \begin{center}%
    \includegraphics[height=6cm]{images/accident.png}
  \end{center}%
}

\frame
{
  \frametitle{Security}
  \begin{center}%
    \Huge Secure the Salt Master!
  \end{center}%
}

\frame
{
  \frametitle{Security}
  \begin{itemize}
    \item<1-> M\&M's Security
    \item<2-> Protect your master - at all costs
    \item<3-> Disallow 'cmd.run' if not needed
  \end{itemize}
}
% NB: use pdflatex to compile NOT pdftex. Also make sure youngtab is % there... % converting eps graphics to pdf with ps2pdf generates way too much % whitespace in the resulting pdf, so crop with pdfcrop % cf. http://www.cora.nwra.com/~stockwel/rgspages/pdftips/pdftips.shtml \documentclass[10pt,aspectratio=169,dvipsnames]{beamer} \usetheme[color/block=transparent]{metropolis} \usepackage[absolute,overlay]{textpos} \usepackage{booktabs} \usepackage[utf8]{inputenc} \usepackage{tikz} \usepackage[scale=2]{ccicons} \usepackage[official]{eurosym} %use this to add space between rows \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \newcommand{\R}{\mathbb{R}} \setbeamerfont{alerted text}{series=\bfseries} \setbeamercolor{alerted text}{fg=Mahogany} \setbeamercolor{background canvas}{bg=white} \def\l{\lambda} \def\m{\mu} \def\d{\partial} \def\cL{\mathcal{L}} \def\co2{CO${}_2$} \def\bra#1{\left\langle #1\right|} \def\ket#1{\left| #1\right\rangle} \newcommand{\braket}[2]{\langle #1 | #2 \rangle} \newcommand{\norm}[1]{\left\| #1 \right\|} \def\corr#1{\Big\langle #1 \Big\rangle} \def\corrs#1{\langle #1 \rangle} % for sources http://tex.stackexchange.com/questions/48473/best-way-to-give-sources-of-images-used-in-a-beamer-presentation \setbeamercolor{framesource}{fg=gray} \setbeamerfont{framesource}{size=\tiny} \newcommand{\source}[1]{\begin{textblock*}{5cm}(10.5cm,8.35cm) \begin{beamercolorbox}[ht=0.5cm,right]{framesource} \usebeamerfont{framesource}\usebeamercolor[fg]{framesource} Source: {#1} \end{beamercolorbox} \end{textblock*}} \usepackage{hyperref} \usepackage{tikz} \usepackage[europeanresistors,americaninductors]{circuitikz} %\usepackage[pdftex]{graphicx} \graphicspath{{graphics/}} \DeclareGraphicsExtensions{.pdf,.jpeg,.png,.jpg,.gif} \def\goat#1{{\scriptsize\color{green}{[#1]}}} \let\olditem\item \renewcommand{\item}{% \olditem\vspace{5pt}} \title{Energy System Modelling\\ Summer Semester 2020, Lecture 6} %\subtitle{---} \author{ {\bf Dr. Tom Brown}, \href{mailto:[email protected]}{[email protected]}, \url{https://nworbmot.org/}\\ \emph{Karlsruhe Institute of Technology (KIT), Institute for Automation and Applied Informatics (IAI)} } \date{} \titlegraphic{ \vspace{0cm} \hspace{10cm} \includegraphics[trim=0 0cm 0 0cm,height=1.8cm,clip=true]{kit.png} \vspace{5.1cm} {\footnotesize Unless otherwise stated, graphics and text are Copyright \copyright Tom Brown, 2020. Graphics and text for which no other attribution are given are licensed under a \href{https://creativecommons.org/licenses/by/4.0/}{Creative Commons Attribution 4.0 International Licence}. \ccby} } \begin{document} \maketitle \begin{frame} \frametitle{Table of Contents} \setbeamertemplate{section in toc}[sections numbered] \tableofcontents[hideallsubsections] \end{frame} \section{Optimisation: Motivation} \begin{frame} \frametitle{What to do about variable renewables?} Backup energy costs money and may also cause CO${}_2$ emissions. Curtailing renewable energy is also a waste. We consider \alert{four options} to deal with variable renewables: \begin{enumerate} \item Smoothing stochastic variations of renewable feed-in over \alert{larger areas}, e.g. the whole of European continent. \item Using \alert{storage} to shift energy from times of surplus to deficit. \item \alert{Shifting demand} to different times, when renewables are abundant. \item Consuming the electricity in \alert{other sectors}, e.g. transport or heating. \end{enumerate} \alert{Optimisation} in energy networks is a tool to assess these options. 
\end{frame}

\begin{frame}
  \frametitle{Why optimisation?}

  In the energy system we have lots of \alert{degrees of freedom}:
  \begin{enumerate}
  \item Power plant and storage dispatch
  \item Renewables curtailment
  \item Dispatch of network elements (e.g. High Voltage Direct Current (HVDC) lines)
  \item Capacities of everything when considering investment
  \end{enumerate}
  but we also have to respect \alert{physical constraints}:
  \begin{enumerate}
  \item Meet energy demand
  \item Do not overload generators or storage
  \item Do not overload network
  \end{enumerate}
  and we want to do this while \alert{minimising costs}.

  Solution: \alert{optimisation}.
\end{frame}

\section{Optimisation: Introduction}

\begin{frame}
  \frametitle{Simplest 1-d optimisation problem}

  Consider the following problem. We have a function $f(x)$ of one variable $x \in \mathbb{R}$
  \begin{equation*}
    f(x) = (x-2)^2
  \end{equation*}
  Where does it reach a minimum?

  School technique: find stationary point $\frac{df}{dx} = 2(x-2) = 0$, i.e. minimum at $x^* = 2$ where $f(x^*)=0$.

  \centering
  \includegraphics[width=7.5cm]{quadratic}
\end{frame}

\begin{frame}
  \frametitle{Simplest 1-d optimisation problem}

  Consider the following problem. We have a function $f(x)$ of one variable $x \in \mathbb{R}$
  \begin{equation*}
    f(x) = x^3 - 4x^2 + 3x + 4
  \end{equation*}
  Where does it reach a minimum?

  The school technique fails since there are two stationary points, one local minimum and one local maximum; we must check the 2nd derivative to tell the minimum from the maximum. Also: the function is not bounded as $x \to -\infty$. No solution!

  \centering
  \includegraphics[width=7cm]{cubic}
\end{frame}

\begin{frame}
  \frametitle{Beware saddle points in higher dimensions}

  Some functions have \alert{saddle points} with zero derivative in all directions (stationary points) but that are neither maxima nor minima, e.g. $f(x,y) = x^2 - y^2$ at $(x,y) = (0,0)$.

  \centering
  \includegraphics[width=8cm]{Saddle_point.png}
  \source{Wikipedia}
\end{frame}

\begin{frame}
  \frametitle{Simplest 1-d optimisation problem}

  Consider the following problem. We have a function $f(x)$ of one variable $x \in \mathbb{R}$
  \begin{equation*}
    f(x) = x^4 - 4x^2 + x + 5
  \end{equation*}
  Where does it reach a minimum?

  Now there are two separate local minima. The function is \alert{not convex} downward. This is a problem for algorithms that only search for minima locally.

  \centering
  \includegraphics[width=7.5cm]{quartic}
\end{frame}

\begin{frame}
  \frametitle{Simplest 1-d optimisation problem with constraint}

  Consider the following problem. We minimise a function of one variable $x \in \mathbb{R}$
  \begin{equation*}
    \min_x (x-2)^2
  \end{equation*}
  subject to a constraint
  \begin{equation*}
    x \geq 1
  \end{equation*}
  The constraint has \alert{no effect} on the solution. It is \alert{non-binding}.

  \centering
  \includegraphics[width=7cm]{quadratic-gt1}
\end{frame}

\begin{frame}
  \frametitle{Simplest 1-d optimisation problem with constraint}

  Consider the following problem. We minimise a function of one variable $x \in \mathbb{R}$
  \begin{equation*}
    \min_x (x-2)^2
  \end{equation*}
  subject to a constraint
  \begin{equation*}
    x \geq 3
  \end{equation*}
  Now the constraint is \alert{binding} and is \alert{saturated} at the optimum $x^* = 3$.

  \centering
  \includegraphics[width=7cm]{quadratic-gt3}
\end{frame}

\begin{frame}
  \frametitle{Simple 2-d optimisation problem}

  Consider the following problem.
We have a function $f(x,y)$ of two variables $x,y\in \mathbb{R}$ \begin{equation*} f(x,y) = 3x \end{equation*} and we want to find the maximum of this function in the $x-y$ plane \begin{equation*} \max_{x,y\in \mathbb{R}} f(x,y) \end{equation*} subject to the following constraints \begin{align} x + y & \leq 4 \\ x & \geq 0 \\ y & \geq 1 \end{align} \end{frame} \begin{frame} \frametitle{Simple 2-d optimisation problem} Consider $x-y$ plane of our variables: \centering \includegraphics[width=7cm]{2dsimple-b.pdf} \end{frame} \begin{frame} \frametitle{Simple 2-d optimisation problem} Add constraints (2) and (3): \centering \includegraphics[width=7cm]{2dsimple-c.pdf} \end{frame} \begin{frame} \frametitle{Simple 2-d optimisation problem} Add constraint (1). In this allowed space (white area) what is the maximum of $f(x,y) = 3x$? \centering \includegraphics[width=7cm]{2dsimple-d.pdf} \end{frame} \begin{frame} \frametitle{Simple 2-d optimisation problem} $f(x,y) = 3x$ maximised at $x^* = 3, y^* = 1, f(x^*, y^*) = 9$: \centering \includegraphics[width=7cm]{2dsimple.pdf} \end{frame} \begin{frame} \frametitle{Simple 2-d optimisation problem} Consider the following problem. We have a function $f(x,y)$ of two variables $x,y\in \mathbb{R}$ \begin{equation*} f(x,y) = 3x \end{equation*} and we want to find the maximum of this function in the $x-y$ plane \begin{equation*} \max_{x,y\in \mathbb{R}} f(x,y) \end{equation*} subject to the following constraints \begin{align} x + y & \leq 4 \\ x & \geq 0 \\ y & \geq 1 \end{align} \alert{Optimal solution:} $x^* = 3, y^* = 1, f(x^*,y^*) = 9$. NB: We would have gotten the same solution if we had removed the 2nd constraint - it is \alert{non-binding}. \end{frame} \begin{frame} \frametitle{Another simple optimisation problem} We can also have equality constraints. Consider the maximum of this function in the $x-y-z$ space \begin{equation*} \max_{x,y,z\in \mathbb{R}} f(x,y,z) = (3x + 5z) \end{equation*} subject to the following constraints \begin{align*} x + y & \leq 4 \\ x & \geq 0 \\ y & \geq 1 \\ z & = 2 \end{align*} \pause \alert{Optimal solution:} $x^* = 3, y^* = 1, z^* = 2, f(x^*,y^*,z^*) = 19$. [This problem is \alert{separable}: can solve for $(x,y)$ and $(z)$ separately.] \end{frame} \begin{frame} \frametitle{Energy system mapping to an optimisation problem} This optimisation problem has the same basic form as our energy system considerations: \ra{1.05} \begin{table}[!t] \begin{tabular}{p{6cm}p{0.5cm}p{6cm}} \toprule \alert{Objective function to minimise} & \vspace{.4cm}$\leftrightarrow$ & \alert{Minimise total costs} \\ \alert{Optimisation variables} & \vspace{.4cm} $\leftrightarrow$ & \alert{Physical degrees of freedom (power plant dispatch, etc.)} \\ \alert{Constraints} &\vspace{.4cm} $\leftrightarrow$ & \alert{Physical constraints (overloading, etc.)} \\ \bottomrule \end{tabular} \end{table} Before we apply optimisation to the energy system, we'll do some \alert{theory}. 
\end{frame} \section{Optimisation: Theory} \begin{frame} \frametitle{Optimisation problem} We have an \alert{objective function} $f: \R^k \to \R$ \begin{equation*} \max_{x} f(x) \end{equation*} [$x = (x_1, \dots x_k)$] subject to some \alert{constraints} within $\R^k$: \begin{align*} g_i(x) & = c_i \hspace{1cm}\leftrightarrow\hspace{1cm} \l_i \hspace{1cm} i = 1,\dots n \\ h_j(x) & \leq d_j \hspace{1cm}\leftrightarrow\hspace{1cm} \m_j \hspace{1cm} j = 1,\dots m \end{align*} $\l_i$ and $\m_j$ are the \alert{Karush-Kuhn-Tucker (KKT) multipliers} (basically Lagrange multipliers) we introduce for each constraint equation. Each one measures the change in the objective value of the optimal solution obtained by relaxing the constraint by a small amount. Informally $\l_i \sim \frac{\d f}{\d c_i}$ and $\m_j \sim \frac{\d f}{\d d_j}$ at the optimum $x^*$. They are also known as the \alert{shadow prices} of the constraints. \end{frame} \begin{frame} \frametitle{Feasibility} The space $X \subset \R^k$ which satisfies \begin{align*} g_i(x) & = c_i \hspace{1cm}\leftrightarrow\hspace{1cm} \l_i \hspace{1cm} i = 1,\dots n \\ h_j(x) & \leq d_j \hspace{1cm}\leftrightarrow\hspace{1cm} \m_j \hspace{1cm} j = 1,\dots m \end{align*} is called the \alert{feasible space}. It will have dimension lower than $k$ if there are independent equality constraints. It may also be empty (e.g. for $k=1$, $x \geq 1, x \leq 0$ in $\R^1$), in which case the optimisation problem is called \alert{infeasible}. It can be \alert{convex} or \alert{non-convex}. If all the constraints are affine, then the feasible space is a convex polytope (multi-dimensional polygon). \end{frame} \begin{frame} \frametitle{Convexity means fast polynomial algorithms} If the feasible space is \alert{convex} it is much easier to search, since for a convex objective function we can keep looking in the direction of improving objective function without worrying about getting stuck in a local maximum. \centering \includegraphics[width=12cm]{concave-convex.jpg} \end{frame} \begin{frame} \frametitle{Lagrangian} We now study the \alert{Lagrangian function} \begin{equation*} \cL(x,\l,\m) = f(x) - \sum_i \l_i \left[g_i(x) - c_i\right] - \sum_j \m_j \left[h_j(x) - d_j\right] \end{equation*} We've built this function using the variables $\l_i$ and $\m_j$ to better understand the optimal solution of $f(x)$ given the constraints. The stationary points of $\cL(x,\mathbf{\l},\m)$ tell us important information about the optima of $f(x)$ given the constraints. [It is entirely analogous to the physics Lagrangian $L(x,\dot{x},\l)$ except we have no explicit time dependence $\dot{x}$ and we have additional constraints which are inequalities.] We can already see that if $\frac{\d \cL}{\d \l_i} = 0$ then the equality constraint $g_i(x) = c$ will be satisfied. [Beware: $\pm$ signs appear differently in literature, but have been chosen here such that $\l_i = \frac{\d \cL}{\d c_i}$ and $\m_j = \frac{\d \cL}{\d d_j}$.] 
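\medskip

For example, writing out the Lagrangian of the simple 2-d problem from earlier (with the constraints expressed as $x + y \leq 4$, $-x \leq 0$ and $-y \leq -1$):
\begin{equation*}
  \cL(x,y,\m) = 3x - \m_1 \left(x + y - 4\right) - \m_2 \left(-x\right) - \m_3 \left(-y + 1\right)
\end{equation*}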
\end{frame}

\begin{frame}
  \frametitle{Optimum is a saddle point of the Lagrangian}

  The stationary point of $\cL$ is a saddle point in $(x,\l,\m)$ space (here minimising $f(x)$):

  \centering
  \includegraphics[width=8cm]{conejo-saddle.png}
  \source{Conejo et al, ``Decomposition Techniques'' (2006)}
\end{frame}

\begin{frame}
  \frametitle{KKT conditions}

  The \alert{Karush-Kuhn-Tucker (KKT) conditions} are necessary conditions that an optimal solution $x^*,\m^*,\l^*$ always satisfies (up to some regularity conditions):
  \begin{enumerate}
  \item \alert{Stationarity}: For $\ell = 1,\dots k$
    \begin{equation*}
      \frac{\d \cL}{\d x_\ell} = \frac{\d f}{\d x_\ell} - \sum_i \l_i^* \frac{\d g_i}{\d x_\ell} - \sum_j \m_j^* \frac{\d h_j}{\d x_\ell} = 0
    \end{equation*}
  \item \alert{Primal feasibility}:
    \begin{align*}
      g_i(x^*) & = c_i \\
      h_j(x^*) &\leq d_j
    \end{align*}
  \item \alert{Dual feasibility}: $\m_j^* \geq 0$
  \item \alert{Complementary slackness}: $\m_j^* (h_j(x^*) - d_j) = 0$
  \end{enumerate}
\end{frame}

\begin{frame}
  \frametitle{Complementary slackness for inequality constraints}

  We have for each inequality constraint
  \begin{align*}
    \m_j^* & \geq 0 \\
    \m_j^*(h_j(x^*) - d_j) & = 0
  \end{align*}
  So \alert{either} the inequality constraint is binding
  \begin{align*}
    h_j(x^*) = d_j
  \end{align*}
  and we have $\m_j^* \geq 0$. \alert{Or} the inequality constraint is NOT binding
  \begin{align*}
    h_j(x^*) < d_j
  \end{align*}
  and we therefore MUST have $\m_j^* = 0$.

  If the inequality constraint is non-binding, we can remove it from the optimisation problem, since it has no effect on the optimal solution.
\end{frame}

\begin{frame}
  \frametitle{Nota Bene}
  \begin{enumerate}
  \item The KKT conditions are necessary conditions for an optimal solution, but are only \alert{sufficient} for optimality of the solution under certain conditions, e.g. for problems with a convex objective, convex differentiable inequality constraints and affine equality constraints. For linear problems, KKT is sufficient.
  \item The variables $x_\ell$ are often called the \alert{primal variables}, while $(\l_i,\m_j)$ are the \alert{dual variables}.
  \item Since at the optimal solution we have $g_i(x^*) = c_i$ for equality constraints and $\m_j^*(h_j(x^*) - d_j) = 0$, we have
    \begin{equation*}
      \cL(x^*,\l^*,\m^*) = f(x^*)
    \end{equation*}
  \end{enumerate}
\end{frame}

\begin{frame}
  \frametitle{How we will use the KKT conditions}

  Usually we will have enough constraints to determine the $k$ values $x_\ell^*$ for $\ell=1,\dots k$ uniquely, i.e. $k$ independent constraints will be binding and the objective function is never constant along any constraint.

  We will use the KKT conditions, primarily stationarity, to determine the values of the $k$ KKT multipliers for the independent binding constraints.

  \alert{Dimensionality check}: we need to find $k$ KKT multipliers and we have $k$ equations from stationarity to find them. Good!

  The remaining KKT multipliers are either zero (for non-binding constraints) or dependent on the $k$ independent KKT multipliers in the case of dependent constraints.

  (There are also degenerate cases where the optimum is not at a single point, where things will be more complicated, e.g. when the objective function is constant along a constraint.)
\end{frame}

\begin{frame}
  \frametitle{Return to simple optimisation problem}

  We want to find the maximum of this function in the $x-y$ plane
  \begin{equation*}
    \max_{x,y\in \mathbb{R}} f(x,y) = 3x
  \end{equation*}
  subject to the following constraints (now with KKT multipliers)
  \begin{align*}
    x + y & \leq 4 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_1 \\
    -x & \leq 0 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_2\\
    -y & \leq -1 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_3
  \end{align*}
  We know the optimal solution in the \alert{primal variables} $x^* = 3, y^* = 1, f(x^*,y^*) = 9$. What about the \alert{dual variables} $\m_i$?

  Since the second constraint is not binding, by complementarity $\m_2^*(-x^* - 0) = 0$ we have $\m_2^* = 0$. To find $\m_1^*$ and $\m_3^*$ we have to do more work.
\end{frame}

\begin{frame}
  \frametitle{Simple problem with KKT conditions}

  We use stationarity for the optimal point:
  \begin{align*}
    0 & = \frac{\d \cL}{\d x } = \frac{\d f}{\d x} - \sum_i \l_i^* \frac{\d g_i}{\d x} - \sum_j \m_j^* \frac{\d h_j}{\d x} = 3 - \m_1^* + \m_2^* \\
    0 & = \frac{\d \cL}{\d y } = \frac{\d f}{\d y} - \sum_i \l_i^* \frac{\d g_i}{\d y} - \sum_j \m_j^* \frac{\d h_j}{\d y} = - \m_1^* + \m_3^*
  \end{align*}
  From which we find:
  \begin{align*}
    \m_1^* & = 3 - \m_2^* = 3 \\
    \m_3^* & = \m_1^* = 3
  \end{align*}
  Check interpretation: $\m_j = \frac{\d \cL}{\d d_j}$ with $d_j \to d_j + \varepsilon$.
\end{frame}

\begin{frame}
  \frametitle{Simple problem with KKT conditions: Check interpretation}

  Check the interpretation of $\m_1^* = 3$ by shifting the constant $d_1$ of the first constraint by $\varepsilon$ and solving:
  \begin{equation*}
    \max_{x,y\in \mathbb{R}} f(x,y) = 3x
  \end{equation*}
  subject to the following constraints
  \begin{align*}
    x + y & \leq 4 + \varepsilon \hspace{1cm}\leftrightarrow\hspace{1cm} \m_1 \\
    -x & \leq 0 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_2\\
    -y & \leq -1 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_3
  \end{align*}
\end{frame}

\begin{frame}
  \frametitle{Simple problem with KKT conditions: Check interpretation}

  $f(x,y) = 3x$ maximised at $x^* = 3+\varepsilon, y^* = 1, f(x^*, y^*) = 9+3\varepsilon$.

  $d_1 \to d_1 + \varepsilon$ causes the optimum to shift $f(x^*, y^*) \to f(x^*, y^*) + 3\varepsilon$. Consistent with $\m_1^* = 3$.

  \centering
  \includegraphics[width=7cm]{2dsimple-v1.pdf}
\end{frame}

\begin{frame}
  \frametitle{Return to another simple optimisation problem}

  We want to find the maximum of this function in the $x-y-z$ space
  \begin{equation*}
    \max_{x,y,z\in \mathbb{R}} f(x,y,z) = 3x + 5z
  \end{equation*}
  subject to the following constraints (now with KKT multipliers)
  \begin{align*}
    x + y & \leq 4 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_1 \\
    -x & \leq 0 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_2\\
    -y & \leq -1 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_3 \\
    z & = 2 \hspace{1cm}\leftrightarrow\hspace{1cm} \l
  \end{align*}
  We know the optimal solution in the \alert{primal variables} $x^* = 3, y^* = 1, z^* = 2, f(x^*,y^*,z^*) = 19$. What about the \alert{dual variables} $\m_i,\l$?

  We get the same solutions $\m_1^* = 3, \m_2^* = 0, \m_3^* = 3$ because they are not coupled to the $z$ direction. What about $\l^*$?
\end{frame}

\begin{frame}
  \frametitle{Another simple problem with KKT conditions}

  We use stationarity for the optimal point:
  \begin{align*}
    0 & = \frac{\d \cL}{\d z } = \frac{\d f}{\d z} - \sum_i \l_i^* \frac{\d g_i}{\d z} - \sum_j \m_j^* \frac{\d h_j}{\d z} = 5 - \l^*
  \end{align*}
  From which we find:
  \begin{align*}
    \l^* & = 5
  \end{align*}
  Check interpretation: $\l_i = \frac{\d \cL}{\d c_i}$ with $c_i \to c_i + \varepsilon$.
\end{frame}

\begin{frame}
  \frametitle{An example for you to do}

  Find the values of $x^*,y^*,\m_i^*$
  \begin{equation*}
    \max_{x,y\in \mathbb{R}} f(x,y) = y
  \end{equation*}
  subject to the following constraints
  \begin{align*}
    y + x^2 & \leq 4 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_1 \\
    y - 3x & \leq 0 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_2\\
    -y & \leq 0 \hspace{1cm}\leftrightarrow\hspace{1cm} \m_3
  \end{align*}
\end{frame}

\section{Optimisation: Solution Algorithms}

\begin{frame}
  \frametitle{Optimisation solution algorithms}

  In general finding the solution to optimisation problems is hard, at worst $NP$-hard. Non-linear, non-convex and/or discrete (i.e. some variables can only take discrete values) problems are particularly troublesome.

  There is specialised software for solving particular classes of problems (linear, quadratic, discrete etc.).

  Since we will mostly focus on linear problems, the two main algorithms of relevance are:
  \begin{itemize}
  \item The \alert{simplex algorithm}
  \item The \alert{interior-point algorithm}
  \end{itemize}
\end{frame}

\begin{frame}
  \frametitle{Simplex algorithm}
  \begin{columns}[T]
    \begin{column}{6.5cm}
      \includegraphics[width=6cm]{480px-Simplex-method-3-dimensions.png}
    \end{column}
    \begin{column}{7.5cm}
      The simplex algorithm works for linear problems by building the feasible space, which is a multi-dimensional polyhedron, and searching its surface for the solution.

      If the problem has a solution, the optimum can be assumed to always occur at (at least) one of the vertices of the polyhedron. There is a finite number of vertices.

      The algorithm starts at a feasible vertex. If it's not the optimum, the objective function will increase along one of the edges leading away from the vertex. Follow that edge to the next vertex. Repeat until the optimum is found.

      \alert{Complexity:} On \emph{average} over a given set of problems it can solve in polynomial time, but worst cases can always be found that take exponential time.
    \end{column}
  \end{columns}
  \source{\href{https://en.wikipedia.org/wiki/Simplex_algorithm\#/media/File:Simplex-method-3-dimensions.png}{Wikipedia}}
\end{frame}

\begin{frame}
  \frametitle{Interior point methods}
  \begin{columns}[T]
    \begin{column}{6.5cm}
      \includegraphics[width=7.5cm]{1024px-Karmarkar.png}
    \end{column}
    \begin{column}{7.5cm}
      Interior point methods can be used on more general non-linear problems.

      They search the interior of the feasible space rather than its surface. They achieve this by extremising the objective function plus a \alert{barrier term} that penalises solutions that come close to the boundary. As the penalty becomes less severe the algorithm converges to the optimum point at the boundary.

      \vspace{.5cm}

      \alert{Complexity:} For linear problems, Karmarkar's version of the interior point method can run in polynomial time.
\end{column} \end{columns} \source{\href{https://en.wikipedia.org/wiki/Interior-point_method\#/media/File:Karmarkar.svg}{Wikipedia}} \end{frame} \begin{frame} \frametitle{Interior point methods: Barrier method} Take a problem \begin{equation*} \min_{\{x_i, i=1,\dots n\}} f(x) \end{equation*} such that for \begin{align*} c_j(x) & = 0 \leftrightarrow \l_j, j = 1\dots k \\ x & \geq 0 \end{align*} Any optimisation problem can be brought into this form. Introduce the \alert{barrier function} \begin{equation*} B(x,\mu) = f(x) - \m \sum_{i=1}^n \ln(x_i) \end{equation*} where $\mu$ is the small and positive \alert{barrier parameter} (a scalar). Note that the barrier term penalises solutions when $x$ comes close to 0 by becoming large and positive. \end{frame} \begin{frame} \frametitle{Interior point methods: Barrier method} Barrier term $-\m ln(x)$ penalises the minimisation the closer we get to $x=0$. As $\mu$ gets smaller it converges on being a near-vertical function at $x=0$. \centering \includegraphics[width=7.5cm]{barrier-term.pdf} \end{frame} \begin{frame} \frametitle{Interior point methods: Barrier method: 1-d example} Return to our old 1-d example. We minimise a function of one variable $x \in \mathbb{R}$ \begin{equation*} \min_x (x-2)^2 \end{equation*} subject to a constraint \begin{equation*} x \geq 3 \end{equation*} Solution: $x^* = 3$. \centering \includegraphics[width=7cm]{quadratic-gt3} \end{frame} \begin{frame} \frametitle{Interior point methods: Barrier method: 1-d example} Now instead minimise the barrier problem without any constraint: \begin{equation*} \min_x B(x,\mu) = (x-2)^2 - \m \ln(x-3) \end{equation*} Solve $\frac{\d B(x,\mu)}{\d x} = 2(x-2) -\frac{\mu}{x-3} = 0$, i.e. at $x^* = 2.5 + 0.5\sqrt{1+2\mu} \to 3$ as $\mu \to 0$. \centering \includegraphics[width=8.5cm]{quadratic-barrier} \end{frame} \begin{frame} \frametitle{Interior point methods: Barrier method} The problem \begin{equation*} \min_{\{x_i, i=1,\dots n\}} \left[ f(x) - \m \sum_{i=1}^n \ln(x_i) \right] \end{equation*} such that \begin{align*} c_j(x) & = 0 \leftrightarrow \l_j, j = 1\dots k \end{align*} can now be solved using the extremisation of the Lagrangian like we did for KKT sufficiency. Solve the following equation system iteratively using the Newton method to find the $x_i$ and $\lambda_j$: \begin{align*} \nabla_i f(x) - \m \frac{1}{x_i} + \sum_j \lambda_j \nabla_i c_j(x) & = 0 \\ c_j(x) & = 0 \end{align*} See this \href{https://www.youtube.com/watch?v=zm4mfr-QT1E}{nice video} for more details and visuals. \end{frame} \end{document}
\chapter{Project Status and Plans for Future Action}

\section{Happened since last meeting (/04/2022)}
\begin{itemize}
    \item Calculation of Homography (based on manually selected matched keypoints)
    \item Decomposing homography into Rotation matrix, translation vector and normal vector
    \item Calculation of Essential matrix (based on manually selected matched keypoints)
    \item Decomposing essential matrix into Rotation matrix and translation vector
    \item Translating rotation matrix into Euler angles
    \item Visualisation of rotation angles and translation
\end{itemize}

\section{Ongoing work}
\begin{itemize}
    \item Deciding which of the candidate rotation/translation combinations is the right one (see the sketch at the end of this chapter)
    \item Taking the motion parameters, assuming they remain the same for the next pair of frames, and thus making predictions for
    \begin{itemize}
        \item the horizon line
        \item the approximate position of a point tracked from image N to N+1 into frame N+2
    \end{itemize}
\end{itemize}

\section{Long Term Plans and Ideas}
\begin{itemize}
    \item
\end{itemize}

\section{TODO}
\begin{itemize}
    \item
\end{itemize}
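For reference, here is a minimal sketch of the decomposition steps listed above, assuming an OpenCV-based Python pipeline (\texttt{K} is the camera intrinsic matrix and \texttt{pts1}, \texttt{pts2} are the manually matched keypoints; this is an illustrative setup, not the actual project code):

\begin{verbatim}
import cv2
from scipy.spatial.transform import Rotation

# pts1, pts2: Nx2 float arrays of matched keypoints; K: 3x3 intrinsics
H, _ = cv2.findHomography(pts1, pts2)
n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)  # up to 4 candidates

E, _ = cv2.findEssentialMat(pts1, pts2, K)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)  # cheirality check selects R, t

euler_deg = Rotation.from_matrix(R).as_euler("xyz", degrees=True)
\end{verbatim}

Note that \texttt{cv2.recoverPose} already resolves the rotation/translation ambiguity via a cheirality check, which is one option for the open point listed under ``Ongoing work''.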
\documentclass[10pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{hyperref} \usepackage{graphicx} \graphicspath{ {./images/} } \usepackage{amsmath} \DeclareMathOperator*{\argmax}{arg\,max} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, pdftitle={Overleaf Example}, pdfpagemode=FullScreen, } \title{Chapter 1 - Introduction Exercises} \author{Stéphane Liem NGUYEN} \begin{document} \maketitle Exercises with (\textbf{\textit{corrected}}) were corrected based on the \href{http://incompleteideas.net/book/errata.html}{Errata}. These are my own answers and mistakes or errors are possible. \paragraph{\textit{Exercise 1.1: Self-Play} (p. 12)} Suppose, instead of playing against a random opponent, the reinforcement learning algorithm described above played against itself, with both sides learning. What do you think would happen in this case? Would it learn a different policy for selecting moves? \bigskip In this case, if we suppose that draws are better than defeats, I would think that the policies will both converge towards the same optimal one where all outcomes are draws. However, from the book, it's written than draws are assumed to be equally bad as defeats and I would think that for this case, it might fail to arrive at the solution where all outcomes are draws. The players might sometimes win, sometimes lose and sometimes it's a draw. In both cases, the policies learned are different from the one learned against the random imperfect opponent because the "backups" from two successive states $S_t$ and $S_{t+1}$ depends on how the opponent behaves. Both sides might learn also similar policies, both trying to win against their opponent and the solution will maybe converge towards a minimax solution. \textbf{TODO: maybe program it to observe in numerically} \paragraph{\textit{Exercise 1.2: Symmetries} (p. 12)} Many tic-tac-toe positions appear different but are really the same because of symmetries. How might we amend the learning process described above to take advantage of this? In what ways would this change improve the learning process? Now think again. Suppose the opponent did not take advantage of symmetries. In that case, should we? Is it true, then, that symmetrically equivalent positions should necessarily have the same value? \bigskip To take advantage of symmetries, we can maybe convert any state or board configurations into some canonical form. For example, for any given board configuration, by rotating the board, we have in total maximum $4$ symmetrical states ($3$ other than the given configuration) from this transformation. We can also flip the board left-right (reflection) then apply rotations and it can give a maximum of $4$ additional symmetrical states. In practice, there are probably many ways to efficiently have save, compare and use the canonical state configurations and a non-efficient way would be to save each configuration encountered and each time, compare if the current state is symmetrical to the ones we already know. This change might improve the learning process by reducing the amount of samples required to train the agent because updates would no longer be spread into the other symmetrical states. If the opponent does not take advantage of symmetries, it is not really the problem of our agent that the opponent is less performant so I would think that we should continue taking advantage of symmetries. 
Intuitively, a state that is symmetrical to another one (by applying the transformations cited previously) should not change the basis on which the agents take their actions. In other words, values should be the same for symmetrically equivalent positions (the expected return from states, not the estimates; but the estimates should converge).

\paragraph{\textit{Exercise 1.3: Greedy Play} (p. 12)} Suppose the reinforcement learning player was \textit{greedy}, that is, it always played the move that brought it to the position that it rated the best. Might it learn to play better, or worse, than a nongreedy player? What problems might occur?

\bigskip
If the RL player was greedy with respect to the value function, it would play worse in the limit than an epsilon-greedy algorithm with $\epsilon$ decaying over time (Greedy in the Limit with Infinite Exploration, see David Silver's lecture on exploration-exploitation). The greedy algorithm might get stuck in a suboptimal behavior because it does not try to improve the estimates of the values of other actions that can be potentially better.

In general it also depends on what \textit{nongreedy player} means. For instance, if the opponent is trying to lose all the time, we're unsure if the RL player would be better or not, etc.

\paragraph{\textit{Exercise 1.4: Learning from Exploration} (p. 13)} Suppose learning updates occurred after \textit{all} moves, including exploratory moves. If the step-size parameter is appropriately reduced over time (but not the tendency to explore), then the state values would converge to a different set of probabilities. What (conceptually) are the two sets of probabilities computed when we do, and when we do not, learn from exploratory moves? Assuming that we do continue to make exploratory moves, which set of probabilities might be better to learn? Which would result in more wins?

\bigskip
Let's first recall the TD learning update rule
\begin{equation}
V(S_t) \leftarrow V(S_t) + \alpha \left[V(S_{t+1}) - V(S_t)\right]
\end{equation}
where $\alpha$ is a small positive step-size parameter. When learning updates occur only after greedy moves, $S_{t+1}$ is the state after the greedy move and $S_{t}$ is the state before the greedy move. In the set of probabilities learned in this case, the value of a state would converge to the probability of winning by our player if he follows the optimal greedy policy without any exploration. In other words, the target policy is greedy while the behavior policy (how the agent interacts with the environment to gather information) has some exploration in it.

In contrast, learning updates occurring after all moves including exploratory ones would be like \textit{on-policy} instead of \textit{off-policy}. For each state, the value would converge to the probability of winning by our player if he follows the target policy as the behavior policy (the target policy is the same as the behavior policy and includes exploratory moves) from that state.

%After learning the value function when we do not learn from exploratory moves, if the player follows the target policy (no exploration), then the player would obtain the most rewards.

\bigskip
Let's take another problem to illustrate what can be better in what scenario. Let's say that the agent has to go from a starting point to an end point with a cliff in the middle. We suppose that actions deterministically move the agent to the next square or do not move the agent if he tries to move into a wall or the edge of the grid world.
Let's also suppose that the behavior policy is most of the time taking greedy actions with respect to the value function and sometimes exploratory actions. Intuitively, if we want to behave optimally while still having the tendency to randomly explore, we need to avoid getting too close to the cliff. To do so, we have to learn from all moves, so including exploratory ones. On the other hand, if we just want to behave optimally without the tendency to randomly explore, we can go close to the cliff for the shortest path because the agent won't have the risk of falling. Coming back to our problem, after learning the optimal value function when we do not learn from exploratory moves, if the player follows the target policy (no exploration) that is greedy with respect to the optimal value function, then the player would obtain the most rewards. However, if we want to get more rewards while interacting with the environment with the behavior policy that has the tendency to explore, we should go for the update rule including exploratory moves. %In this problem, an update rule without exploratory moves will make %the agent learn a greedy policy telling him to potentially go for instance straight near the cliff without any tendency to explore. If we wanted to obtain more reward while learning, because the behavior policy has a tendency to explore, we would prefer to not go as close to the cliff. %Coming back to our problem, after learning the value function when we do not learn from exploratory moves, if the player follows the target policy (no exploration), then the player would obtain the most rewards. % %However, the method where we learn from exploratory moves might result in more wins when the player was learning \textbf{to ADD. but maybe not afterwards} \textbf{TODO: code and observe} \paragraph{\textit{Exercise 1.5: Other Improvements} (p. 13)} Can you think of other ways to improve the reinforcement learning player? Can you think of any better way to solve the tic-tac-toe problem as posed? \bigskip One way to maybe improve the RL player might be to specify a different reward for losing and for draws so that they're not equally bad. \end{document}
\documentclass[12pt]{article}

\author{Ian Westrope}
\title{Evolutionary Computation Homework One}
\date{\today}

\begin{document}

\maketitle

\section{Introduction}
For our first homework we had to try to answer a few questions. First we had to find the point that gave the best answer to the fitness equation given in the homework. But this isn't really too hard since we already know what the best point is: (1, 3). The real questions we wanted to answer were how the way you store your data and the mutation operators you use on that data affect your results.

\section{Methods}
To store our x and y coordinates we used an unsigned long long int and used the last 20 bits: 10 for x and 10 for y. The points were either stored as Gray code or as normal binary. Three different mutation options were used. Random, where the old point was tossed aside and a new random point was found. Bit flipping, where one of the 20 bits was flipped randomly. And increment/decrement, where either the x or the y value had one added to or subtracted from it. We then used a simple local search and the given fitness equation to see if we could find the best point. In total there were six experiments.

\section{Conclusions}
The first experiment was Gray code with a random mutation. It had a mean fitness of .99, which is good, but it took a mean of 4939 times through the loop to find the best point. Binary with random mutation produced about the same results. Gray code with bit flipping had a mean fitness of 1 and a mean of 129 times through the loop to find the best point, which is much faster than with the random mutation. Binary with bit flipping, on the other hand, had a mean fitness of .57, which means half the time it didn't even find the best point. Gray code with the increment/decrement mutation had a mean fitness of .06, so it almost never found the maximum, whereas binary with increment/decrement had a mean fitness of 1 and a mean of 1971 times through the loop to find the best point.

From these results we can see that a random mutation can find the maximum if given enough time, but if we really want to find the maximum fast we should use Gray code with bit flipping or binary with increment/decrement. The reason Gray code didn't handle increment/decrement well was because we didn't de-Gray the values before adding or subtracting one, so we weren't actually adding or subtracting one from the value it was representing. And the reason binary didn't perform well with bit flipping is that most of the time you need to flip more than one bit to get to the number right next to you, so flipping one bit made big jumps.

The different experiments had different landscapes. The ones with high fitness had a landscape that was pretty smooth with one hill in it. But the ones with worse fitness had landscapes that were much rougher, and they would get stuck on their local maximum, never finding the global maximum.

\end{document}
\documentclass{llncs} \usepackage[T1]{fontenc} \usepackage[utf8x]{inputenc} \usepackage[english]{babel} \usepackage{lmodern} \usepackage{mathtools} %\usepackage{fullpage} \usepackage{graphicx} \usepackage{xspace} \usepackage{tabularx} \usepackage[lf]{ebgaramond} \usepackage{biolinum} \usepackage[cmintegrals,cmbraces]{newtxmath} \usepackage{ebgaramond-maths} \usepackage{sectsty} \usepackage[noend]{algpseudocode} \usepackage{algorithm} \usepackage{algorithmicx} \usepackage[bottom]{footmisc} \usepackage{caption} %\captionsetup{font=small} \captionsetup{labelfont={sf,bf}} \PassOptionsToPackage{hyphens}{url}\usepackage{hyperref} \makeatletter % Recover some math symbols that were masked by eb-garamond, but do not % have replacement definitions. \DeclareSymbolFont{ntxletters}{OML}{ntxmi}{m}{it} \SetSymbolFont{ntxletters}{bold}{OML}{ntxmi}{b}{it} \re@DeclareMathSymbol{\leftharpoonup}{\mathrel}{ntxletters}{"28} \re@DeclareMathSymbol{\leftharpoondown}{\mathrel}{ntxletters}{"29} \re@DeclareMathSymbol{\rightharpoonup}{\mathrel}{ntxletters}{"2A} \re@DeclareMathSymbol{\rightharpoondown}{\mathrel}{ntxletters}{"2B} \re@DeclareMathSymbol{\triangleleft}{\mathbin}{ntxletters}{"2F} \re@DeclareMathSymbol{\triangleright}{\mathbin}{ntxletters}{"2E} \re@DeclareMathSymbol{\partial}{\mathord}{ntxletters}{"40} \re@DeclareMathSymbol{\flat}{\mathord}{ntxletters}{"5B} \re@DeclareMathSymbol{\natural}{\mathord}{ntxletters}{"5C} \re@DeclareMathSymbol{\star}{\mathbin}{ntxletters}{"3F} \re@DeclareMathSymbol{\smile}{\mathrel}{ntxletters}{"5E} \re@DeclareMathSymbol{\frown}{\mathrel}{ntxletters}{"5F} \re@DeclareMathSymbol{\sharp}{\mathord}{ntxletters}{"5D} \re@DeclareMathAccent{\vec}{\mathord}{ntxletters}{"7E} % Change font for algorithm label. \renewcommand\ALG@name{\sffamily\bfseries Algorithm} \makeatother \allsectionsfont{\sffamily} \pagestyle{plain} \makeatletter \renewenvironment{abstract}{% \list{}{\advance\topsep by0.35cm\relax\small \leftmargin=1cm \labelwidth=\z@ \listparindent=\z@ \itemindent\listparindent \rightmargin\leftmargin}\item[\hskip\labelsep \textsf{\textbf{\abstractname}}]} {\endlist} \newenvironment{extranote}{% \list{}{\advance\topsep by0.35cm\relax\small \leftmargin=1cm \labelwidth=\z@ \listparindent=\z@ \itemindent\listparindent \rightmargin\leftmargin}\item[\hskip\labelsep \textsf{\textbf{Note.}}]} {\endlist} \makeatother \spnewtheorem{mtheorem}{Theorem}{\sffamily\bfseries}{\itshape} \spnewtheorem*{mproof}{Proof}{\sffamily\bfseries\itshape}{\rmfamily} %\newcommand{\GF}{\mathrm{\textit{GF}}} \newcommand{\GF}{GF} \newcommand{\bN}{\mathbb{N}} \newcommand{\bZ}{\mathbb{Z}} \newcommand{\neutral}{\mathbb{O}} \raggedbottom \begin{document} \title{\textsf{Efficient Elliptic Curve Operations On Microcontrollers With Finite Field Extensions}} \author{Thomas Pornin} \institute{NCC Group, \email{[email protected]}} \maketitle \noindent\makebox[\textwidth]{3 January 2020} \begin{abstract} In order to obtain an efficient elliptic curve with 128-bit security and a prime order, we explore the use of finite fields $\GF(p^n)$, with $p$ a small modulus (less than $2^{16}$) and $n$ a prime. Such finite fields allow for an efficient inversion algorithm due to Itoh and Tsujii, which we can leverage to make computations on an ordinary curve (short Weierstraß equation) in affine coordinates. We describe a very efficient variant of Montgomery reduction for computations modulo $p$, and choose $p = 9767$ and $n = 19$ to better map the abilities of small microcontrollers of the ARM Cortex-M0+ class. 
Inversion cost is only six times the cost of multiplication. Our fully constant-time implementation of curve point multiplication runs in about 4.5 million cycles (only 1.29 times slower than the best reported Curve25519 implementations); it also allows for efficient key pair generation (about 1.9 million cycles) and Schnorr signature verification (about 5.6 million cycles). Moreover, we describe variants of the Itoh-Tsujii algorithms that allow fast computations of square roots and cube roots (in less than twenty times the cost of a multiplication), leading to efficient point compression and constant-time hash-to-curve operations with Icart's map. \end{abstract} \section{Introduction}\label{sec:intro} This article explores the use of affine coordinates for fast and secure implementation of an elliptic curve defined over a carefully chosen finite field. \paragraph{Affine Coordinates.} In a finite field $\GF(p^n)$ with characteristic $p \ge 5$ and extension degree $n\ge 1$, all elliptic curves can be expressed as sets of points $(x,y) \in \GF(p^n)\times\GF(p^n)$ that fulfill the short Weierstraß equation: \begin{equation*} y^2 = x^3 + ax + b \end{equation*} for two constants $a$ and $b$ in $\GF(p^n)$ such that $4a^3 + 27b^2 \neq 0$; an extra point (denoted $\neutral$), called the ``point at infinity'' and with no defined coordinates, is adjoined to the curve and serves as neutral element for the group law on the curve. The formulas for point addition are well-known: for points $Q_1 = (x_1, y_1)$ and $Q_2 = (x_2, y_2)$, we have: \begin{eqnarray*} - Q_1 &=& (x_1, -y_1) \\ Q_1 + Q_2 &=& (x_3, y_3) \\ &=& (\lambda^2 - x_1 - x_2, \lambda (x_1 - x_3) - y_1), \mathrm{where{:}} \\ \lambda &=& \begin{dcases} \frac{y_2 - y_1}{x_2 - x_1} & \quad\text{if $Q_1 \neq \pm Q_2$} \\ \frac{3x_1^2 + a}{2y_1} & \quad\text{if $Q_1 = Q_2$} \\ \end{dcases}\\ \end{eqnarray*} The quantity $\lambda$ is the slope of the line that contains $Q_1$ and $Q_2$. The representation of a point $Q$ as two field elements $x$ and $y$ is called \emph{affine coordinates}. \paragraph{Fractions and Coordinate Systems.} Computing a point addition with these formulas involves a division in $\GF(p^n)$. This operation is usually quite expensive, traditionally estimated at about 80 times the cost of a multiplication in the field. For much better performance, it is customary to replace coordinates with fractions, which makes inversion free (the numerator and denominator are swapped) but increases the cost of multiplications (both numerators and denominators must be multiplied), and greatly increases the cost of additions (in all generality, the addition of two fractions involves three multiplications and one addition in the field). The idea is that if a large number of curve point additions are to be computed (e.g. as part of a point multiplication routine), then the whole computation can be done with fractions, and only at the end are divisions needed to obtain the affine result. The fractions are often expressed as systems of coordinates, such as \emph{projective coordinates}, in which point $(x,y)$ is represented by the triplet $(X{:}Y{:}Z)$ such that $x = X/Z$ and $y = Y/Z$; in essence, these are fractions such that the same denominator is used for $x$ and $y$. Another popular choice is \emph{Jacobian coordinates} that instead use $x = X/Z^2$ and $y = Y/Z^3$. 
Vast amounts of research efforts have been invested in finding systems of coordinates and point addition formulas that minimize cost (see \cite{EFD} for a database of such formulas)\footnote{Coordinate systems can also be interpreted geometrically; e.g. projective coordinates are part of the generic treatment of projective geometry, while Jacobian coordinates can be seen as jumps between isomorphic curves. However, from an implementation performance point of view, what ultimately matters is that most divisions are avoided, and the number of multiplications is minimized.}. \paragraph{Alternate Curve Types and Cofactors.} An additional strategy to optimize performance of elliptic curve operations has been to seek alternate curve equations that, combined with an adequate system of coordinates, yield formulas with fewer multiplications and squarings. In particular, Montgomery curves ($by^2 = x^3 + ax^2 + x$) and twisted Edwards curves ($ax^2 + y^2 = 1 + dx^2y^2$) offer better performance than short Weierstraß curves. However, this comes at a price: these faster curves cannot have a prime order, since they necessarily contain points of order 2. These curves are chosen to have order $hm$, where $m$ is a large prime, and $h$ is the \emph{cofactor}. For the well-known curves Curve25519 (Montgomery curve) and Edwards25519 (a twisted Edwards curve which is birationally equivalent to Curve25519), the cofactor is $h = 8$. An unfortunate consequence is that some extra complexity is induced in protocols that use such curves, in order to avoid subtle weaknesses. For instance, in the EdDSA algorithm, as specified in RFC 8032\cite{EdDSArfc8032} (and following the original definition paper~\cite{BerDuiLanSchYan2012}, which already contains that assertion), the verification process of a signature ends with the following item: \begin{quote} \textit{Check the group equation $8sB = 8R + 8kA'$. It's sufficient, but not required, to instead check $sB = R + kA'$.} \end{quote} This means that it is possible to (maliciously) craft a public/private key pair, and a signature on some message, such that some implementations will use the first equation and accept the signature, and others will use the second equation and reject it. This does not contradict the core properties of signature algorithms, but it is sufficient to induce forks in distributed applications that rely on several systems following consensus rules and accepting or rejecting exactly the same data elements. More severe issues coming from non-trivial cofactors have also been reported (e.g. \cite{MoneroBug2017}; see also \cite{CreJac2019}). In general, most if not all protocols that use elliptic curves can be made safe against such issues by sprinkling some multiplications by the cofactor here and there, but the exact analysis is complex and fraught with subtle details. It follows that, all other things being equal, having a prime order (i.e. cofactor $h = 1$) is a much desirable property. \paragraph{Prime Order Curve Strategies.} From a twisted Edwards curve with order $8m$ (for a big prime $m$), a group of order $m$ can be obtained with the Ristretto map, designed by M.~Hamburg (see~\cite{RistrettoWeb} for details). Group elements are internally represented by points on the curve, but the map encoding and decoding processes ensure proper filtering and normalization. Compared to operations on the curve itself, the map implies a small but nonnegligible computational overhead. 
In \cite{SchSpr2019}, Schwabe and Sprenkels explore the overhead implied by the use of a prime-order short Weierstraß curve, through performing benchmarks on such a curve defined over the same field as Curve25519 and Edwards25519. They rely on traditional projective coordinates, along with formulas described in \cite{RenCosBat2015}; these formulas are not the fastest available, but they are \emph{complete}, i.e. they produce the correct results on all inputs with no special cases. They obtain, as expected, worse performance than Curve25519 and Edwards25519. In this paper, we explore an alternate avenue. As explained above, all the efforts on formulas, coordinate systems and curve equations take place under the assumption that field inversions are desperately inefficient and only one or two such inversions may happen over a full curve point multiplication. Here, we instead focus on finding a finite field where operations are efficient, and in particular such that inversions are not especially slow. We will show in the following sections a finite field appropriate for defining secure curves, such that additions, multiplications and inversions are fast. Our chosen field is $\GF(9767^{19})$; on our target implementation platform (the ARM Cortex-M0+), multiplications and squarings are of similar performance to the best reported implementations in finite field $\GF(2^{255}-19)$ (the finite field used in Curve25519), while inversion cost is about six times the cost of multiplication (6M). We also obtain fast quadratic residue test (5.9M), square root extraction (17.1M) and cube root extraction (19.8M), allowing for very efficient point compression and hash-to-curve operations. \paragraph{Yet Another Curve?} Elliptic-curve cryptography is already rich with many curves, in fact too many for comfort. Implementing elliptic curve operations generically is possible, but usually yields substantially lower performance than implementations optimized for a specific finite field, and a specific curve equation. Generic curve implementations are also rarely constant-time, i.e. they may leak information on secret elements through timing-based side channels (including cache attacks). Specialized implementations, however, are specific to a single curve; supporting many different curves securely and efficiently thus requires large amounts of code. Consequently, there is a push for the reduction of the number of ``standard curves''. For instance, in the context of TLS, client and server negotiate elliptic curves with a handshake extension which initially contained no fewer than 25 possible curves, not counting the possibility of sending arbitrary curve equation parameters explicitly\cite{ECTLSrfc4492}. A later revision of the standard reduced that number to just 5, deprecating use of all other curves, as well as the ability to send explicit curve equation parameters\cite{ECTLSrfc8422}. In that respect, defining another curve is counterproductive. The goal of this paper is not to push for immediate adoption and standardisation of our curve; rather, it is an exploration of the concept of keeping to affine coordinates on a field where inversions (and also square roots and cube roots) are efficient. 
We see this new curve as a starting point for further research, especially beyond the basic curve point multiplication operations; for instance, as will be detailed in this article, fast square and cube roots allow for very efficient hash-to-curve operations, making more attractive protocols that entail many such hashing operations. We still took care to write our implementations with a clean API, amenable to integration in applications, and with well-defined encoding and decoding rules, for the following reasons: \begin{itemize} \item Making the effort of writing full implementations guarantees that we did not forget any part that would be required in practice. \item The closer to production-ready structure implementations get, the more meaningful and precise benchmarks become. \item Creations often escape from their creator, and once code has been published, especially with an open-source license, there is no way to prevent it from being reused by anybody. A responsible cryptographic code writer should ensure that any such published code is harmless, i.e. secure enough not to induce catastrophic weaknesses if deployed by the unwary. \end{itemize} \paragraph{Target Platform.} In the current article, we focus on low-end platforms, in particular the ARM Cortex-M0+ CPU, a popular microcontroller core because of its compactness and very low energy usage\footnote{The ARM Cortex-M0+ is an improvement over the previous ARM Cortex-M0; however, they do not differ in their timing characteristics for the operations described here. We also consider the variant with the 1-cycle fast multiplier, not the smaller 32-cycle slow multiplier.}. The techniques we develop are specially meant to map well to what that CPU offers. However, we will see that such optimization does not necessarily forfeit performance on larger systems, especially since, for instance, modern large CPU have SIMD units that can perform many ``small'' operations (e.g. 16-bit multiplications) in parallel. Baseline for any talk about performance is Curve25519 (and its twisted Edwards counterpart Edwards25519). Curve25519 was described in~\cite{Ber2006} and is a Montgomery curve defined over field $\GF(2^{255}-19)$. It should be noted that there are recent works that claim much better performance than Curve25519 on small systems, in particular the Four$\mathbb{Q}$ curve\cite{CraLon2015}. In~\cite{ZhaLinZhaZhoGao2018}, a point multiplication cost under 2 million cycles on an ARM Cortex-M0+ is reported, about 55\% of the cost of the best reported Curve25519 implementation on the same CPU. However, this particular curve has some extra structure (a low-degree endomorphism) which speeds up the computation but is slightly controversial, because cryptographers are often wary of extra structures, especially in elliptic curves (historically, many attacks on special curves exploited their extra structure), and because of the unclear intellectual property status of that endomorphism. Research on Four$\mathbb{Q}$ should be closely followed, but it is still too recent to serve as a reliable comparison point. \paragraph{Article Organization.} The article outline is the following: \begin{itemize} \item Section~\ref{sec:find-field} describes the criteria we used for finding the finite field in which we will define a curve. \item In section~\ref{sec:field-ops}, we explain how operations in the finite field, in particular inversions and square roots, can be implemented efficiently. 
\item Section~\ref{sec:curve9767} includes the definition of our new elliptic curve, Curve9767; we explain how operations are optimized. We provide a generic, constant-time, complete point addition routine, as well as optimized constant-time point multiplication functions, for the three situations commonly encountered in cryptographic protocols (multiplying the conventional generator $G$, multiplying a dynamically obtained point $Q$, and a combined double-multiplication $uG + vQ$ used in ECDSA and Schnorr signature verification). Efficient routines for point compression and decompression, and hash-to-curve operations, are also provided. \item Implementation issues, including performance benchmarks, side channel attacks and countermeasures, and ideas for optimizations on other architectures, are detailed in section~\ref{sec:impl}. \item In appendix~\ref{sec:unused}, we list a few extra facts and ideas that turned out not to work as well as expected, or have disqualifying flaws, but are still worth exposing because they could lead to useful results in other contexts. \end{itemize} All of our source code (reference C code, ARM assembly, test vectors, and support scripts) is available at: \begin{center} \url{https://github.com/pornin/curve9767} \end{center} \section{Finding The Field}\label{sec:find-field} \subsection{Field Type} A finite field has cardinal $p^n$ for a prime $p$ (the field characteristic). Most modern elliptic curves use a prime field (i.e. $n = 1$). Here, we focus on extension fields ($n > 1$). We furthermore investigate only characteristics $p \geq 5$ (when $p = 2$ or $p = 3$, curve equations are different; moreover, an ordinary curve over $\GF(2^n)$ must have an even order, and hence cannot have cofactor $h = 1$). Use of extension fields for defining elliptic curves with efficient implementations has been described under the name ``optimal extension fields''\cite{Mih1997,BaiPaa1998}; however, we diverge from OEF in some aspects, explained below. When $n > 1$, $\GF(p^n)$ is defined by first considering the polynomial ring $\GF(p)[z]$ (in all of this article, $z$ denotes the symbolic variable for polynomials). We then make a quotient ring by computing operations on polynomials modulo a given monic polynomial $M$ of degree $n$. When $M$ is irreducible over $\GF(p)$, this defines a field of cardinal $p^n$. Two finite fields with the same cardinal are isomorphic to each other, and the isomorphisms can be efficiently computed in both directions; therefore, the choice of the exact field (i.e. of the modulus $M$) is irrelevant to security, and we are free to use a modulus that favours efficient implementation. For fast inversion and square roots, as will be explained in section~\ref{sec:field-ops}, we prefer to have $M = z^n - c$ for some constant $c \in \GF(p)$. Since $M$ must be irreducible, $c$ cannot be $0$, $1$ or $-1$. \subsection{Extension Degree}\label{sec:find-field:degree} The extension degree $n$ can be any integer. However, if the degree is very small, or is composite with a very small divisor, then attacks on elliptic curves based on Weil descent may apply. Weil descent is a generic process through which a discrete logarithm problem (DLP) on an algebraic curve over a field $K$ is transformed into a DLP on another curve of higher degree, over a subfield $K'$ of $K$. The latter problem may be easier to solve.
Gaudry, Hess and Smart\cite{GauHesSma2002} have applied that idea to elliptic curves over $\GF(2^n)$; the GHS attack solves DLP on ordinary elliptic curves faster than generic attacks provided that the degree $n$ is composite with small factors. Application of the GHS attack to curve fields with odd characteristic is non-trivial\cite{AriMatNagShi2004}. Gaudry\cite{Gau2009} has shown that if $n = 0\bmod 4$ (i.e. $\GF(p^n)$ is a quartic extension of a subfield) then there exists an algorithm that solves DLP in asymptotic time $O(p^{3n/8})$, i.e. faster than the generic attack (Pollard's Rho algorithm, $O(p^{n/2})$). Conversely, Diem\cite{Die2003} showed that if $n$ is prime and not lower than $11$, then the GHS attack cannot work. It should be noted that the GHS attack is not necessarily the only way to leverage Weil descent in order to break DLP on elliptic curves. Moreover, most known results are asymptotic in nature, and the extent of their applicability to practical cases (with curve order sizes of about 256 bits) is not fully known at this point. However, it seems that using a prime extension degree $n \geq 11$ provides adequate protection against Weil descent attacks. We will use this criterion for our field selection. In that respect, we diverge from OEF, which typically uses smaller and/or composite degrees. \subsection{Delayed Modular Reduction}\label{sec:field-delayed-reduction} Our main target system is the ARM Cortex-M0+. That CPU has a very fast multiplier on 32-bit operands, working in a single clock cycle. However, it returns only the low 32 bits of the result, i.e. it computes modulo $2^{32}$. There is no opcode yielding the upper 32 bits of the product. Since we will need to perform multiplications in $\GF(p)$, this seems to limit the range of $p$ to values of at most 16 bits\footnote{This is not entirely true, as will be explained in section~\ref{sec:unused-signed}.}. Moreover, computations in $\GF(p)$ also involve reduction modulo $p$, which will be substantially more expensive than the product itself. It is thus advantageous to mutualize modular reductions. Consider the product $w = uv$ where $u$ and $v$ are elements of $\GF(p^n)$. Elements of $\GF(p^n)$ are polynomials of degree less than $n$, and we denote the coefficients of $u$ as $(u_i)$ (for $i = 0$ to $n-1$). The modulus for the definition of $\GF(p^n)$ is $M = z^n - c$. We then have the following: \begin{eqnarray*} w_0 &=& u_0 v_0 + c (u_1 v_{n-1} + u_2 v_{n-2} + u_3 v_{n-3} + \dots + u_{n-1} v_1) \\ w_1 &=& u_0 v_1 + u_1 v_0 + c (u_2 v_{n-1} + u_3 v_{n-2} + \dots + u_{n-1} v_2) \\ w_2 &=& u_0 v_2 + u_1 v_1 + u_2 v_0 + c (u_3 v_{n-1} + \dots + u_{n-1} v_3) \\ \dots \end{eqnarray*} There is a quadratic\footnote{As will be explained in section~\ref{sec:field-ops-mul}, this can be done in a sub-quadratic number of products, but still vastly larger than $n$.} number of products in $\GF(p)$; however, we can also compute each $w_i$ over plain integers, and make only one final reduction for each $w_i$, thereby lowering the number of reductions to $n$. This strategy is valid as long as the intermediate values (each $w_i$ before reduction) fit in a machine word, i.e. 32 bits. We see that the largest potential pre-reduction value is for the computation of $w_0$; if each $u_i$ and $v_i$ is an integer less than $p$, then the intermediate value may range up to $(1+c(n-1))(p-1)^2$. We will therefore look for a prime $p$ and degree $n$ such that $(1+c(n-1))(p-1)^2 < 2^{32}$.
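For exposition, the following C model (ours, not the optimized ARM code; the names are illustrative, and the \verb+reduce()+ placeholder uses a plain remainder where the actual implementation uses the fast reduction of the next subsection) shows the delayed-reduction product with one reduction per output coefficient:
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

#define P 9767u
#define N 19u
#define C 2u    /* field modulus is z^N - C */

/* Stand-in for the modular reduction (the real code uses the fast
   Montgomery reduction described in the next subsection). */
static uint32_t reduce(uint32_t x) {
    return x % P;
}

/* w = u*v mod (z^N - C), with a single reduction per output coefficient.
   Coefficients of u and v are assumed to lie in 0..P-1, so each 32-bit
   accumulator stays below (1 + C*(N-1))*(P-1)^2 < 2^32. */
static void gf_mul_delayed(uint32_t *w, const uint32_t *u, const uint32_t *v) {
    for (uint32_t i = 0; i < N; i++) {
        uint32_t acc = 0;
        for (uint32_t j = 0; j <= i; j++)      /* plain terms u_j * v_(i-j) */
            acc += u[j] * v[i - j];
        for (uint32_t j = i + 1; j < N; j++)   /* wrapped terms: z^N = C    */
            acc += C * (u[j] * v[N + i - j]);
        w[i] = reduce(acc);
    }
}

int main(void) {
    uint32_t u[N] = { 3, 1 }, v[N] = { 5, 0, 7 }, w[N];  /* small test values */
    gf_mul_delayed(w, u, v);
    printf("w_0 = %u, w_1 = %u, w_2 = %u\n", w[0], w[1], w[2]);
    return 0;
}
\end{verbatim}
With $p = 9767$, $n = 19$ and $c = 2$, the largest accumulator is bounded by $(1+c(n-1))(p-1)^2 = 37\cdot 9766^2 < 2^{32}$, which is exactly the condition stated above, so the 32-bit accumulation never overflows.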
\subsection{Fast Modular Reduction}\label{sec:field-fast-reduction} Even if delaying modular reduction allows us to perform only $n$ such operations for a product of two elements in $\GF(p^n)$, they still constitute a nonnegligible cost; thus, reduction modulo $p$ should be made as efficient as possible. In the OEF analysis\cite{BaiPaa1998}, moduli very close to a power of 2 are favoured; however, this restricts the number of candidate moduli. In order to have a larger range of potential values for $p$, we instead use Montgomery reduction\cite{Mon1985}. Let $s \in \bN$ such that $p < 2^s$. We define $R = 2^s$, and $f = -1/p \bmod 2^s$. For an integer $x \in \bN$, we can compute $x/R \bmod p$ as follows: \begin{enumerate} \item Let $t = xf \bmod 2^s$. \item Let $t' = x + tp$. \item Return $t' / 2^s$. \end{enumerate} Indeed, we can see that $x + tp = 0 \bmod 2^s$; hence, the division by $2^s$ in the third step is exact. Since $p$ is relatively prime to $2^s$, it follows that the result is correct. Moreover, if $x < p^2$, then the result $t'/2^s$ is less than $2p$ and can be reduced down to the $0$..$p-1$ range with a single conditional subtraction. \emph{Montgomery multiplication} is a plain product, followed by a Montgomery reduction; the Montgomery multiplication of $x$ and $y$ computes $xy/R \bmod p$. It is convenient to use values in \emph{Montgomery representation}, i.e. value $x$ is stored as $xR \bmod p$. Additions and subtractions are unchanged ($xR+yR = (x+y)R$), and the Montgomery product of $xR$ with $yR$ is $(xR)(yR)/R = (xy)R$, i.e. the Montgomery representation of $xy$. We can keep all values in that representation, converting back to integers only for encoding purposes. Traditionally, $s$ is chosen to be close to the minimal value, since we perform computations modulo $2^s$. On an ARM Cortex-M0+, using $s = 16$ for a modulus $p$ less than $2^{16}$ would lead to a reduction using about 15 clock cycles. However, we can do much better, since the multiplication opcode actually works over 32-bit inputs. Suppose that a value $x \in \bN$ must be reduced modulo $p$. Suppose moreover that $0 < x < 2^{32}$. We apply Montgomery reduction with $s = 32$; but modulus $p$ is smaller than $2^{16}$. This has the following consequences: \begin{itemize} \item Value $t$ is computed with a single \verb+mul+ opcode. It is a 32-bit value; we can split it into low and high halves, i.e. $t = t_0 + 2^{16} t_1$. \item We then have: $t' = x + t_0 p + 2^{16} t_1 p$. Since $t_1$ and $p$ are both lower than $2^{16}$, the value $t_1 p$ will fit on 32 bits, and we can also split it into low and high halves: $t_1 p = t_2 + 2^{16} t_3$. \item This implies that $t' = x + t_0 p + 2^{16} t_2 + 2^{32} t_3$. But we know that $t'$ is a multiple of $2^{32}$. Therefore, the three values $x$, $t_0 p$ and $2^{16} t_2$ add up to a value $V$ whose low 32 bits are zero. \end{itemize} Suppose that $0 < x < 2^{32}+2^{16}-(2^{16}-1)p$. In that case: \begin{equation*} 0 < x + t_0 p + 2^{16} t_2 < (2^{32}+2^{16}-(2^{16}-1)p) + (2^{16}-1)p + 2^{16}(2^{16}-1) = 2^{33} \end{equation*} Since value $V$ is a multiple of $2^{32}$, greater than $0$ and lower than $2^{33}$, it follows that $V$ must be equal to $2^{32}$. Therefore, the result of the reduction is necessarily equal to $t_3 + 1$. We do not have to compute other intermediate values at all! Moreover, $t_3$ is the high half of $t_1 p$, with $t_1 < 2^{16}$; this implies that $t_3 < p$.
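As a cross-check of this derivation, the following C model (ours, for exposition only; it mirrors the arithmetic, not the cycle-exact ARM code given below) computes the constant $f = -1/p \bmod 2^{32}$ by Hensel lifting and verifies that the three-step reduction returns, in the $1$..$p$ range, a value congruent to $x/2^{32}$ modulo $p$:
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

#define P 9767u

/* -1/P mod 2^32, via Hensel lifting (each step doubles the number of
   correct low bits; P itself is a correct inverse of P modulo 8). */
static uint32_t neg_inv_p(void) {
    uint32_t inv = P;
    for (int i = 0; i < 4; i++)
        inv *= 2 - P * inv;
    return (uint32_t)0 - inv;
}

/* Fast reduction: returns x / 2^32 mod P, in the 1..P range
   (P stands for zero), for 0 < x < 2^32 + 2^16 - (2^16 - 1)*P. */
static uint32_t mred(uint32_t x, uint32_t f) {
    uint32_t t  = x * f;           /* t = x*f mod 2^32   (muls)       */
    uint32_t t1 = t >> 16;         /*                    (lsrs #16)   */
    uint32_t t3 = (t1 * P) >> 16;  /*                    (muls, lsrs) */
    return t3 + 1;                 /*                    (adds #1)    */
}

int main(void) {
    uint32_t f = neg_inv_p();
    uint64_t R = (1ULL << 32) % P;   /* 2^32 mod P */
    /* spot-check: result * 2^32 must be congruent to x modulo P */
    for (uint32_t x = 1; x <= 1000000; x++) {
        uint32_t r = mred(x, f);
        if ((r % P) * R % P != x % P || r < 1 || r > P) {
            printf("mismatch at x = %u\n", x);
            return 1;
        }
    }
    printf("f = 0x%08X; reduction verified on a sample of the valid range\n",
           (unsigned)f);
    return 0;
}
\end{verbatim}
The loop only samples the beginning of the valid input range; it is meant as a sanity check of the reasoning, not as a proof.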
This derivation leads to the following algorithm: \begin{itemize} \item We represent elements of $\GF(p)$ with Montgomery representation in the $1$..$p$ range: value $a$ is stored as $aR \bmod p$, and if $a = 0$, then we store the value as $p$, not $0$. \item We perform additions and plain integer multiplications, resulting in a value $x$ that must be reduced. That value fits on a 32-bit word. Since we started with non-zero integers and only performed additions and multiplications (not subtractions), we have $x > 0$. We assume that $x < 2^{32}+2^{16}-(2^{16}-1)p$. \item To perform a Montgomery reduction of value $x$, we apply the following steps: \begin{enumerate} \item $t = xf \bmod 2^{32}$ (where $f = -1/p \bmod 2^{32}$ is precomputed). \item $t_1 = \lfloor t/2^{16} \rfloor$ (a ``right shift'' operation). \item $t_3 = \lfloor (t_1p)/2^{16} \rfloor$ (a multiplication by the constant $p$, followed by a right shift). \item The reduced value is $t_3+1$, and is already in the $1$..$p$ range; no conditional subtraction is needed. \end{enumerate} \end{itemize} This implementation of Montgomery reduction uses only 5 cycles on an ARM Cortex-M0+; it requires two constants but no extra scratch register:
\begin{verbatim}
    muls   r0, r7     @ r0 <- r0 * r7
    lsrs   r0, #16    @ r0 <- r0 >> 16
    muls   r0, r6     @ r0 <- r0 * r6
    lsrs   r0, #16    @ r0 <- r0 >> 16
    adds   r0, #1     @ r0 <- r0 + 1
\end{verbatim}
In this code, registers \verb+r6+ and \verb+r7+ must have been loaded with the constants $p$ and $-1/p \bmod 2^{32}$, respectively. Since they are not modified, they can be reused for further Montgomery reductions with no reloading cost. For this algorithm to work properly in our case (implementation of multiplications in $\GF(p^n)$), we need the pre-reduction values to fit in the acceptable range for Montgomery reduction. In the previous section, we assumed that polynomial coefficients were integers strictly less than $p$, but our representation now allows the value $p$ itself (which stands for zero). Moreover, the range for fast Montgomery reduction is somewhat smaller than $2^{32}$. We will therefore require the following: \begin{equation*} (1 + c(n-1)) p^2 + (2^{16}-1)p < 2^{32}+2^{16} \end{equation*} \subsection{Field Selection Criteria} We need a field $\GF(p^n)$ of a sufficient size to achieve a given security level. The order of a curve defined over a field of cardinal $q$ is close to $q$ (by Hasse's theorem, it differs from $q+1$ by at most $2\sqrt{q}$). Since Pollard's Rho algorithm solves DLP in a group of order $q$ in time $O(\sqrt{q})$, we need a 256-bit $q$ in order to achieve the traditional 128-bit security level. The choice of ``128 bits'' is not very rational. In general, we want a security level which is such that attacks are not practically feasible, and on top of that some ``security margin'', an ill-defined notion. ``128'' is a power of two, i.e. a nice number for somebody who thinks in binary; this makes it psychologically powerful. However, in practice, some deviations are allowed. For instance, Curve25519 uses a 255-bit field and has cofactor $h = 8$, leading to a group order close to $2^{252}$. This would technically make it a 126-bit curve, two bits short of the target 128-bit level.
Curve25519 is still widely accepted to offer ``128-bit security'' for the official reason that the level is really about equivalence to AES-128 against brute force attacks and each step in Pollard's Rho algorithm will involve substantially more work than one AES encryption; and, officiously, ditching Curve25519 because of a failure to reach a totally arbitrary level by only two bits would be too inconvenient. For the same reasons, in our own field selection process, we will be content with any field that has cardinal close to $2^{250}$ or greater. Taking all criteria listed so far, we end up with the following list: \begin{itemize} \item $p < 2^{16}$, and is prime. \item $n \geq 11$, and is prime. \item Polynomial $z^n-c$ is irreducible over $\GF(p)$ for some constant $c$ (this requires that $n | p-1$). \item $p^n \geq 2^{250}$. \item $(1 + c(n-1)) p^2 + (2^{16}-1)p < 2^{32}+2^{16}$. \end{itemize} We want to minimize the degree $n$, since that parameter is what will drive performance. Therefore, for each potential $n$ value, we want to find the largest possible $p$ that satisfies the criteria above. Note that since $n$ divides $p-1$, the criteria imply that $n^3 < 2^{31}$, i.e. $n < 1291$. We enumerated all primes $n$ from $11$ to $1289$, and obtained the optimal values listed on table~\ref{tab:degree-modulus}. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline degree $n$ & modulus $p$ & field size $\log_2(p^n)$ \\ \hline 11 & 12739 & 150.007 \\ 13 & 11831 & 175.894 \\ 17 & 10337 & 226.704 \\ 19 & 9767 & 251.820 \\ 23 & 8971 & 302.014 \\ 29 & 7541 & 373.536 \\ 31 & 7193 & 397.184 \\ 37 & 6883 & 471.706 \\ 41 & 6397 & 518.370 \\ 43 & 6709 & 546.611 \\ 47 & 6299 & 593.183 \\ 53 & 6043 & 665.736 \\ 59 & 5783 & 737.359 \\ \hline \end{tabular} \end{center} \caption{\label{tab:degree-modulus}Optimal field degrees and base field modulus. All solutions use $M = z^n-2$.} \end{table} All of these solutions use polynomial $z^n-2$ (i.e. $c = 2$ yields the best results). The list is exhaustive in the following sense: for any line in the table, corresponding to a solution $(n,c,p)$, there is no triplet $(n',c',p')$ that fulfills the criteria and such that $n' \leq n$ and $p'^{n'} > p^n$. With larger degrees, we can get to increasingly larger fields, up to $n = 761$ for a $8045.83$-bit field (with $p = 1523$). For our target goal of ``128-bit security'', the best choice appears to be $n = 19$ and $p = 9767$: this yields a field size (and thus, a curve order) of about $251.82$ bits, very close to the 252 bits of Curve25519. \section{Efficient Field Operations}\label{sec:field-ops} \subsection{Platform Details}\label{sec:field-ops-platform} The ARM Cortex-M0+ is a small, low-power core that implements the ARMv6-M architecture. This follows the ``Thumb'' instruction set, in which almost all instructions are encoded over 16 bits; this instruction set is much more limited than what is offered by larger cores such as the ARM Cortex-M3 and M4, that use the ARMv7-M architecture. The following points are most relevant to implementation: \begin{itemize} \item There are 16 registers (\verb+r0+ to \verb+r15+); however, the program counter (\verb+r15+) and the stack pointer (\verb+r13+) cannot practically be used to store any state values. Register \verb+r9+ is reserved (e.g. to support position-independent code, or thread-local storage) and is best left untouched. There are thus 13 usable registers. 
\item Very few operations can use the ``high'' registers (\verb+r8+ to \verb+r15+): only simple copies (\verb+mov+) and additions (\verb+add+). Moreover, the additions are of the two-operand kind: one of the source operands is the destination; thus, that operand is consumed. \item A few operations on the ``low'' registers (\verb+r0+ to \verb+r7+) can have an output distinct from the operands, e.g. additions (\verb+adds+) and subtractions (\verb+subs+). Multiplications (\verb+muls+), however, are two-operand: when a product is computed, one of the source values is consumed. If both source operands must be retained for further computations, then an extra copy will be needed. \item All computation opcodes execute in 1 cycle each, with no special latency (the result of an opcode can be used as source in the next one with no penalty). However, memory accesses take 2 cycles, both for reading and for writing. The \verb+ldm+ and \verb+stm+ opcodes can respectively read and write several words in a faster way ($1+N$ cycles for $N$ 32-bit words), furthermore incrementing accordingly the register used as pointer for that access. The destination or source registers must be among the low registers, and are used in ascending order. \item Unaligned accesses are not tolerated (they trigger CPU exceptions). 8-bit and 16-bit accesses can be performed, but cannot use addressing based on the stack pointer, contrary to 32-bit accesses. \end{itemize} Since arithmetic operations are fast, but memory accesses are slow, and the number of available registers is limited, most of the computation time will not be spent in actual computations, but when moving data. Optimization efforts consist mostly in finding the algorithmic data flow that will minimize the number of memory accesses, and will allow the use of the relatively faster \verb+ldm+ and \verb+stm+ opcodes with two or more words per opcode. Performance of any routine written in assembly can be obtained in two ways: \begin{itemize} \item by measuring it on a test microcontroller that has a precise cycle counter; \item by painstakingly counting instructions manually. \end{itemize} We applied both methods to our code, and they match perfectly. The test system is an Atmel (now Microchip) SAM D20 Xplained Pro board, using an ATSAMD20J18 micro\-con\-trol\-ler\cite{SAMD20}. That microcontroller can be configured to run on several clock sources; moreover, it also has some internal counters that can also be configured to use these clock sources. By using the same source (the internal 8~MHz oscillator) for both, an accurate cycle counter is obtained. It shall be noted that while the SAM D20 board can run at up to 48~MHz, the Flash element that stores the code cannot provide 1-cycle access time at high frequencies; extra wait states are generated, that slow down execution. By running tests at 8~MHz, we can avoid any wait state. This is the usual way of providing benchmark values, and all figures in this article assume zero-wait state RAM and ROM accesses\footnote{It is also possible to copy code into RAM and then executed it from RAM, allowing high-frequency execution with no wait state; however, RAM is normally a scarce resource on microcontrollers, thus making that trick rarely worth it.}. Counting instructions manually is a valid method, since the timing rules are simple (no hidden penalty or optimizations). In our code, we obtain the exact same cycle counts as what the measures show. 
This allows most of the optimization work to be done while working only with a cycle-inaccurate software simulator. In practice, development was done mostly against an embedded Linux libc (libc and compiler were obtained through the Buildroot project\cite{Buildroot}) and executed with QEMU\cite{QEMU} in user-only mode (no full system emulation). This combination provides great ease of debugging, but tests must still be ultimately performed on actual hardware, because QEMU does not trap on unaligned accesses. Tests on hardware use the cryptographic routines alone, with no libc, and only minimal boot code to configure the clocks and serial line (for measure reporting). The exact definition of the performance of a software routine is subject to some semi-arbitrary choices: a routine must receive parameters and return results. The callee must preserve some register values, as per the used ABI; the caller must then save all values that are not in preserved registers, but that must still be retained. Which of these saving costs should be counted as part of the routine cost is a matter of definition. In our implementations, we found that for most ``expensive'' routines (e.g. multiplication of two elements), callers usually do not have many values to retain, and it is wasteful to force the callee to save registers that the caller will not need. Indeed, saving registers \verb+r4+ to \verb+r8+, \verb+r10+ and \verb+r11+, as per the standard ARM ABI, and restoring them on exit, requires 22~cycles in addition to the normal function entry (saving of the link register \verb+r14+ on the stack) and exit (restoring of that value into the program counter \verb+pc+). Our internal routines (which are not callable from C code) therefore use a modified ABI in which these registers are not saved. For such routines, the figures reported below include all opcodes that constitute the function body (including the initial ``\verb+push { lr }+'' and the final ``\verb+pop { pc }+'') but not the cost of the \verb+bl+ opcode that calls the routine (3 cycles) nor any value-saving costs on the caller side. \subsection{Baseline Performance}\label{sec:field-ops-baseline} In \cite{DulHaaHinHutPaaSanSch2015}, an implementation of Curve25519 point multiplication (with the Montgomery ladder) is reported, with the following performance: \begin{itemize} \item field multiplication: $1\,469$ cycles \item field squaring: $1\,032$ cycles \item curve point multiplication: $3\,589\,850$~cycles \end{itemize} More recently, Haase and Labrique\cite{HaaLab2019} reported slight improvements on the same operations: \begin{itemize} \item field multiplication: $1\,478$ cycles \item field squaring: $998$ cycles \item curve point multiplication: $3\,474\,201$~cycles \end{itemize} In \cite{NisMam2016}, Nishinaga and Mambo claim a faster field multiplication, at only $1\,350$~cycles; however, the performance of the complete curve point multiplication routine is worse than above, at $4\,209\,843$~cycles. Chances are that while they have a faster multiplication routine, they do not have a dedicated routine for squarings, making the latter substantially slower than they could be. Moreover, they report substantially longer times for the ARM Cortex-M0 when compared with the M0+ (about +23\% cycles), which is not consistent with known instruction timings: the M0 and M0+ should differ only in rare corner cases, such as the cost of a taken conditional branch (this costs one extra cycle on the M0).
It is possible that their test platform for the ARM Cortex-M0 was used at a frequency that induced extra wait states when reading from ROM/Flash. Since their code was not published, such hypotheses cannot be verified. We consequently disregard these figures in our evaluation. We did not find any published benchmark for curve Edwards25519 on an ARM Cortex-M0 or M0+. We can make some rough estimates, based on the figures for Curve25519. The Montgomery ladder implementation needs 5 multiplications and 4 squarings per multiplier bit (using the formulas in~\cite{Curve25519rfc7748} and ignoring the multiplication by the constant \texttt{a24} which is much faster than a normal multiplication because the constant is a small integer). Since the result is obtained as a fraction, an extra inversion is needed, normally implemented with a modular exponentiation for a cost of about one squaring per modulus bit. The total cost per bit is then 5 multiplications and 5 squarings (denoted: ``5M+5S''), plus some lighter operations (field additions and subtractions, conditional swap). We note that with the figures listed above, the multiplications and squarings account for about 90\% of the total cost. For Edwards25519, a typical point multiplication will need about $251$ doublings, and some point additions. A microcontroller usually has limited RAM, thereby preventing use of large windows; a 4-bit window, storing 8 precomputed points in extended coordinates, will need 1~kB of temporary RAM space, and imply one point addition every 4 point doublings\footnote{For the purposes of this paragraph, we are considering a constant-time point multiplication, suitable for all purposes, thus without wNAF optimizations that can be applied when processing public values, e.g. for signature verification.}. Using the formulas in~\cite{EdDSArfc8032}, we find that: \begin{itemize} \item point doubling uses 4M+4S; \item point addition uses 8M (ignoring the multiplication by the curve constant $d$); \item decoding an input point (e.g. a Diffie-Hellman public key) from its encoded (com\-pres\-sed) format requires a combined inversion and square root, normally done with a modular exponentiation (1S per modulus bit); \item obtaining the final point in affine coordinates implies an inversion, hence an extra 1S per modulus bit. \end{itemize} This brings the total cost to 6M+6S per bit, i.e. about 20\% more expensive than the Montgomery ladder, but more versatile: since curve Edwards25519 offers generic complete point addition formulas, it supports many cases beyond plain Diffie-Hellman, e.g. optimizing point multiplication for a conventional generator (as used in key pair generation, and signature generation), performing combined multiplications (as in EdDSA signature verification), and more generally supporting any protocol. The Ristretto map does not substantially change these figures: decoding and encoding imply a modular exponentiation each, which replace the exponentiations involved in point decompression and in conversion back to affine coordinates. This estimate thus rates the ``multiplication by a scalar'' operation on the prime-order Ristretto255 group at about 4.2 million cycles on an ARM Cortex-M0+. Keep in mind that it is only an estimate that cannot replace actual benchmarks: \begin{itemize} \item The ``+20\%'' expression assumes that operations other than multiplications and squarings add up to about 10\% of the total cost, as is the case in Curve25519 implementations.
\item Any particular usage context may have more available RAM and thus allow for larger windows in the point multiplication algorithm. \item Decoding and encoding normally occur at the boundaries of the protocol, when performing I/O. If a given protocol calls for several operations (e.g. several point multiplications, and operations between the results), then the encoding and decoding could be mutualized, thereby reducing their relative cost. \end{itemize} \subsection{Element Representation}\label{sec:field-ops-repr} A field element is a polynomial with coefficients in $\GF(p)$, with degree at most $18$. The representation in memory must thus use $19$ elements, each being an integer modulo $p$. As explained in section~\ref{sec:find-field}, we use Montgomery representation for elements in $\GF(p)$, with values in the $1$..$p$ range. We use 16 bits per $\GF(p)$ element. To allow for accessing multiple elements at once, especially with the \verb+ldm+ and \verb+stm+ opcodes, we enforce 32-bit alignment for the whole array (i.e. array elements with even indices are guaranteed to be 32-bit aligned). Packing two elements in a single 32-bit word makes the representation relatively compact (40 bytes per field element, including a dummy slot for alignment purposes), which saves RAM and improves efficiency of bulk transfer operations. This also allows making two (non-modular) additions in one operation: 16-bit low halves add with each other without inducing extra carries into the high halves, since $2p < 2^{16}$. On the other hand, use of the compact format makes some operations somewhat harder, notably stack-based direct access to a 16-bit value: there is no opcode that can read or write a single 16-bit element using \verb+sp+ as base address register. We still found in our implementations that the compact format yields slightly better performance, and much lower RAM usage, than the 32-bits-per-value format. A simple way to express things is that since, on an ARM Cortex-M0+, operations are 1 cycle but memory accesses are 2 cycles each, the biggest cost is not computations but moving the data around. Anything that reduces memory exchanges tends to be good for performance. \subsection{Additions and Subtractions}\label{sec:field-ops-add} When adding field elements, the individual polynomial coefficients must be added pairwise. The addition or subtraction itself can be done on the packed format, but reducing the result into the expected range ($1$..$p$) requires splitting words into individual elements, and performing a conditional subtraction of the modulus $p$. This last operation can be performed in 4 cycles on an ARM Cortex-M0+: \begin{verbatim} subs r1, r7, r0 @ r1 <- r7 - r0 asrs r1, #31 @ r1 <- r1 >> 31 (with sign extension) ands r1, r7 @ r1 <- r1 & r7 subs r0, r1 @ r0 <- r0 - r1 \end{verbatim} This code snippet reduces value in \verb+r0+, using \verb+r1+ as scratch register. The register \verb+r7+ must have been loaded with the constant value $p = 9767$ (\verb+r7+ is not modified and can be reused for further reductions). This code subtracts $p$ from \verb+r0+ only if the reverse operation (subtracting \verb+r0+ from $p$) would have yielded a strictly negative value (sign bit set); thus, values in $1$..$p$ are unchanged, and values in $p+1$..$2p$ are reduced by subtracting $p$. The result is in the expected range ($1$..$p$). Subtractions can be done in a similar way, but with an extra detail to take into account: the subtraction can yield a negative value. 
Thus, when subtracting two 32-bit words, a carry bit from the low halves may impact the high halves; the splitting of the word into two 16-bit values will then need to compensate for this potential carry:
\begin{verbatim}
    subs   r3, r5     @ r3 <- r3 - r5
    sxth   r4, r3     @ sign-extend low half of r3 into r4
    subs   r3, r4     @ r3 <- r3 - r4
    asrs   r3, #16    @ r3 <- r3 >> 16 (with sign extension)
\end{verbatim}
The \verb+sxth+ opcode sign-extends the low half of \verb+r3+, thus interpreting these 16 bits as a signed representation; then, the second \verb+subs+ opcode subtracts that sign-extended value from the source: this clears the low half \emph{and} removes the action of the carry resulting from the initial subtraction, if there was one. The arithmetic right shift then recovers the high half with signed interpretation. Once the two halves have thus been separated, the reduction is done by adding $p$ to the value $x$ if and only if $x - 1 < 0$, i.e. if and only if $x \leq 0$ (the $-1$ is needed so that a zero result is mapped to $p$, which is how zero is represented, instead of being left at $0$). In the course of computations on elliptic curve points, it often happens that several additions or subtractions must be applied successively, sometimes with small factors. For instance, given field elements $u$, $v$ and $w$, $2(u - v - w)$ must be computed. It is then efficient to combine the operations, in order to mutualize the RAM accesses and the modular reductions. The intermediate value range is greater, though, preventing use of the 4-cycle conditional additions or subtractions explained above. Instead, we can use a derivative of Montgomery reduction; namely, to reduce value $x$, we apply Montgomery reduction on $xR$, where $R = 2^{32} \bmod p$. Moreover, the first step of Montgomery reduction is a multiplication by a constant (modulo $2^{32}$); we can merge that multiplication with the multiplication by $R$. This yields the following code sequence:
\begin{verbatim}
    muls   r0, r7     @ r0 <- r0 * r7
    lsrs   r0, #16    @ r0 <- r0 >> 16
    muls   r0, r6     @ r0 <- r0 * r6
    lsrs   r0, #16    @ r0 <- r0 >> 16
    adds   r0, #1     @ r0 <- r0 + 1
\end{verbatim}
which is identical to the one used for Montgomery reduction (see section~\ref{sec:field-fast-reduction}), but the constants are different: we set \verb+r6+ to $p = 9767$, and \verb+r7+ to $-(2^{32} \bmod p)/p \bmod 2^{32} = 439\,742$. Exhaustive experiments show that for all inputs $x$ in the $1$..$509\,232$ range, the correct reduced value in the $1$..$p$ range is obtained. We implemented dedicated unrolled routines for all such ``linear'' operations that are needed in our curve point operation routines, with individual costs ranging between 173 and 275 cycles. \subsection{Multiplication}\label{sec:field-ops-mul} Consider a multiplication of field elements $u$ and $v$, each consisting of $19$ elements. The generic ``schoolbook'' multiplication routine, using the formulas in section~\ref{sec:field-delayed-reduction}, leads to $361$ multiplications and $361$ additions (since the field modulus is $z^{19}-c$ with $c = 2$, multiplications by $c$ are equivalent to additions). Moreover, extra copies are needed, because the \verb+muls+ opcode consumes one of its operands; at the very least, all but 19 of the multiplications must involve an extra copy. The total bare minimum cost of a schoolbook multiplication is then $3\times 361 - 19 = 1064$~cycles. This is not attainable, since this count ignores all the costs of reading data from RAM and writing it back. Also, the Montgomery reductions must be applied on top of that.
Karatsuba multiplication\cite{KarOfm1962} reduces the cost of a multiplication of polynomials. Suppose that two polynomials $u$ and $v$, of degree less than $m$, must be multiplied together. We split the polynomial $u$ into ``low'' and ``high'' halves: \begin{equation*} u = u_l + z^{m/2} u_h \end{equation*} where $u_l$ and $u_h$ are polynomials of degree less than $m/2$. We similarly split $v$ into $v_l$ and $v_h$. We then have: \begin{eqnarray*} uv &=& (u_l + z^{m/2} u_h) (v_l + z^{m/2} v_h) \\ &=& u_l v_l + z^{m/2} (u_l v_h + u_h v_l) + z^m u_h v_h \\ &=& u_l v_l + z^{m/2} ((u_l + u_h) (v_l + v_h) - u_l v_l - u_h v_h) + z^m u_h v_h \\ \end{eqnarray*} We can thus multiply two $m$-element polynomials by computing three products of polynomials with $m/2$ elements: $u_l v_l$, $u_h v_h$, and $(u_l+u_h)(v_l+v_h)$. Applied recursively on these sub-products, Karatsuba multiplication leads to sub-quadratic asymptotic cost $O(m^{\log_2 3}) \approx m^{1.585}$. The description above assumes that $m$ is even, and that the splits are even. Uneven splits are also possible, although usually less efficient. If we start with $m = 19$, and split into low halves of 10 elements (degree less than 10) and high halves of 9 elements, then the three sub-products are: \begin{itemize} \item $u_l v_l$: two polynomials of degree less than 10, result of degree less than 19; \item $u_h v_h$: two polynomials of degree less than 9, result of degree less than 17; \item $(u_l+u_h)(v_l+v_h)$: two polynomials of degree less than 10, result of degree less than 19. However, since $u_h$ and $v_h$ are one element shorter than $u_l$ and $v_l$, the top element (degree 18) of this product is necessarily equal to the top element of $u_l v_l$, and the subsequent polynomial subtraction will cancel out these values. We can thus content ourselves with computing the low 18 elements of $(u_l+u_h)(v_l+v_h)$ (degrees 0 to 17) and ignore the top one. \end{itemize} Asymptotic behaviour is an approximation of the cost for sufficiently large inputs; but our inputs are not necessarily large enough for that approximation to be accurate. A Karatsuba split reduces the number of multiplications but increases the number of additions; for small enough inputs, the extra additions overtake the cost savings from doing fewer multiplications. The threshold depends on the relative costs of additions and multiplications. In our case, multiplications are inexpensive, since Montgomery reduction is delayed. Moreover, as was pointed out previously, the costs of exchanging data between registers and RAM tend to be higher than the costs of computations. Thus, estimates based on operation counts that assume ideally free data movements may lead to the wrong conclusions, and only actual experiments will yield proper results. In our implementation, we found that, on the ARM Cortex-M0+, one level of Karatsuba split is optimal; the operands are split into a low half of 10 elements, and a high half of 9 elements. The computation of $u_l+u_h$ can be done two elements at a time, since elements are expressed over 16 bits and packed by pairs into 32-bit words. Performing further splits yields only worse performance. The additions and subtractions that follow the three sub-products (the ``Karatsuba fix-up'') must operate on the 32-bit intermediate words (Montgomery reduction has not been applied yet at this point). We combine these operations with the reduction modulo $z^{19}-2$, and with Montgomery reductions.
If we define the following: \begin{eqnarray*} \alpha &=& u_l v_l \\ \beta &=& u_h v_h \\ \gamma &=& (u_l+u_h)(v_l+v_h) \\ w &=& uv \bmod (z^{19}-2) \\ \end{eqnarray*} then the output words are computed as: \begin{equation*} w_i = \alpha_i + \gamma_{i-10} - \alpha_{i-10} - \beta_{i-10} + 2 (\beta_{i-1} + \gamma_{i+9} - \alpha_{i+9} - \beta_{i+9}) \end{equation*} with the convention that out-of-range coefficients are zero (i.e. $\alpha_j = 0$ when $j < 0$ or $j \geq 19$). In the expression above, $\gamma_{i-10}$, $\alpha_{i-10}$ and $\beta_{i-10}$ can be non-zero only if $\gamma_{i+9}$, $\alpha_{i+9}$ and $\beta_{i+9}$ are zero, and vice versa. Each $w_i$ would then entail reading five 32-bit words, but some of these read operations can be shared if we produce the output words in the order: $w_9$, $w_0$, $w_{10}$, $w_1$,\ldots, i.e. computation of $w_j$ is followed by computation of $w_{j-9 \bmod 19}$. Only three memory reads are then needed for each output word on average. Some of these words can be further optimized by noticing that, for instance, $\beta_j = 0$ for $j \geq 17$, and $\gamma_{18} = \alpha_{18}$. Putting it all together, we obtain the individual costs detailed in table~\ref{tab:fieldmul}, for a total cost of $1\,574$ cycles. Compared to the baseline performance (section~\ref{sec:field-ops-baseline}), this is about 7.1\% higher than the field multiplication cost reported in~\cite{DulHaaHinHutPaaSanSch2015}. Thus, while the use of the finite field $\GF(9767^{19})$ does not yield a faster multiplication routine than with the field $\GF(2^{255}-19)$, it is still competitive, the difference in performance being slight. \begin{table}[H] \begin{center} \begin{tabular}{|l|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} \\ \hline function prologue & 5 \\ $u_l+u_h$ and $v_l+v_h$ & 63 \\ $u_l v_l$ & 410 \\ $u_h v_h$ & 345 \\ $(u_l+u_h)(v_l+v_h)$ & 405 \\ Karatsuba fix-up and Montgomery reduction & 337 \\ function exit & 5 \\ \hline \textsf{\textbf{Total}} & $1\,574$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:fieldmul}Field multiplication cost.} \end{table} \paragraph{Squarings.} Squarings can be optimized by noticing that: \begin{eqnarray*} u^2 &=& (u_l + z^{10} u_h)^2 \\ &=& u_l^2 + 2 z^{10} u_l u_h + z^{20} u_h^2 \\ \end{eqnarray*} reducing the 19-element squaring to a 10-element squaring, a 9-element squaring, and a $10\times 9$-element multiplication. However, in our experiments, we found that squarings of polynomials of 10 elements or fewer are almost twice as fast as generic multiplications, mostly because all operands can then fit into registers and avoid almost all read and write accesses to RAM. Therefore, better performance is achieved by using a Karatsuba split for the squaring as well: \begin{eqnarray*} u^2 &=& (u_l + z^{10} u_h)^2 \\ &=& u_l^2 + z^{10} ((u_l+u_h)^2 - u_l^2 - u_h^2) + z^{20} u_h^2 \\ \end{eqnarray*} We obtain the cycle counts detailed in table~\ref{tab:fieldsqr}, for a total of 994~cycles. This is very slightly faster than the best reported baseline squaring in $\GF(2^{255}-19)$ ($998$ cycles, in~\cite{HaaLab2019}). A noteworthy point is that squaring costs are only about 63.2\% of multiplication costs; in elliptic curve computations, this makes it worthwhile to replace 2 multiplications with 3 squarings. This impacts analysis of elliptic curve formulas; e.g. \cite{EFD} ranks formulas under the assumption that a squaring cost is 80\% of a multiplication cost.
\begin{table}[H] \begin{center} \begin{tabular}{|l|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} \\ \hline function prologue & 5 \\ $u_l+u_h$ & 30 \\ $u_l^2$ & 219 \\ $u_h^2$ & 182 \\ $(u_l+u_h)^2$ & 216 \\ Karatsuba fix-up and Montgomery reduction & 337 \\ function exit & 5 \\ \hline \textsf{\textbf{Total}} & 994 \\ \hline \end{tabular} \end{center} \caption{\label{tab:fieldsqr}Field squaring cost.} \end{table} Another important remark is that additions and subtractions are relatively expensive: an addition in the field is 173 cycles, i.e. about 11\% of the cost of a multiplication, and 17.4\% of the cost of a squaring. This highlights that counting multiplications and squarings is not sufficient to get an accurate estimate of the cost of a complete operation on elliptic curve points. \subsection{Inversion}\label{sec:field-ops-inv} Modular inversion can be computed in several ways. The main recommended method is to use Fermat's little theorem; namely, the inverse of $u$ in a finite field of cardinal $q$ is $u^{q-2}$. Nominally, $u = 0$ does not have an inverse, but the exponentiation yields a result of 0 if the operand is 0, and that turns out to be convenient in some edge cases. For an $m$-bit exponent $q-2$, $m-1$ squarings will be needed, along with some extra multiplications; since the exponent is known in advance and not secret, the number of extra multiplications can be made quite small by using an optimized addition chain on the exponent. In~\cite{DulHaaHinHutPaaSanSch2015}, inversion in $\GF(2^{255}-19)$ is performed with 254 squarings, and 11 extra multiplications. On $\GF(9767^{19})$, we can use a much faster method, which computes an inversion at a cost equivalent to only 6 multiplications. The method was initially described by Itoh and Tsujii in the context of binary fields\cite{ItoTsu1988}, then adapted to other finite field extensions (e.g. see~\cite{HanMenVan2003}). It uses the following remark: \begin{equation*} p^n - 1 = (p-1)(1 + p + p^2 + p^3 + \dots + p^{n-1}) \end{equation*} Let $r = 1 + p + p^2 + \dots + p^{n-1}$, and let $x \neq 0$ be a field element to invert. By Fermat's little theorem, we have: \begin{eqnarray*} (x^r)^{p-1} &=& x^{p^n-1} \\ &=& 1 \\ \end{eqnarray*} Therefore, $x^r$ is a root of the polynomial $X^{p-1}-1$ over the finite field $\GF(p^n)$. That polynomial can have at most $p-1$ roots, and all non-zero elements of $\GF(p)$ are roots; therefore, the roots of $X^{p-1}-1$ over $\GF(p^n)$ are exactly the non-zero elements of the sub-field $\GF(p)$. It follows that $x^r \in \GF(p)$. We can thus compute the inverse of $x$ as: \begin{equation*} x^{-1} = \frac{x^{r-1}}{x^r} \end{equation*} The division by $x^r$ is easy since it requires only an inversion in $\GF(p)$, followed by a multiplication of $x^{r-1}$ by that inverse, which is also in $\GF(p)$. The values $x^{r-1}$ and then $x^r$ can be efficiently computed through application of the Frobenius automorphism. In a finite field $\GF(p^n)$, we define the $j$-th Frobenius operator (for $0 \leq j < n$) as: \begin{eqnarray*} \Phi_j : \GF(p^n) &\longrightarrow& \GF(p^n) \\ x &\longmapsto& x^{p^j} \\ \end{eqnarray*} Since $\GF(p^n)$ has characteristic $p$, these operators are automorphisms: $\Phi_j(xy) = \Phi_j(x)\Phi_j(y)$ and $\Phi_j(x+y) = \Phi_j(x) + \Phi_j(y)$ for all $x$ and $y$ in $\GF(p^n)$.
Moreover, when the finite field is defined as the quotient ring $\GF(p)[z]/(z^n-c)$, then we have: \begin{equation*} \Phi_j(z^i) = c^{ij(p-1)/n} z^i \end{equation*} This means that computing $\Phi_j(x)$ over a field element $x$ is a simple matter of term-by-term multiplication with the values $c^{ij(p-1)/n}$ in $\GF(p)$; these values can be precomputed. In our implementation, application of a Frobenius operator costs 211 cycles, i.e. slightly more than an addition, but much less than a multiplication. This optimization of the Frobenius operator is the main reason why we wanted a field extension modulus of the form $z^n-c$ for some constant $c$ in $\GF(p)$, and not, for instance, $z^n-z-1$ (which would also have supported base primes $p$ in an adequate range). The inversion algorithm (see algorithm~\ref{alg:inverse}) leverages these facts to compute $x^{r-1}$ in only 5 multiplications, and 6 applications of a Frobenius operator. \begin{algorithm}[H] \caption{\ \ Fast inversion in $\GF(9767^{19})$}\label{alg:inverse} \begin{algorithmic}[1] \Require{$x \in \GF(p^n)$, $x\neq 0$, $p = 9767$, $n = 19$} \Ensure{$1/x$} \State{\label{alg:inverse:mulfrob1}$t_1 \gets x\cdot\Phi_1(x)$}\Comment{$t_1 = x^{1+p}$} \State{\label{alg:inverse:mulfrob2}$t_1 \gets t_1\cdot\Phi_2(t_1)$}\Comment{$t_1 = x^{1+p+p^2+p^3}$} \State{\label{alg:inverse:mulfrob3}$t_1 \gets t_1\cdot\Phi_4(t_1)$}\Comment{$t_1 = x^{1+p+p^2+\dots+p^7}$} \State{\label{alg:inverse:mulfrob4}$t_1 \gets x\cdot\Phi_1(t_1)$}\Comment{$t_1 = x^{1+p+p^2+\dots+p^8}$} \State{\label{alg:inverse:mulfrob5}$t_1 \gets t_1\cdot\Phi_9(t_1)$}\Comment{$t_1 = x^{1+p+p^2+\dots+p^{17}}$} \State{\label{alg:inverse:frob6}$t_1 \gets \Phi_1(t_1)$}\Comment{$t_1 = x^{p+p^2+p^3+\dots+p^{18}} = x^{r-1}$} \State{\label{alg:inverse:multoGFp}$t_2 \gets x t_1$}\Comment{$t_2 = x^r \in \GF(p)$} \State{\label{alg:inverse:invGFp}$t_2 \gets t_2^{p-2}$}\Comment{Inversion in $\GF(p)$} \State{\label{alg:inverse:mulbyGFp}$t_1 \gets t_1 t_2$}\Comment{$t_1 = x^{r-1} / x^r = 1/x$} \State{\Return $t_1$} \end{algorithmic} \end{algorithm} In algorithm~\ref{alg:inverse}, the following remarks apply: \begin{itemize} \item Four of the field multiplications are between $u$ and $\Phi_j(u)$ for some element $u$. The Frobenius operator and the multiplication can be combined to avoid some write operations that are then read again immediately. In our implementation, this saves about 36~cycles each time, i.e. 144~cycles over the complete inversion. \item The multiplication in step~\ref{alg:inverse:multoGFp} is fast because the result is known to be an element of $\GF(p)$; thus, only one polynomial coefficient needs to be computed. \item Inversion of $t_2$ in $\GF(p)$ (step~\ref{alg:inverse:invGFp}) can be done with Fermat's little theorem, i.e. by raising the input to the power $p-2$. With $p = 9767$, this is a matter of only 17 Montgomery multiplications. In our implementation, this step costs only 107 cycles. \item Multiplication by $t_2$ in step~\ref{alg:inverse:mulbyGFp} is a simple coefficient-wise multiplication, thus much more efficient than a normal multiplication. \end{itemize} Our implementation achieves the costs listed in table~\ref{tab:fieldinv}, for a total of $9\,508$ cycles. This is $6.04$ times the cost of a single multiplication.
Since the algorithm itself involves 5 generic multiplications, this means that the 6 Frobenius operators, the specialized multiplication that yields $x^r$, the inversion in $\GF(p)$, and the final multiplication by $x^{-r}$, collectively cost about the same as a 6th generic multiplication. \begin{table}[H] \begin{center} \begin{tabular}{|l|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} \\ \hline function prologue & 7 \\ combined Frobenius and multiplication (step \ref{alg:inverse:mulfrob1}) & $1\,762$ \\ combined Frobenius and multiplication (step \ref{alg:inverse:mulfrob2}) & $1\,763$ \\ combined Frobenius and multiplication (step \ref{alg:inverse:mulfrob3}) & $1\,763$ \\ Frobenius and multiplication (step \ref{alg:inverse:mulfrob4}) & $1\,798$ \\ combined Frobenius and multiplication (step \ref{alg:inverse:mulfrob5}) & $1\,763$ \\ Frobenius (step \ref{alg:inverse:frob6}) & 217 \\ $x^r$ & 130 \\ $x^{-r}$ & 110 \\ $x^{r-1} x^{-r}$ & 190 \\ function exit & 5 \\ \hline \textsf{\textbf{Total}} & $9\,508$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:fieldinv}Field inversion cost.} \end{table} \subsection{Square Root}\label{sec:field-ops-sqrt} Using techniques similar to the fast inversion algorithm described in section~\ref{sec:field-ops-inv}, we can obtain a fast square root extraction algorithm, and an even faster quadratic residue test. Our field has cardinal $q = 9767^{19}$. Since $q = 3\bmod 4$, square roots of an element $x \in \GF(q)$ are obtained as: \begin{equation*} \sqrt{x} = \pm x^{(q+1)/4} \end{equation*} If $x$ is not a quadratic residue, then $-x$ is a quadratic residue, and this modular exponentiation returns a square root of $-x$. The exponent can be written as: \begin{equation*} \frac{p^{19}+1}{4} = (2e - r) \frac{p+1}{4} \end{equation*} where: \begin{equation*} \begin{array}{rclcl} d &=& & & 1 + p^2 + p^4 + \dots + p^{14} + p^{16} \\ e &=& 1 + dp^2 &=& 1 + p^2 + p^4 + \dots + p^{14} + p^{16} + p^{18} \\ f &=& pd &=& p + p^3 + p^5 + \dots + p^{15} + p^{17} \\ r &=& e + f &=& 1 + p + p^2 + p^3 + \dots + p^{17} + p^{18} \\ \end{array} \end{equation*} This allows performing the square root computations as: \begin{equation*} \sqrt{x} = \pm \left( \frac{(x^e)^2}{x^r} \right)^{(p+1)/4} \end{equation*} As in the case of the inversion algorithm, $x^r$ is an element of the sub-field $\GF(p)$, hence inexpensive to invert; and $x^e$ can be computed with a few multiplications and applications of Frobenius operators. The final exponentiation, with exponent $(p+1)/4$, can be done with 10 squarings and 4 multiplications. The algorithm can also return whether the operand is a quadratic residue. Indeed, $x\neq 0$ is a quadratic residue if and only if $x^{(q-1)/2} = 1$. This exponent can be written as: \begin{equation*} \frac{p^{19}-1}{2} = r \frac{p-1}{2} \end{equation*} Therefore, $x^{(q-1)/2} = (x^r)^{(p-1)/2}$. In other words, $x$ is a quadratic residue in $\GF(q)$ if and only if $x^r$ is a quadratic residue in $\GF(p)$ (this also applies for $x = 0$). Since we compute $x^r$ as part of the algorithm, we can also check whether it is a quadratic residue in fewer than 100 extra cycles. Moreover, if we are \emph{only} interested in whether $x$ is a quadratic residue, we can stop there and avoid the final exponentiation to the power $(p+1)/4$. The exact process is described in algorithm~\ref{alg:sqrt}. The operation costs are detailed in table~\ref{tab:fieldsqrt}, for a total cost of $26\,962$~cycles.
If the square root is not requested, only the quadratic residue status, then that status is obtained in $9\,341$ cycles. \begin{algorithm}[H] \caption{\ \ Fast square root in $\GF(9767^{19})$}\label{alg:sqrt} \begin{algorithmic}[1] \Require{$x \in \GF(p^n)$, $p = 9767$, $n = 19$} \Ensure{QR status of $x$; $\sqrt{x}$ if QR, $\sqrt{-x}$ otherwise} \State{\label{alg:sqrt:mulfrob1}$t_1 \gets x\cdot\Phi_2(x)$}\Comment{$t_1 = x^{1+p^2}$} \State{\label{alg:sqrt:mulfrob2}$t_1 \gets t_1\cdot\Phi_4(t_1)$}\Comment{$t_1 = x^{1+p^2+p^4+p^6}$} \State{\label{alg:sqrt:mulfrob3}$t_1 \gets t_1\cdot\Phi_8(t_1)$}\Comment{$t_1 = x^{1+p^2+p^4+\dots+p^{14}}$} \State{\label{alg:sqrt:mulfrob4}$t_1 \gets x\cdot\Phi_2(t_1)$}\Comment{$t_1 = x^{1+p^2+p^4+\dots+p^{16}}$} \State{\label{alg:sqrt:frob5}$t_2 \gets \Phi_1(t_1)$}\Comment{$t_2 = x^{p+p^3+p^5+\dots+p^{17}} = x^f$} \State{\label{alg:sqrt:mulfrob6}$t_1 \gets x\cdot\Phi_1(t_2)$}\Comment{$t_1 = x^{1+p^2+p^4+\dots+p^{18}} = x^e$} \State{\label{alg:sqrt:multoGFp}$t_3 \gets t_1 t_2$}\Comment{$t_3 = x^r \in \GF(p)$} \State{\label{alg:sqrt:testQR}$t_4 \gets t_3^{(p-1)/2}$}\Comment{$t_4 = 0$, $1$ or $-1$} \If{only QR status is requested} \Return $(t_4 \neq -1)$ \EndIf \State{\label{alg:sqrt:invGFp}$t_3 \gets t_3^{p-2}$}\Comment{Inversion in $\GF(p)$} \State{\label{alg:sqrt:sqr}$t_1 \gets t_1^2$}\Comment{$t_1 = x^{2e}$} \State{\label{alg:sqrt:mulbyGFp}$t_1 \gets t_1 t_3$}\Comment{$t_1 = x^{2e} / x^r$} \State{\label{alg:sqrt:modpow}$t_1 \gets t_1^{(p+1)/4}$}\Comment{$t_1 = \sqrt{x}$ or $\sqrt{-x}$} \State{\Return $(t_4 \neq -1)$ and $t_1$} \end{algorithmic} \end{algorithm} \begin{table}[H] \begin{center} \begin{tabular}{|l|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} \\ \hline function prologue & 7 \\ combined Frobenius and multiplication (step \ref{alg:sqrt:mulfrob1}) & $1\,762$ \\ combined Frobenius and multiplication (step \ref{alg:sqrt:mulfrob2}) & $1\,763$ \\ combined Frobenius and multiplication (step \ref{alg:sqrt:mulfrob3}) & $1\,763$ \\ Frobenius and multiplication (step \ref{alg:sqrt:mulfrob4}) & $1\,798$ \\ Frobenius (step \ref{alg:sqrt:frob5}) & 217 \\ Frobenius and multiplication (step \ref{alg:sqrt:mulfrob6}) & $1\,798$ \\ $x^r$ & 129 \\ QR status & 100 \\ exit if only QR status requested & 4 \\ $x^{-r}$ & 113 \\ $(x^e)^2$ & 999 \\ $(x^e)^2 x^{-r}$ & 195 \\ raise to power $(p+1)/4$ & $16\,311$ \\ function exit & 7 \\ \hline \textsf{\textbf{Total}} & $26\,962$ \\ (if only QR status requested) & $9\,341$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:fieldsqrt}Field square root cost.} \end{table} \subsection{Cube Root}\label{sec:field-ops-cubert} Since $q = 9767^{19} = 2 \bmod 3$, every element in $\GF(q)$ has a unique cube root, which is obtained with a modular exponentiation: \begin{equation*} \sqrt[3]{x} = x^{(2q-1)/3} \end{equation*} As in the case of square roots, this exponentiation can be greatly optimized with the Frobenius operator. The exponent is rewritten as: \begin{equation*} \frac{2p^{19}-1}{3} = e \frac{2p-1}{3} + f \frac{p-2}{3} \end{equation*} with $e$ and $f$ defined as in section~\ref{sec:field-ops-sqrt}. We then compute the cube root as: \begin{eqnarray*} \sqrt[3]{x} &=& (x^e)^{(2p-1)/3} (x^f)^{(p-2)/3} \\ &=& x^e (x^{2e+f})^{(p-2)/3} \\ \end{eqnarray*} This yields algorithm~\ref{alg:cubert} with costs detailed in table~\ref{tab:fieldcubert} and a total cost of $31\,163$ cycles. 
\begin{algorithm}[H] \caption{\ \ Fast cube root in $\GF(9767^{19})$}\label{alg:cubert} \begin{algorithmic}[1] \Require{$x \in \GF(p^n)$, $p = 9767$, $n = 19$} \Ensure{$\sqrt[3]{x}$} \State{\label{alg:cubert:mulfrob1}$t_1 \gets x\cdot\Phi_2(x)$}\Comment{$t_1 = x^{1+p^2}$} \State{\label{alg:cubert:mulfrob2}$t_1 \gets t_1\cdot\Phi_4(t_1)$}\Comment{$t_1 = x^{1+p^2+p^4+p^6}$} \State{\label{alg:cubert:mulfrob3}$t_1 \gets t_1\cdot\Phi_8(t_1)$}\Comment{$t_1 = x^{1+p^2+p^4+\dots+p^{14}}$} \State{\label{alg:cubert:mulfrob4}$t_1 \gets x\cdot\Phi_2(t_1)$}\Comment{$t_1 = x^{1+p^2+p^4+\dots+p^{16}}$} \State{\label{alg:cubert:frob5}$t_2 \gets \Phi_1(t_1)$}\Comment{$t_2 = x^{p+p^3+p^5+\dots+p^{17}} = x^f$} \State{\label{alg:cubert:mulfrob6}$t_1 \gets x\cdot\Phi_1(t_2)$}\Comment{$t_1 = x^{1+p^2+p^4+\dots+p^{18}} = x^e$} \State{\label{alg:cubert:sqrmul}$t_2 \gets t_1^2 t_2$}\Comment{$t_2 = x^{2e+f}$} \State{\label{alg:cubert:modpow}$t_2 \gets t_2^{(p-2)/3}$}\Comment{$t_2 = (x^{2e+f})^{(p-2)/3}$} \State{\label{alg:cubert:lastmul}$t_1 \gets t_1 t_2$}\Comment{$t_1 = \sqrt[3]{x}$} \State{\Return $t_1$} \end{algorithmic} \end{algorithm} \begin{table}[H] \begin{center} \begin{tabular}{|l|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} \\ \hline function prologue & 7 \\ combined Frobenius and multiplication (step \ref{alg:cubert:mulfrob1}) & $1\,762$ \\ combined Frobenius and multiplication (step \ref{alg:cubert:mulfrob2}) & $1\,763$ \\ combined Frobenius and multiplication (step \ref{alg:cubert:mulfrob3}) & $1\,763$ \\ Frobenius and multiplication (step \ref{alg:cubert:mulfrob4}) & $1\,798$ \\ Frobenius (step \ref{alg:cubert:frob5}) & 217 \\ Frobenius and multiplication (step \ref{alg:cubert:mulfrob6}) & $1\,798$ \\ $x^{2e+f}$ & $2\,579$ \\ raise to power $(p-2)/3$ & $17\,890$ \\ final multiplication (step \ref{alg:cubert:lastmul}) & $1\,581$ \\ function exit & 5 \\ \hline \textsf{\textbf{Total}} & $31\,163$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:fieldcubert}Field cube root cost.} \end{table} Other root computations can benefit from such optimizations. In general, every exponentiation in $\GF(p^n)$ can be optimized by representing the exponent in base $p$, and splitting the exponentiation into $n$ exponentiations with short exponents (less than $p$) over the $\Phi_j(x)$ and whose results are multiplied together; with classic square-and-multiply algorithms, this allows mutualizing all the squaring operations. When the exponentiation is for a $k$-th root with $k$ small, the various small exponents tend to have common parts, leading to further optimizations, as we just saw for inversions, square roots and cube roots. \section{Curve9767}\label{sec:curve9767} Using the finite field $\GF(9767^{19})$ we defined in section~\ref{sec:find-field} and for which efficient operations were described in section~\ref{sec:field-ops}, we now proceed to define an elliptic curve over this field. Following the current fashion of naming curves with the concatenation of the term ``Curve'' and a sequence of seemingly random digits, we call our new curve ``Curve9767'', the digits being, of course, the decimal representation of the base modulus $p$. \subsection{Choosing The Curve} The choice of the exact curve to use was the result of a process taking into account known criteria for security, and with as few arbitrary choices as possible. We describe the steps here. 
In all of the following, we keep using the notations used previously: the base field modulus is $p = 9767$, and the field cardinal is $q = p^{19}$. \paragraph{Curve Equation.} Since we want a prime-order curve, we cannot use Montgomery or Edwards curves (which always have an even order). Instead, we concentrate on the short Weierstraß equation $y^2 = x^3 + ax + b$. The choice is then really about the two constants $a$ and $b$, which are elements of $\GF(q)$. We cannot choose $a = 0$: since the field cardinal $q = 2 \bmod 3$, this would lead to a curve with exactly $q+1$ elements, which is an even number. Moreover, it would be supersingular, implying in particular a quadratic embedding degree and consequently severe weakness against pairing-based attacks such as MOV\cite{MenOkaVan1993} and FR\cite{FreRuc1994,FreMulRuc1999}. Similarly, we cannot choose $b = 0$: since $q = 3 \bmod 4$, this would again yield a supersingular curve with $q+1$ elements. There are known curve point addition formulas that can leverage the specific choice $a = -3$ for slightly better performance in some cases, e.g. for point doubling with Jacobian coor\-dinates\cite{EFD}\footnote{In our specific case, even when using Jacobian coordinates, these formulas don't actually lead to better performance on the ARM Cortex-M0+ because field squarings are substantially more efficient than multiplications, making 1M+8S generic doubling formulas a better choice than specialized 3M+5S. However, this might not be true of other target architectures, and we'd like to keep our implementation options as open as possible.}. Moreover, for any non-zero $u$, the mapping $(x,y) \mapsto (xu^{-2},yu^{-3})$ is an isomorphism between curve $y^2 = x^3+ax+b$ and curve $y^2 = x^3+au^4x+bu^6$, which implies that we can choose the constant $a$ mostly arbitrarily: about half of all possible curves can be transformed efficiently through such an isomorphism into a curve whose equation has $a = -3$. In all generality, if $a$ is fixed, then $b$ should be chosen pseudorandomly, if we want to claim that a large fraction of possible curves could have been chosen. However, there is no known weakness induced by any specific choice of $b$; we can set it to a low Hamming weight value $b_i z^i$ for some integer $i \in [1..18]$ (as explained above, we need $b \notin \GF(p)$, hence $i \neq 0$). This should not be a controversial optimization, since it is commonly done for other curves. For instance, a similar optimization was done in the choice of Curve25519: the curve equation is $By^2 = x^3 + Ax^2 + x$, but the constant $B$ is fixed to $1$ (which does not unduly shrink the space of possible curves, thanks to isomorphisms) \emph{and} the constant $A$ is chosen to be a small integer so as to promote performance\cite{Ber2006}. \paragraph{Twist Security.} The concept of ``twist security'' is introduced in \cite{Ber2006}, in the context of a specialized point multiplication routine for Curve25519, based on a Montgomery ladder, in which only the $x$ coordinates of points are used. For any $x$ such that $Q = (x,y)$ is a curve point, there is exactly one other point $-Q = (x,-y)$ with the same $x$ coordinate; therefore, the $x$ coordinate is sufficient to represent $Q$, up to the sign of $y$. Since usual Diffie-Hellman uses only the $x$ coordinate of the result as shared secret, this is sufficient for some cryptographic applications: public points $Q$ and $-Q$ lead to the same output, and thus the sign of $y$ is not used. 
This allows efficient implementations, because the $y$ coordinate need not be transmitted, and the ladder computes the $x$ coordinates only. However, a consequence is that since $y$ is not available, there is no easy (cheap) way to validate incoming points, i.e. that a received $x$ really corresponds to a point on the curve. Analysis shows that any field element is either the $x$ coordinate of a point on the intended curve $E$, or the $x$ coordinate of a point on the \emph{quadratic twist}, i.e. another curve $E'$ such that $E$ and $E'$ become the same curve in a quadratic field extension $\GF(q^2)$. If the order of $E'$ admits small prime factors, then this would allow invalid curve attacks\cite{BieMeyMul2000}, in which the attacker sends invalid points (i.e. points on the twisted curve) of small order, which makes Diffie-Hellman easier to break in that case and leaks partial information on the victim's private key. \emph{Twist security} is about choosing the curve $E$ such that both $E$ and $E'$ have prime order (or close to prime order, with very small cofactors): in a nutshell, it does not matter whether computations are done on $E$ or $E'$, as long as both are ``safe''. In our case, twist security is not really required; even when using an $x$-only implementation, input point validation is inexpensive thanks to the efficient quadratic residue test (see section~\ref{sec:field-ops-sqrt}): for a given $x$, we can compute $x^3+ax+b$; $x$ is valid if and only if that quantity is a quadratic residue. However, ensuring twist security is ``free'': it is only an extra criterion in the curve selection, thus with no runtime cost, as long as requiring twist security does not prevent us from using a particularly efficient curve constant. We will then try to obtain twist security in our curve choice. When using the short Weierstraß equation $y^2 = x^3+ax+b$ on a field $\GF(q)$ where $q = 3 \bmod 4$, the quadratic twist has equation $-y^2 = x^3+ax+b$, which is isomorphic to the curve of equation $y^2 = x^3+ax-b$ (with the isomorphism $(x,y) \mapsto (-x,y)$). Therefore, if $a$ is fixed to a given value (e.g. $-3$) and we look for $b$ with a minimal Hamming weight, then the twisted curve will use the same $a$, and a constant $-b$ with the same minimal Hamming weight. When enumerating possible curves, we will naturally cover the twisted curves at the same time. In that sense, when $E$ and $E'$ both have prime order, we are free to choose either as our curve. In that case, we will use the one with the smallest order: in a Diffie-Hellman context, when receiving point $Q$, the defender computes $sQ$ for a secret non-zero scalar $s$ modulo the order of $E$; choosing $E$ to be smaller than its twist $E'$ ensures that the computation $sQ$ does not yield the point at infinity $\neutral$, either as a result or an intermediate value, even if $Q$ is really a point on $E'$ instead of $E$. \paragraph{Curve Parameters.} Given the criteria explained above, we enumerated all curves $y^2 = x^3+ax+b$ over $\GF(9767^{19})$, with $a = -3$ (i.e. $p-3$, an element of $\GF(p)$) and $b = b_i z^i$ for $b_i \in \GF(p)$, $b_i \neq 0$, and $1\leq i\leq 18$, looking for curves with prime order. Application of the Frobenius operator $\Phi_j$ maps curve $y^2 = x^3 - 3x + b_i z^i$ to curve $y^2 = x^3 - 3x + 2^{ij(p-1)/19} b_i z^i$, which is also in the set of evaluated curves. Therefore, each considered curve is really a set of 19 isomorphic curves.
We can thus restrict ourselves to only one curve in each set, which speeds up the search by a factor of 19. We (arbitrarily) choose the representative as the one with the smallest value $b_i$ when expressed as an integer in the $0..p-1$ range. Up to Frobenius isomorphism, there are $18(p-1)/19 = 9252$ curves to consider. Using PARI/GP\cite{PARIGP}, along with the optional \verb+seadata-small+ package (to speed up point counting), we found that exactly 23 of them have a prime order\footnote{The search script is provided with the Curve9767 source code. Enumeration took 1 hour and 40 minutes on a 3.1 GHz x86 server, using a single core.}. As luck would have it, exactly two of them are twists of each other; as per our criteria, we then choose as Curve9767 the one with the smallest order, corresponding to $b = 2048z^9$ (the set of 19 isomorphic curves for the twisted curve then corresponds to $b = 359z^9$). A conventional generator should be selected for the curve. Since the curve has prime order, any point (other than the point at infinity) generates the whole curve; moreover, the ability to solve discrete logarithm relatively to a specific generator is equivalent to the ability to solve it relatively to any other generator. We can thus choose any generator we want. Usually, the choice won't have any impact on performance, but one can imagine some edge cases where coordinates with low Hamming weight are preferable. The value with the lowest Hamming weight is zero. There is no point on Curve9767 with coordinate $y = 0$ (since this would be a point of order 2), but there are two points with $x = 0$: these are the two points $(0,\pm\sqrt{b})$. Both have a $y$ coordinate with Hamming weight 1. As in the case of $b$ within its Frobenius isomorphism class, we arbitrarily choose the point whose $y$ coordinate is the lowest when expressed as an integer in the $0$..$p-1$ range. The resulting Curve9767 parameters are summarized in table~\ref{tab:curveparams}. \begin{table}[H] \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|} \hline Field & $\GF(9767)[z]/(z^{19}-2)$ \\ \hline Field order & $9767^{19}$ \\ \hline Equation & $y^2 = x^3 - 3x + 2048z^9$ \\ \hline Order & {\fontsize{8.5}{11}\selectfont 6389436622109970582043832278503799542449455630003248488928817956373993578097} \\ \hline Generator & $G = (0, 32z^{14})$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:curveparams}Curve9767 definition parameters.} \end{table} \paragraph{Embedding Degree.} For an elliptic curve defined over a finite field $\GF(q)$, and a prime $r$ that divides the curve order, such that $r$ is not the field characteristic and $r^2$ does not divide the curve order, the curve contains $r$ points of order dividing $r$. The \emph{embedding degree} is the minimum integer $k > 0$ such that the same curve over the extension field $\GF(q^k)$ contains $r^2$ points of order dividing $r$. It has been shown by Balasubramanian and Koblitz\cite{BalKob1998} that $k$ is the smallest positive integer such that $r$ divides $q^k-1$; in other words, $k$ is the multiplicative order of $q$ modulo $r$. $k$ is always a divisor of $r-1$. Pairing-based attacks like MOV\cite{MenOkaVan1993} and FR\cite{FreRuc1994,FreMulRuc1999} rely on transferring the elliptic curve discrete logarithm problem into the discrete logarithm problem in the multiplicative subgroup of $\GF(q^k)$.
Therefore, these attacks are possible only if $k$ is small enough that the best known sub-exponential algorithms for the latter problem are faster than generic attacks on the elliptic curve. For an ordinary curve which has not been chosen to be especially pairing-friendly, we expect $k$ to be a very large integer. Various bodies have issued recommendations that insist on $k$ being larger than a given threshold; for instance, ANSI X9.62:2005\cite{X962} requires $k \geq 100$, while SafeCurves\cite{SafeCurves} goes much further (in their words, the ``overkill approach'') and requires $k \geq (r-1)/100$. In the case of Curve9767, the curve order itself is prime, hence the only possible value for $r$ is the curve order. The embedding degree then happens to be $k = r-1 \approx 2^{251.82}$, i.e. the maximum possible value. This makes Curve9767 as immune to MOV and FR attacks as it is possible for an elliptic curve to be. \paragraph{Complex Multiplication Discriminant.} For an elliptic curve defined over a finite field $\GF(q)$ and with order $r$, the \emph{trace} is the value $t = q + 1 - r$. By Hasse's theorem, $|t| \leq 2\sqrt{q}$; thus, $t^2-4q$ is a negative integer. Write that quantity as $DV^2$, where $D$ is a square-free negative integer, and $V$ is a positive integer. The value $D$ is the \emph{complex multiplication field discriminant}\footnote{Strictly speaking, when $D \neq 1\bmod 4$, the actual discriminant is defined to be $4D$. But this cannot happen for an odd-order curve over an odd-characteristic field, because then $t$ must be odd, implying that $D = 1\bmod 4$.}. When $|D|$ is very small, the curve may admit low-degree (i.e. efficiently computable) endomorphisms that can be used to speed up point multiplications\cite{GalLamVan2001,LonSic2014}. This has been used in curves specially designed to that effect, e.g. secp256k1\cite{SEC2} and Four$\mathbb{Q}$\cite{CraLon2015}. However, when a curve has not been specifically chosen for a small discriminant, it is expected that the value of $|D|$ is large. Curves with a small discriminant are certainly not broken, but an \emph{unexpected} small discriminant would be indicative of some unaccounted-for underlying structure, which would be suspicious. In Curve9767, the $t^2-4q$ quantity is already square-free (i.e. $V = 1$), leading to a very large discriminant $D \approx -2^{253.82}$, as is expected of most ordinary curves. \subsection{Point Representation} In our implementation, a point $Q$ on the curve is the combination of three elements $(x,y,N)$: \begin{itemize} \item $x$ and $y$ are the affine coordinates of $Q$; they are elements of $\GF(q)$, in the representation used in section~\ref{sec:field-ops-repr} (40 bytes each, including the dummy slot for 32-bit alignment). \item $N$ is the ``neutral flag'': an integer with value $1$ (if $Q = \neutral$) or $0$ (if $Q \neq \neutral$). \end{itemize} We encode $N$ over a 32-bit field, again for alignment purposes. When $N = 1$, the contents of $x$ and $y$ are unspecified; since we use the exact-width type \verb+uint16_t+, access does not lead to ``undefined behavior'' in the C standard sense, even if not explicitly set in the code\footnote{Exact-width types are not allowed to have any padding bits or trap representations, therefore they always have a readable value, even if it is not specified.}, but these values are ultimately ignored since the point at infinity does not have coordinates.
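As an illustration, the in-memory layout can be sketched with the following hypothetical C declaration (the field and type names are ours, not necessarily those of the reference code):
\begin{verbatim}
#include <stdint.h>

typedef struct {
    uint16_t x[20];     /* affine x coordinate, x[19] is alignment padding */
    uint16_t y[20];     /* affine y coordinate, y[19] is alignment padding */
    uint32_t neutral;   /* 1 if the point is the point at infinity, else 0 */
} curve9767_point_sketch;  /* 84 bytes in total */
\end{verbatim}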
A consequence is that $\neutral$ has multiple representations, while all other points have a unique in-memory representation. \subsection{Point Addition} We implement point addition by applying the affine equations, as shown in section~\ref{sec:intro}. We want a \emph{complete}, \emph{constant-time} routine, i.e. one that works on all combinations of inputs, with an execution time and memory access pattern independent of input values (see section~\ref{sec:impl:sidechannels} for details). This is achieved with the process described in algorithm~\ref{alg:pointadd}. \begin{algorithm}[H] \caption{\ \ Point addition for Curve9767}\label{alg:pointadd} \begin{algorithmic}[1] \Require{$Q_1 = (x_1, y_1, N_1)$ and $Q_2 = (x_2, y_2, N_2)$ points on Curve9767} \Ensure{$Q_3 = Q_1 + Q_2$} \State{$e_x \gets \text{\textsf{EQ}}(x_1, x_2)$}\Comment{$e_x = 1$ if $x_1 = x_2$, $0$ otherwise} \State{$e_y \gets \text{\textsf{EQ}}(y_1, y_2)$}\Comment{$e_y = 1$ if $y_1 = y_2$, $0$ otherwise} \State{$t_1 \gets x_2 - x_1$} \State{$t_3 \gets 2y_1$} \State{$\text{\textsf{CONDCOPY}}(\&t_1, t_3, e_x)$}\Comment{$t_1$ is the denominator of $\lambda$} \State{$t_2 \gets y_2 - y_1$} \State{$t_3 \gets 3x_1^2 - 3$} \State{$\text{\textsf{CONDCOPY}}(\&t_2, t_3, e_x)$}\Comment{$t_2$ is the numerator of $\lambda$} \State{$t_1 \gets t_2 / t_1$}\Comment{$t_1 = \lambda$} \State{$x_3 \gets \lambda^2 - x_1 - x_2$} \State{$y_3 \gets \lambda(x_1 - x_3) - y_1$} \State{\label{alg:pointadd:ccx1}$\text{\textsf{CONDCOPY}}(\&x_3, x_2, N_1)$} \State{\label{alg:pointadd:ccx2}$\text{\textsf{CONDCOPY}}(\&x_3, x_1, N_2)$} \State{\label{alg:pointadd:ccy1}$\text{\textsf{CONDCOPY}}(\&y_3, y_2, N_1)$} \State{\label{alg:pointadd:ccy2}$\text{\textsf{CONDCOPY}}(\&y_3, y_1, N_2)$} \State{$N_3 \gets (N_1 N_2) + (1 - N_1)(1 - N_2) e_x (1 - e_y)$} \State{\Return $Q_3 = (x_3, y_3, N_3)$} \end{algorithmic} \end{algorithm} In algorithm~\ref{alg:pointadd}, two helper functions are used: \begin{itemize} \item $\text{\textsf{EQ}}(u,v)$ returns $1$ if $u = v$, $0$ if $u\neq v$. \item $\text{\textsf{CONDCOPY}}(\&u,v,F)$ overwrites $u$ with $v$ if $F = 1$, but leaves $u$ unmodified if $F = 0$. \end{itemize} Both functions are implemented with constant-time code: for instance, in $\text{\textsf{CONDCOPY}}$, all words of $u$ and $v$ are read, and all words of $u$ written to, regardless of whether $F$ is $0$ or $1$. This description is formal; in the actual implementation, some operations are combined to lower memory traffic. Typically, the conditional copies in step~\ref{alg:pointadd:ccx1} and~\ref{alg:pointadd:ccx2} are done in a single loop; similarly for steps~\ref{alg:pointadd:ccy1} and~\ref{alg:pointadd:ccy2}. We can see that the algorithm implements all edge cases properly: \begin{itemize} \item If $Q_1 = \neutral$ and $Q_2 = \neutral$, then $N_1 = 1$ and $N_2 = 1$, leading to $N_3 = 1$, i.e. $Q_1 + Q_2 = \neutral$. \item If $Q_1 = \neutral$ and $Q_2 \neq \neutral$, then $(x_3,y_3)$ is set to $(x_2,y_2)$ in steps~\ref{alg:pointadd:ccx1} and~\ref{alg:pointadd:ccy1}, but not modified in steps~\ref{alg:pointadd:ccx2} and~\ref{alg:pointadd:ccy2}. Also, $N_3$ is set to $0$. The result is thus point $Q_2$, as expected. \item If $Q_1 \neq \neutral$ and $Q_2 = \neutral$, then $(x_3,y_3)$ is set to $(x_1,y_1)$ in steps~\ref{alg:pointadd:ccx2} and~\ref{alg:pointadd:ccy2}, but not modified in steps~\ref{alg:pointadd:ccx1} and~\ref{alg:pointadd:ccy1}. Also, $N_3$ is set to $0$. The result is thus point $Q_1$, as expected. 
\item Otherwise, $Q_1 \neq \neutral$ and $Q_2 \neq \neutral$, i.e. $N_1 = 0$ and $N_2 = 0$. The following sub-cases may happen: \begin{itemize} \item If $Q_1 = Q_2$ then $e_x = 1$ and $e_y = 1$. The numerator and denominator of $\lambda$ are computed to be $3x_1^2+a$ and $2y_1$, respectively, as befits a point doubling operation. The result neutral flag $N_3$ is properly set to $0$: there is no point of order 2 on the curve, thus the result cannot be the point at infinity. \item If $Q_1 = -Q_2$, then $e_x = 1$ and $e_y = 0$; this leads to $N_3 = 1$ in the final step, i.e. the point at infinity is properly returned. \item Otherwise, $Q_1 \neq Q_2$ and $Q_1 \neq -Q_2$; this implies that $x_1 \neq x_2$, hence $e_x = 0$. The numerator and denominator of $\lambda$ are set to $y_2 - y_1$ and $x_2 - x_1$, respectively, in application of the generic point addition formula. $N_3$ is set to $0$: the result cannot be the point at infinity. \end{itemize} \end{itemize} Our optimized implementation for ARM Cortex-M0+ computes a point addition in a total of $16\,331$ cycles, i.e. about $10.4$ times the cost of a field multiplication. This cost is detailed in table~\ref{tab:pointadd}. We may notice that since the process involves one inversion ($9\,508$~cycles), two multiplications ($1\,574$~cycles each) and two squarings ($994$~cycles each), the overhead for all ``linear'' operations (subtractions, conditional copies, \ldots) is $1\,687$~cycles, i.e. about 10.3\% of the total. This function furthermore follows the C ABI: it saves and restores registers properly and thus can be called from external application code. \begin{table}[H] \begin{center} \begin{tabular}{|l|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} \\ \hline function prologue & 20 \\ $e_x$ and $e_y$ & 116 \\ denominator of $\lambda$ & 285 \\ $x_1^2$ & 999 \\ numerator of $\lambda$ & 300 \\ division $t_2 / t_1$ (inversion + multiplication) & $11\,093$ \\ $x_3$ & $1\,253$ \\ $y_3$ & $1\,958$ \\ conditional copy of $(x_1,y_1)$ or $(x_2,y_2)$ & 269 \\ $N_3$ & 22 \\ function exit & 16 \\ \hline \textsf{\textbf{Total}} & $16\,331$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:pointadd}Point addition cost.} \end{table} \paragraph{Completeness.} In the context of elliptic curves, \emph{complete formulas} are formulas that work in all cases, including edge cases such as adding a point to itself, or to the point at infinity. In practice, applications that use elliptic curves in various cryptographic protocols need \emph{complete routines} that can add points without edge cases (that could lead to incorrect results) and without timing variations when an internally handled edge case is encountered. Complete formulas are a means through which a complete routine can be obtained. Here, we implemented a complete routine which is \emph{not} based on complete formulas, but is still efficient. Notably, making the point addition function complete does not require difficult trade-offs with regard to performance. An incomplete addition routine that cannot handle doublings (when adding a point to itself) would save about $1\,500$~cycles (it would avoid computing $3x_1^2+a$ and $2y_1$, as well as some $\text{\textsf{CONDCOPY}}$ calls), less than 10\% of the point addition cost. We argue that obtaining a complete routine whose efficiency is close to that of any potential incomplete routine is sufficient for security in all generality.
A developer who is intent on reimplementing curve operations in an incomplete or non-timing-resistant way will be able to do so, but will also succeed in ruining the best complete formulas. In that sense, complete formulas are a nice but not strictly required mechanism for achieving complete constant-time routines, and do not in themselves provide absolute protection against implementation mishaps. \subsection{Repeated Doublings}\label{sec:curve9767:doublings} In order to speed up point multiplication, we implemented an optimized function for multiple point doublings. That function takes as input parameters a point $Q_1 = (x_1,y_1)$ and an integer $k \geq 0$; it returns the point $2^k Q_1$. The parameter $k$ is not secret. The two special cases $k = 0$ and $k = 1$ are first handled, by copying the input into the output (66~cycles) or tail-calling the generic point addition routine ($16\,337$~cycles), respectively. When $k\geq 2$, the doublings are performed by using Jacobian coordinates. This is only an \emph{internal} use: the result is converted to affine coordinates after the $k$-th doubling. It should be noted that point doublings are ``safe'' in Curve9767, because its order is odd: if the input point $Q_1$ is the point at infinity, then $2^k Q_1$ is the point at infinity, but if $Q_1$ is not the point at infinity, then none of the successive $2^j Q_1$ values is the point at infinity. Therefore, the only edge case to cover is $Q_1 = \neutral$, and it is handled in a very simple way: the ``neutral flag'' $N_1$ is simply copied to the result. For point $(x,y)$, the Jacobian coordinates $(X{:}Y{:}Z)$ are such that $x = X/Z^2$ and $y = Y/Z^3$. Since the input point is in affine coordinates, we can optimize the first two doublings. Following an idea of~\cite{LeNgu2012}, we can implement the first doubling in only four squarings, and some linear operations; if $Q_2 = (X_2{:}Y_2{:}Z_2) = 2Q_1$, then the following holds: \begin{eqnarray*} X_2 &=& x_1^4 - 2 a x_1^2 + a^2 - 8 b x_1 \\ Y_2 &=& y_1^4 + 18 b y_1^2 + 3 a x_1^4 - 6 a^2 x_1^2 - 24 a b x_1 - 27 b^2 - a^3 \\ Z_2 &=& 2 y_1 \\ \end{eqnarray*} Thanks to our choice of curve constants $a = -3$ and $b = 2048z^9$ with very low Hamming weight, multiplications by $a$ and by $b$ are inexpensive. Remaining doublings use the 1M+8S formulas from~\cite{EFD}, which are valid for all short Weierstraß curves in Jacobian coordinates (we do not use the 3M+5S or 4M+4S formulas that leverage $a = -3$, since that does not yield any performance benefit on the ARM Cortex-M0+, thanks to the high speed of squarings relatively to multiplications). We recall these formulas here, for the doubling of point $(X{:}Y{:}Z)$ into point $(X'{:}Y'{:}Z')$: \begin{eqnarray*} T_1 &=& X^2 \\ T_2 &=& Y^2 \\ T_3 &=& T_2^2 \\ T_4 &=& Z^2 \\ T_5 &=& 2((X+T_2)^2 - T_1 - T_3) \\ T_6 &=& 3T_1 + aT_4^2 \\ X' &=& T_6^2 - 2T_5 \\ Y' &=& T_6(T_5 - X') - 8T_3 \\ Z' &=& (Y+Z)^2 - T_2 - T_4 \\ \end{eqnarray*} Note that the first doubling sets $Z = 2y_1$; therefore, the computations of $T_4 = Z^2$ and $T_4^2$ (as part of the computation of $T_6$) really compute $4y_1^2$ and $16y_1^4$. Since $y_1^2$ and $y_1^4$ were already computed as part of the first doubling, we can save two squarings in the second doubling. The total function cost for $k\geq 2$ is $7\,584 + 11\,392k$; this includes the cost of converting back the result to affine coordinates. Table~\ref{tab:pointmul2k} details the cost items. For $k = 4$, this means a cost of $53\,152$ cycles, i.e.
about 81.3\% of the $65\,348$ cycles that would have been used to call the generic point addition routine four times (this optimization saves $756\,152$~cycles from a complete point multiplication by a scalar, which is not negligible). \begin{table}[H] \begin{center} \begin{tabular}{|l|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} \\ \hline function prologue & 28 \\ first doubling & $5\,675$ \\ second doubling & $9\,394$ \\ subsequent doublings ($k-2$ times) & $11\,392$ \\ conversion to affine coordinates & $15\,255$ \\ function exit & 16 \\ \hline \textsf{\textbf{Total}} & $7\,584 + 11\,392k$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:pointmul2k}Point multiplication by $2^k$ cost.} \end{table} \subsection{Point Multiplication By A Scalar} \paragraph{Generic Point Multiplication.} Generic point multiplication receives a point $Q$ and multiplies it by a scalar $s$. In our implementation, scalars are integers modulo $r$ (where $r$ is the curve prime order); scalars are \emph{decoded} from sequences of bytes using unsigned little-endian convention. Two scalar decoding methods are provided, one that ensures that the value is in the $0$ to $r-1$ range, the other reducing the source value modulo $r$. In either case, the scalar value for point multiplication is less than $r$. Operations on scalars are not critical for performance; therefore, we use a simple, generic and compact routine in C. For multiplications and modular reductions, Montgomery multiplication is used. The total compiled code footprint for all scalar operations is $1\,064$~bytes (when compiled with GCC 7.3.0 for an ARM Cortex-M0+ target with ``\verb+-Os+'' optimization flag). Like the rest of our code, the scalar implementation is fully constant-time. To compute $sQ$, we use a simple four-bit window. For a window of $w$ bits, the process is the following: \begin{enumerate} \item Let $k = \lceil (\log_2 r)/w \rceil$. We will use the binary representation of the scalar by chunks of $w$ bits, and there will be exactly $k$ chunks (the last chunk might be incomplete). \item Compute and store in a dedicated RAM space (the \emph{window}, usually on the stack) the points $iQ$ for $i = 1$ to $2^{w-1}$. This can use the generic point addition routine, calling it $2^{w-1}-1$ times. \item \label{proc:pointmul:addconst}Add $2^{w-1}\sum_{i=0}^{k-1} 2^{wi}$ to $s$ (modulo $r$). \item Start with $Q' = \neutral$. \item For $i = k-1$ down to $0$: \begin{enumerate} \item Extract the $i$-th $w$-bit chunk from the scalar: $j = \lfloor s/2^{wi} \rfloor \bmod 2^w$. \item Look up point $T = |j-2^{w-1}|Q$ from the window; if $j = 2^{w-1}$, the lookup returns $T = \neutral$. \item If $j < 2^{w-1}$, set $T \gets -T$ (i.e. negating the $y$ coordinate of $T$). \item Set $Q' \gets 2^w Q' + T$. The multiplication by $2^w$ uses the optimized repeated doublings procedure described in section~\ref{sec:curve9767:doublings}, and the addition with $T$ uses the generic point addition routine. When $i = k-1$ (i.e. on the first iteration), the doublings can be skipped, since it is statically known that $Q' = \neutral$ at this point. \end{enumerate} \item Return $Q'$. \end{enumerate} Note that for a window of $w$ bits, we only store $2^{w-1}$ points. We then use a lookup index skewed by $2^{w-1}$, and obtain the actual point to add with a conditional negation.
For instance, when using a 4-bit window ($w = 4$), we store points $Q$, $2Q$, $3Q$,\ldots, $8Q$; the window lookup index $|j-8|$ is between $0$ and $8$ (inclusive); and the final point $T$ will range from $-8Q$ to $+7Q$, instead of $0Q$ to $15Q$. The addition of the specific constant to the scalar (in step~\ref{proc:pointmul:addconst}) counterbalances this skew. Within the window, we only store the $x$ and $y$ coordinates of the points $iQ$. The ``neutral flag'' of the looked-up point $T$ is adjusted afterwards (it is set to $1$ if $Q = \neutral$ or if the window lookup index is $0$). The window size is a trade-off. With a larger window, fewer iterations are needed, thus reducing the number of window lookups and point additions; large windows also make repeated doublings slightly more efficient (since our repeated doublings procedure has an $11\,392$-cycle cost for each doubling \emph{plus} a fixed $7\,584$-cycle overhead). On the other hand, larger windows increase the lookup time (we use a constant-time lookup with a cost proportional to the number of stored points) and, more importantly, increase temporary RAM usage. Systems that use the ARM Cortex-M0+ usually have severe RAM constraints. Each point in the window uses 80~bytes (40~bytes per coordinate, including the two extra bytes for 32-bit alignment); a 4-bit window thus implies 640~bytes of (temporary) storage. Depending on the usage context, a larger window may or may not be tolerable. We might note that typical point multiplication routines on Edwards25519 store windows with points in projective, inverted or extended coordinates, using three or four field elements per point, hence at least 96 or 128 bytes. Since our Curve9767 points are in affine coordinates, they use less RAM, and may thus allow larger windows for a given RAM budget. Of course, the Montgomery ladder (on Curve25519) does not use a window and is even more compact in RAM. Our generic point multiplication routine has been measured to work in $4\,493\,999$~cycles. During development, we also wrote another version that kept the intermediate point $Q'$ in Jacobian coordinates; doublings were thus more efficient (we avoided the $7\,584$-cycle overhead for each multiplication by 16), though point additions (adding an affine point from the window to the current point in Jacobian coordinates) were slightly more expensive (about one thousand extra cycles per addition). This yielded a point multiplication routine in about 4.07 million cycles, i.e. about 9.4\% fewer cycles than our current implementation. We did not keep that variant for the following reasons: \begin{itemize} \item The addition routine in Jacobian coordinates required a non-negligible amount of extra code, mostly for all the ``linear'' operations. \item Handling of edge cases (when the current point $Q'$ is the point at infinity) required extra flags and more conditional copies. \item The method could not scale to combined multiplications, as described below (computing $s_1Q + s_2G$). When multiplying a single point $Q$ by a scalar $s$ which is such that $0 \leq s < r$, it can be shown that none of the point additions in the main loop is in fact a doubling (adding a point to itself). However, this is not true when doing a combined multiplication: intermediate values may lead to a hidden doubling, and the pure Jacobian point addition routine does not handle that edge case correctly. \end{itemize} We thus prefer sticking to affine coordinates, even though they lead to a slightly slower point multiplication routine.
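The recoding induced by the skewed window can be checked with a small self-contained program; the following toy example uses a 64-bit scalar and 16 chunks of 4 bits (the actual routine works on the 252-bit scalar modulo $r$, and accumulates curve points rather than integers):
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t s = 0x0123456789ABCDEFULL;        /* toy scalar */
    uint64_t skew = 0;
    for (int i = 0; i < 16; i++) {
        skew += (uint64_t)8 << (4 * i);        /* 8*(1 + 16 + ... + 16^15) */
    }
    uint64_t s_adj = s + skew;                 /* step 3 (no modular reduction here) */

    int64_t acc = 0;                           /* plays the role of Q' */
    for (int i = 15; i >= 0; i--) {            /* most significant chunk first */
        int64_t digit = (int64_t)((s_adj >> (4 * i)) & 15) - 8;  /* in [-8, +7] */
        acc = 16 * acc + digit;                /* mirrors Q' <- 16*Q' + T */
    }
    printf("s = %llx, reconstructed = %llx\n",
        (unsigned long long)s, (unsigned long long)acc);
    return 0;
}
\end{verbatim}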
Compared to the baseline (Curve25519), Curve9767 then provides a point multiplication routine which is about 1.29 times slower. Depending on context, this may or may not be tolerable. However, this slowdown factor is less than the ``1.5 to 2.9 factor'' from the analysis in~\cite{SchSpr2019}; in that sense, this result shows that the design strategy of Curve9767 is worth some attention. \paragraph{Combined Point Multiplications.} Some cryptographic protocols require computing $s_1Q_1 + s_2Q_2$ for two points $Q_1$ and $Q_2$. In particular, verification of ECDSA or Schnorr signatures uses that operation, with $Q_1$ being the public key, and $Q_2$ the conventional curve generator point. Instead of doing both point multiplications separately and adding the results together, we can mutualize the doublings, using ``Shamir's trick'' (originally described in the context of ElGamal signature verification and credited by ElGamal to a private communication from Shamir\cite{ElG1988}). Namely: \begin{itemize} \item Two windows are computed, for points $Q_1$ and $Q_2$. \item A single accumulator point $Q'$ is kept. \item At each loop iteration, two lookups are performed, using indices from each scalar, and resulting in points $T_1$ and $T_2$. The doubling-and-add computation is then $Q' \gets 2^w Q' + T_1 + T_2$. \end{itemize} It is also possible to compute a \emph{combined window} with all points $i_1 Q_1 + i_2 Q_2$ for $0\leq i_1 < 2^{w-1}$ and $0\leq i_2 < 2^w$, but for a given RAM budget, this is usually not worth the effort, the RAM being better spent on two individual windows with twice as many bits. This process naturally extends to more than two points. Each extra point requires its own window, but all doublings are mutualized. Our implementation of the combined point multiplication routine, with source point $Q_2$ being the conventional curve generator (this is the operation needed for Schnorr signature verification), computes $s_1 Q_1 + s_2 G$ in $5\,590\,769$~cycles. Since the curve generator is fixed, its window can be precomputed and stored in ROM (Flash), so that RAM usage is no more than with the generic point multiplication routine. \paragraph{Generator Multiplication.} Multiplying a point which is known in advance (normally the conventional curve generator $G$) is the operation used in key pair generation, and also for each signature generation. Several optimizations are possible: \begin{itemize} \item Since the point is known at compile-time, its window can be precomputed and stored in ROM/Flash. This saves the dynamic computation time. \item ROM size constraints are usually less strict in embedded systems than RAM constraints, because ROM is cheaper\footnote{As a rule of thumb, each SRAM bit needs 6 transistors, but a ROM bit only requires 1 transistor-equivalent space.}. This allows the use of larger windows. \item A process similar to combined point multiplications can be used: the multiplier $s$ can be split into several chunks. For instance, if $s$ is split into two halves $s_1$ and $s_2$, with $s = s_1 + 2^{128} s_2$ and both $s_1$ and $s_2$ less than $2^{128}$, then $sG = s_1 G + s_2 2^{128} G$, which can leverage the mutualization of doublings, provided that $2^{128} G$ is precomputed and stored (preferably along with its precomputed window). Since the sub-scalars $s_1$ and $s_2$ are half-width, the number of iterations is halved.
\end{itemize} In our implementation, to compute $sG$, we split $s$ into four 64-bit chunks, and we store precomputed 4-bit windows for $G$, $2^{64}G$, $2^{128}G$ and $2^{192}G$. Only 16 internal iterations are used, each involving a multiplication by 16, four lookups, and four point additions (for the first iteration, the accumulator point $Q'$ is the point at infinity, and we can avoid the multiplication by 16 and one of the point additions). In total we compute a multiplication of the generator $G$ by a scalar in $1\,877\,847$~cycles. We used a four-way scalar split and 4-bit windows for implementation convenience; however, both the number of scalar chunks and the size of the windows can be adjusted, for various trade-offs between implementation speed and ROM usage. In our case, the four precomputed windows add up to 2560~bytes of ROM. \subsection{Point Compression} The in-RAM format for a point uses 84~bytes (including the ``neutral flag'' and alignment padding). However, curve points can be encoded in a much more compact format, over only 32~bytes (specifically 255~bits; the top bit of the last byte is not used). \paragraph{Field Element Encoding.} For a field element $u = \sum_{i=0}^{18} u_i z^i$, there are 19 polynomial coefficients to encode. Each coefficient is an integer in the $0$ to $p-1 = 9766$ range. The in-RAM values use Montgomery representation and furthermore encode 0 as the integer $p$; however, we convert back the coefficients to non-Montgomery representation and into the $0$..$p-1$ range so that encoding formats do not force a specific implementation strategy. For a compact encoding, we encode the first 18 coefficients by groups of three. Each group $(u_{3i}, u_{3i+1}, u_{3i+2})$ uses exactly 40~bits (5 bytes): \begin{itemize} \item Each coefficient is split into an 11-bit low part $l_j = u_j \bmod 2^{11}$, and a high part $h_j = \lfloor u_j / 2^{11} \rfloor$. This is implemented with simple masks and shifts. Since $u_j < 9767$, the high part $h_j$ is less than 5. \item The low and high parts of the three coefficients are assembled into the value: \begin{equation*} v_i = l_{3i} + 2^{11} l_{3i+1} + 2^{22} l_{3i+2} + 2^{33} (h_{3i} + 5 h_{3i+1} + 25 h_{3i+2}) \end{equation*} Note that since $l_j < 2^{11}$ and $h_j < 5$, it is guaranteed that $v_i < 2^{40}$. \item The value $v_i$ is encoded over 5~bytes in unsigned little-endian convention. \end{itemize} The 18 coefficients $u_0$ to $u_{17}$ yield 6 groups of three, hence a total of 30 bytes. The last coefficient ($u_{18}$) is then encoded in unsigned little-endian convention over the last two bytes. Since $u_{18} < 9767$, it uses at most 14 bits, and the two most significant bits of the last byte are free. Decoding must recover the $l_j$ and $h_j$ elements from the received bytes. Obtaining the high parts ($h_j$) entails divisions by 5; for constant-time implementations, one can use the fact that $\lfloor x/5 \rfloor = \lfloor (103x)/2^9 \rfloor$ for all integers $x$ in at least the $0$..$127$ range\footnote{Division opcodes are not constant-time on many CPUs. Optimizing compilers can implement divisions by constants through multiplications and shifts, using the techniques from~\cite{GraMon1994}, but they may prefer to use division opcodes, especially when optimizing for code size instead of raw speed. Making the multiplications and shifts explicit avoids such issues.}. The decoding routine should detect and report invalid encodings, i.e. encodings that lead to coefficients not in the $0$..$9766$ range.
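The packing of a group of three coefficients, and the corresponding decoding with the constant-time division by 5, can be illustrated with the following self-contained C sketch (function names are ours, not those of the reference code):
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* Pack three coefficients (each in 0..9766) into 5 bytes. */
void encode3(uint8_t dst[5], const uint16_t c[3])
{
    uint32_t h = (uint32_t)(c[0] >> 11)
        + 5 * (uint32_t)(c[1] >> 11)
        + 25 * (uint32_t)(c[2] >> 11);
    uint64_t v = (uint64_t)(c[0] & 0x7FF)
        | ((uint64_t)(c[1] & 0x7FF) << 11)
        | ((uint64_t)(c[2] & 0x7FF) << 22)
        | ((uint64_t)h << 33);
    for (int i = 0; i < 5; i++) {
        dst[i] = (uint8_t)(v >> (8 * i));    /* little-endian */
    }
}

/* Unpack; returns 1 if all three coefficients are in 0..9766, 0 otherwise. */
int decode3(uint16_t c[3], const uint8_t src[5])
{
    uint64_t v = 0;
    for (int i = 0; i < 5; i++) {
        v |= (uint64_t)src[i] << (8 * i);
    }
    uint32_t h = (uint32_t)(v >> 33);        /* h0 + 5*h1 + 25*h2, < 128 */
    int ok = 1;
    for (int i = 0; i < 3; i++) {
        uint32_t q = (103 * h) >> 9;         /* floor(h/5), constant-time */
        uint32_t u = ((uint32_t)(v >> (11 * i)) & 0x7FF) | ((h - 5 * q) << 11);
        c[i] = (uint16_t)u;
        ok &= (u <= 9766);
        h = q;
    }
    return ok;
}

int main(void)
{
    uint16_t in[3] = { 9766, 0, 1234 }, out[3];
    uint8_t buf[5];
    encode3(buf, in);
    printf("ok=%d %u %u %u\n", decode3(out, buf), out[0], out[1], out[2]);
    return 0;
}
\end{verbatim}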
\paragraph{Point Encoding.} Since each point $(x,y)$ on the curve fulfills the curve equation $y^2 = x^3 + ax + b$, knowledge of $x$ is sufficient to recompute $y^2$, from which $y$ can be obtained with a square root extraction. The fast square root extraction algorithm described in section~\ref{sec:field-ops-sqrt} makes this process efficient. Since $y^2$ admits two square roots, an extra bit is needed to designate a specific $y$ value. We define the \emph{sign} of a field element $u$ as follows: \begin{itemize} \item If $u = 0$ then its sign is zero. \item Otherwise, let $i$ be the largest index such that $u_i \neq 0$. We define that the sign of $u$ is one if $u_i > p/2$, zero otherwise. (This uses the normalized $u_i$ value in the $0$..$p-1$ range, not in Montgomery representation). \end{itemize} It is easily seen that if $u\neq 0$, then $u$ and $-u$ have opposite signs (i.e. exactly one of $u$ and $-u$ has sign 1, the other having sign 0). The encoding of a Curve9767 point $(x,y)$ into 32 bytes then consists of the encoding of $x$, with the sign of $y$ inserted into the next-to-top bit of the last byte (i.e. the bit of numerical value 64 within the 32nd byte of the encoding); the top bit of the last byte (of numerical value 128 within that byte) is cleared. The decoding process then entails decoding the value of $x$ (masking out the two top bits of the last byte), computing $y^2$, extracting the square root $y$, and finally replacing $y$ with $-y$ if the sign bit of the recomputed $y$ does not match the next-to-top bit in the last byte of the encoding\footnote{Since Curve9767's order is odd, there is no point with coordinate $y = 0$; therefore, there exists no value $x$ such that $x^3+ax+b = 0$, avoiding the edge case of $y = 0$ but a requested sign bit of 1.}. The decoding function reports an error in the following cases: \begin{itemize} \item The top bit of the last byte is not zero. \item The first 254 bits do not encode a valid field element (at least one coefficient is out-of-range). \item A value $x$ is validly encoded, but $x^3+ax+b$ is not a quadratic residue, and there is thus no curve point with that value as $x$ coordinate. \end{itemize} There is no formally defined encoding for the point at infinity. However, if requested to encode $\neutral$, the encoding function produces an all-ones pattern (all bytes of value \verb+0xFF+, except the last byte which is set to \verb+0x7F+). This is not a valid encoding (it would yield out-of-range coefficients). Similarly, when the decoding function detects an invalid encoding, it reliably sets the destination point to the point-at-infinity in addition to reporting the error. In that sense, it is possible to use that invalid all-ones pattern as the encoding of the point at infinity. It is up to the calling application to decide whether neutral points should be allowed or not; most protocols don't tolerate neutral points. In our implementation, point encoding takes $1\,527$~cycles, while decoding uses $32\,228$~cycles (field multiplication, squaring, and square roots use our assembly routines, but the rest of the code is written in C). \subsection{Hash-To-Curve} The \emph{hash-to-curve} functionality maps arbitrary input bit sequences to curve points in a way which is indifferentiable from a random oracle. 
Some cryptographic protocols can tolerate weaker properties, but in general we want the resulting point to be such that, informally, all curve points could have been obtained with quasi-uniform probability, and no information is leaked about the discrete logarithm of the result relatively to a given base point. We moreover require \emph{constant-time} hashing, i.e. not leaking any information on the source value through timing-based side channels; this, in particular, prevents us from using rejection sampling methods in which pseudorandom $x$ values are generated from the input with a strong pseudorandom generator until one is found such that $x^3+ax+b$ is a quadratic residue\footnote{With our fast square root and quadratic residue tests, such a process would hash an arbitrary input at an \emph{average} cost under $60\,000$~cycles, but occasionally much higher.}. Since we work in a field $\GF(q)$ with $q = 2 \bmod 3$, we can use a process based on Icart's map\cite{Ica2009}, formally proven\cite{BriCorIcaMadRanTib2010} to be indifferentiable from a random oracle when the underlying hash function is itself modeled as a random oracle. It consists of three elements: \begin{itemize} \item $\text{\textsf{MapToField}}$: an input sequence of bytes is mapped to a field element $u$ by interpreting the sequence as an integer $U$ (using unsigned little-endian convention) then converting it to base $p$: $U = \sum_i U_i p^i$. The first (least significant) 19 digits $U_0$ to $U_{18}$ are then used as the polynomial coefficients of $u$. \item $\text{\textsf{IcartMap}}$: from a given field element $u$, a curve point is obtained. If $u = 0$ then the point at infinity $\neutral$ is produced; otherwise, the point $(x,y)$ is produced with the following formulas: \begin{eqnarray*} v &=& \frac{3a - u^4}{6u} \\ x &=& \left( v^2 - b - \frac{u^6}{27} \right)^{1/3} + \frac{u^2}{3} \\ y &=& ux + v \end{eqnarray*} \item $\text{\textsf{HashToCurve}}$: a message $m$ is used as input to an extendable-output function (XOF) such as SHAKE\cite{Fips202}; a 96-byte output is obtained, split into two 48-byte halves $d_1$ and $d_2$. We then define: \begin{equation*} \text{\textsf{HashToCurve}}(m) = \text{\textsf{IcartMap}}(\text{\textsf{MapToField}}(d_1)) + \text{\textsf{IcartMap}}(\text{\textsf{MapToField}}(d_2)) \end{equation*} \end{itemize} Using 48~bytes (i.e. 384~bits) for each half implies that $\text{\textsf{MapToField}}$'s output is quasi-uniform with bias lower than $2^{-132}$ (since the field cardinality is lower than $2^{252}$), i.e. appropriate for the ``128-bit'' security level that Curve9767 provides. The conversion to base 9767 is done with repeated divisions by 9767, themselves implemented with multiplications and shifts only, using the techniques described in~\cite{GraMon1994}. Our Curve9767 implementation comes with a perfunctory SHAKE implementation; our hash-to-curve function takes as input the SHAKE context, pre-loaded with the input message $m$ and ready to produce bytes. It is up to the caller to organize the injection of $m$ into SHAKE, preferably with a domain separation header to avoid unwanted interactions with other protocols and operations that use SHAKE on the same input $m$. The hash-to-curve operation is computed in $195\,211$~cycles. Out of these, each $\text{\textsf{MapToField}}$ uses $20\,082$~cycles; this function was written in C and compiled with code size optimizations (``\verb+-Os+'') and could probably be made to run faster with handmade assembly optimizations.
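As an illustration of the structure of $\text{\textsf{MapToField}}$ described above, the following sketch extracts the 19 base-9767 digits of a 48-byte little-endian integer. It is illustrative only (the function name and layout are not those of the reference code) and it is not constant-time: the reference implementation replaces the plain divisions below with multiply-and-shift sequences following~\cite{GraMon1994}, so that no variable-time division opcode is used.
\begin{verbatim}
#include <stdint.h>

/*
 * Interpret d[0..47] as an unsigned little-endian integer and write
 * its 19 least significant base-9767 digits into u[0..18].
 * Illustrative sketch: uses plain '/' and '%', hence NOT constant-time.
 */
static void map_to_field(uint32_t u[19], const uint8_t d[48])
{
    uint32_t w[12];

    /* Load the 48 bytes as twelve 32-bit little-endian words. */
    for (int i = 0; i < 12; i ++) {
        w[i] = (uint32_t)d[4 * i]
            | ((uint32_t)d[4 * i + 1] << 8)
            | ((uint32_t)d[4 * i + 2] << 16)
            | ((uint32_t)d[4 * i + 3] << 24);
    }
    for (int j = 0; j < 19; j ++) {
        /* Divide the multi-word integer by 9767; the remainder is digit j. */
        uint64_t rem = 0;
        for (int i = 11; i >= 0; i --) {
            uint64_t cur = (rem << 32) | w[i];
            w[i] = (uint32_t)(cur / 9767);
            rem = cur % 9767;
        }
        u[j] = (uint32_t)rem;
    }
}
\end{verbatim}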
The SHAKE invocation itself, with our C implementation also compiled with code size optimizations, amounts to about $34\,000$ cycles. Icart's map is evaluated in $50\,976$~cycles. Maps other than Icart's could have been used. In particular, the Shallue-Woestijne-Ulas map\cite{ShaWoe2006,Ula2007}, as simplified in~\cite{BriCorIcaMadRanTib2010} for curves defined over fields $\GF(q)$ with $q = 3 \bmod 4$ (which is the case of Curve9767), can be implemented with a few operations, mostly one inversion, one quadratic residue test, one square root extraction, and a few multiplications and squarings. In our case, it is slightly more expensive than Icart's map. More discussion on the practical implementation of hash-to-curve procedures can be found in~\cite{DraftHashToCurve05}. \subsection{Higher-Level Protocols}\label{sec:curve9767:highlevel} In order to have benchmarks for Curve9767 when applied in realistic protocols, we defined and implemented Diffie-Hellman key exchange and Schnorr signatures. When an XOF is required, we use SHAKE256\footnote{Nominally, we only target the 128-bit security level, and SHAKE128 would be sufficient. However, using SHAKE256 makes no difference in performance in our case, and the ``256'' figure has greater marketing power.}. All uses of SHAKE include ``domain separation strings'', i.e. conventional headers for the XOF input that avoid the same output occurring in different contexts. All our domain separation strings start with ``\verb+curve9767-+'' and end with a colon character ``\verb+:+''. When a string is specified below, e.g. as ``\verb+curve9767-keygen:+'', the ASCII encoding of the string as a sequence of bytes, without the double-quote characters and without any terminating NUL byte, is used. \paragraph{Key Pair Generation.} From a given random seed (presumably obtained from a cryptographically strong RNG, with entropy at least 128~bits), we generate a private key $s$ (an integer modulo the curve order $r$), an additional secret $t$ used for signature generation (32~bytes), and the public key $Q = sG$ with $G$ being the curve conventional generator. The process is the following: \begin{itemize} \item The concatenation of the domain separation string ``\verb+curve9767-keygen:+'' and the seed is injected into a new SHAKE256 instance. \item SHAKE256 is used to produce 96~bytes of output. The first 64~bytes are interpreted as an integer with unsigned little-endian convention; that integer is reduced modulo the curve order $r$, yielding the secret scalar $s$. The remaining 32~bytes from the SHAKE256 output are the additional secret $t$. It may theoretically happen that we obtain $s = 0$; in that case, we set $s = 1$. This is only a theoretical concern, since there is no known seed value that results in such an outcome, and while it makes the value $1$ conceptually twice as probable as any other, the bias is negligible. \item The public key $Q = sG$ is computed. \end{itemize} The same process is used for Diffie-Hellman key pairs and signature key pairs. In the former case, the $t$ value may be skipped, since it is used only for signatures (but SHAKE256 produces output by chunks of 136~bytes, so there is no saving in performance obtained by not generating $t$). Note that in all generality, key exchange key pairs and signature key pairs should be separate; they have different lifecycles, and it is never recommended to use the same private key in two different cryptographic algorithms.
Nothing prevents us, though, from using the same \emph{process} (hence the same implementation code) for generating both kinds of key pairs, provided that they work on different seeds. The cost of key pair generation is almost entirely that of the computation of the public key $Q = sG$, at least on an ARM Cortex-M0+, where SHAKE is inexpensive compared to curve point multiplications. The public key computation uses the ``multiplication of the generator by a scalar'' routine. \paragraph{Diffie-Hellman Key Exchange.} Each party in a Diffie-Hellman key exchange executes the following steps: \begin{itemize} \item A new key pair $(s, Q)$ is generated, if using ephemeral Diffie-Hellman. For static Diffie-Hellman, the key pair is recovered from storage, and used for multiple Diffie-Hellman instances. \item The public key $Q$ is encoded and sent to the peer. \item A public key (32 bytes) is received from the peer, and decoded as a point $Q'$. \item The point $sQ'$ is computed, then its $x$ coordinate is encoded as a sequence of 32 bytes; this is the \emph{pre-master secret}. We use only the $x$ coordinate, without the sign bit from the $y$ coordinate, in order to follow traditional Diffie-Hellman on elliptic curves\cite{X963}, and to make the process compatible with $x$-only ladder implementations of point multiplication. \item The concatenation of the domain separation string ``\verb+curve9767-ecdh:+'' and the pre-master secret is input into a new SHAKE256 instance, whose output is the shared secret between the two peers participating in the exchange. Since SHAKE256 is an XOF, the two parties can obtain unbounded amounts of shared key material, e.g. to power both symmetric encryption and MAC for two unidirectional data tunnels. \end{itemize} The decoding of the received point $Q'$ may fail. In usual contexts, it is acceptable to simply abort the protocol in such a case. In order to support unusual usage contexts in which the key exchange is part of a larger protocol where points are not observable and attackers should not be able to tell which key exchanges succeed or fail, an alternate pre-master secret is used when $Q'$ fails to decode properly. The alternate pre-master secret is the 32-byte SHAKE256 output computed over an input consisting of the concatenation of the domain separation string ``\verb+curve9767-ecdh-failed:+'', the encoding (over 32 bytes, unsigned little-endian convention) of the secret scalar $s$, and the 32 bytes received as the purported encoded $Q'$ point. Our implementation always computes both the normal pre-master secret and the alternate one, and selects the latter in case the decoding failed (and the point multiplication was performed over invalid data). This process ensures that, from the outside, the ECDH process always results in some unpredictable key that is still deterministically obtained for a given 32-byte sequence purportedly encoding $Q'$. Almost all of the computation time in Diffie-Hellman is spent in the two point multiplications, for computing $Q = sG$ (as part of key pair generation) and $sQ'$, the latter requiring the generic point multiplication routine. \paragraph{Schnorr Signatures.} We define Schnorr signatures using a process similar to EdDSA\cite{BerDuiLanSchYan2012,EdDSArfc8032}. The message to sign or to verify is provided as a hash value $h$, obtained from some collision-resistant hash function\footnote{This ``hash function'' may be the identity function, as for the ``Pure EdDSA'' mode.
This avoids relying on the collision resistance of a hash function; however, such hash-less processing requires verifiers to already know the public key and the signature value when they begin processing the data, which prevents streamed processing and is a problem for some tasks on memory-constrained devices, e.g. X.509 certificate chain validation as part of TLS. We therefore recommend always using a proper hash function first, e.g. SHA3.}. Whenever $h$ is used, we actually use the concatenation of an identifier for the hash function, and the value $h$ itself. The identifier is the ASCII encoding of the dotted-decimal representation of the standard OID for the hash function, followed by a colon character. For instance, if using SHA3-256, the identifier string is: ``\verb+2.16.840.1.101.3.4.2.8:+'' To generate a signature, using the public/private key pair $(s,t,Q)$: \begin{enumerate} \item Concatenate the domain separation string ``\verb+curve9767-sign-k:+'', the additional secret $t$, the hash function identifier string, and the hash value $h$. This is the input for a new SHAKE256 instance. 64~bytes of output are obtained from SHAKE256, and interpreted as an integer (unsigned little-endian encoding) which is then reduced modulo $r$ (the curve order), yielding the scalar $k$. For completeness, if $k = 0$, it is replaced with $1$ (this happens only with negligible probability). \item Compute the curve point $C = kG$ and encode it as value $c$ (32~bytes). This is the full point encoding, including the sign bit for the $y$ coordinate. \item \label{proc:sign:e}Concatenate the domain separation string ``\verb+curve9767-sign-e:+'', the value $c$, the encoding of the public key $Q$, the hash function identifier and the hash value $h$. This is the input for a new SHAKE256 instance. Generate 64~bytes of output, interpret them as an integer (unsigned little-endian encoding), and reduce that integer modulo the curve order $r$. This yields the scalar $e$. \item Compute $d = k + es \bmod r$. \item The signature is the concatenation of $c$ (32~bytes) and $d$ (encoded over exactly 32~bytes with unsigned little-endian convention). \end{enumerate} This signature generation process is deterministic: for the same input (hashed) message $h$ and private key, the same signature is obtained. It is not strictly required that this process is used to generate $k$; any mechanism that selects $k$ uniformly and unpredictably in the $1$..$r-1$ range can be used. However, the deterministic process described above has the advantage of not requiring a strong random source, and its determinism makes it testable against known-answer vectors. Conversely, determinism may increase vulnerability to some classes of physical attacks, especially fault attacks. See section~\ref{sec:impl:sidechannels} for more details. To verify a signature, the following process is used: \begin{enumerate} \item Split the signature (64~bytes) into its two halves $c$ and $d$ (32~bytes each). \item Decode $d$ as an integer (unsigned little-endian convention). If $d \geq r$, the signature is invalid. \item Recompute the challenge value $e$ as in step~\ref{proc:sign:e} of the signature generation process. \item Compute the point $C = dG - eQ$, using the alleged signer's public key $Q$. \item Encode point $C$. The signature is valid if that encoding matches $c$ (the first half of the signature), invalid otherwise.
\end{enumerate} The signature generation cost consists almost entirely of the computation of $kG$ (multiplication of the curve generator by a scalar). The signature verification cost is dominated by $dG - eQ$, which is a combined point multiplication process. \section{Implementation Issues and Benchmarks}\label{sec:impl} \subsection{Side Channel Attacks}\label{sec:impl:sidechannels} \paragraph{Constant-Time Code.} Among side channel attacks, a well-known category consists of timing attacks or, more generally, side channel attacks that exploit time-based measurements (not necessarily of the execution time of the target system itself). These attacks include all sorts of cache attacks, which try to obtain information on secret values based on the memory access pattern of the attacked system and its effect on various cache memories. Timing-based side channels are ``special'' because they can often be exploited remotely: either the timing differences can be measured over a high-speed network, or the attacker has control of a generic system close to the target (e.g. another VM co-hosted on the same hardware) and can use the ability of such systems to measure very short amounts of time. All other side channel attacks require some special measuring hardware in the physical vicinity of the target system, and can often be ruled out based on usage context. \emph{Constant-time coding} is a relatively confusing terminology that designates code which does not necessarily execute in a fixed amount of time, but such that any timing-related measurements yield no information whatsoever on secret values. Constant-time code makes no memory access at secret-dependent addresses, performs no conditional jump based on secret boolean values, and avoids use of any hardware opcode with a varying execution time (a category which includes some multiplication opcodes on some platforms\cite{BearSSLctmul}). It should be noted that the two Curve25519 implementations we use as baseline are \emph{not} truly constant-time: when performing the conditional swap in each iteration of the Montgomery ladder, they only exchange the pointers to the relevant field elements, not the values. Subsequent memory accesses then happen at addresses that depend on the conditional boolean, which is a private key bit. It is asserted in~\cite{HaaLab2019} that: \begin{quote} \emph{Note that for internal memories of Cortex M4 and M0 access timing is deterministic.} \end{quote} This is not true in all generality. The ARM Cortex-M0 and M4 cores do not include any cache by themselves and issue read and write requests with timings that do not depend on the target address. However, the \emph{system} in which these cores are integrated may induce timing differences. These cores are not full CPUs in their own right; they are hardware designs that a CPU designer uses in a larger chip, along with extra pieces such as a memory controller. \emph{In general}, RAM is provided by an SRAM block that offers deterministic and uniform access timing, but this is not always the case. Memory controller designs with cache capabilities are commercially available\cite{CastCacheCtrl}. Other potential sources of address-dependent timing differences include automatic arbitration of concurrent memory accesses (when other cores, or peripherals, access RAM concurrently with the CPU) or refresh cycles for DRAM.
Accesses to ROM/Flash may also involve caches and other wait states (for instance, STM32F407 microcontrollers, which embed an ARM Cortex-M4, implement both data and instruction caches for all accesses to Flash). Therefore, while on a specific microcontroller a not-truly-constant-time implementation may get away with making memory accesses at secret-dependent addresses, this is a relatively fragile assumption, and a generic software implementation should use true constant-time code by default. Our implementation is truly constant-time. In particular, all lookups in the windows for point multiplication use a constant-time implementation that reads all values from the window, and combines them with bitwise operations to extract the right one. For a 4-bit window (containing eight pre-computed points), the lookup process executes in 777 cycles. Since the generic point multiplication entails 63 such lookups, true constant-time discipline implies an overhead of $48\,951$~cycles, which is not large in practice (about 1.1\% of the total time) but should conceptually be taken into account when comparing Curve9767 with the baseline Curve25519 implementations that are not truly constant-time in that sense (i.e. a fair comparison would first deduct these $48\,951$ cycles from our code's performance, or add a similar amount to the baseline implementation performance). We also applied constant-time discipline more generally; all our functions are constant-time, including code paths that would usually be considered safe to leave variable-time. For instance, public keys or signature values are normally exchanged publicly; we still decode them in constant-time and do not even leak (through timings) whether the decoding was successful or not. While this maniacal insistence on full constant-time is useless in most contexts, we feel that it may matter in some unusual cases and thus should be the default for any general-purpose implementation. Moreover, the runtime overhead is usually negligible or very small; following constant-time discipline mostly implies forfeiting conditional jumps (``\verb+if+'' clauses) and propagating an error status through the call tree. \paragraph{Power Analysis.} Side channel attacks can rarely be addressed in all generality, since they rely on specific hardware properties and usage context. We can have a generic ``constant-time'' implementation because most CPUs have similar timing-related properties (namely, caches whose behaviour depends on the accessed address, not on the stored value), and also because timing attacks that can be enacted remotely use measurements that are amenable to an abstract description, owing to the fact that the measuring apparatus is itself a generic computer with merely a cycle counter. This simple context does not extend to other side channel attacks, e.g. power analysis attacks. Consequently, it is not usually feasible to make a software implementation that can be said to be immune to side channel attacks \emph{in abstracto}. However, it is known that some ``generic'' mitigations can help in a nonnegligible proportion of particular situations. In the case of elliptic curve implementations, projective coordinates can be \emph{randomized}: given a point $(X{:}Y{:}Z)$, one can always generate a random non-zero field element $\mu$ and multiply it with all three coordinates, since $(\mu X{:}\mu Y{:}\mu Z)$ represents the exact same curve point\footnote{Other coordinate systems, e.g. Jacobian coordinates, can also be randomized in a similar way.}. If randomization is applied regularly throughout a curve operation (e.g.
after each doubling in a double-and-add point multiplication algorithm), then the extra randomness is expected to somehow blur the information leaking through side channels and make analysis more expensive, especially in terms of the number of required traces. The effectiveness of this countermeasure varies widely depending on context, but in most cases it helps defenders. Using affine coordinates prevents applying that kind of randomization. In order to use randomization, one has to use redundant coordinate systems. On short Weierstraß curves, Jacobian coordinates provide in general the best performance for point multiplication, but not for combined multiplications or other operations, since the usual Jacobian formulas are incomplete and do not handle edge cases properly. If a Curve9767 implementation must be made resistant to side channel attacks such as power analysis, in the sense explained above, then we recommend using projective coordinates with the complete formulas from~\cite{RenCosBat2015}. With these formulas, on a short Weierstraß curve with $a = -3$, doublings cost 8M+3S, along with a number of ``linear'' operations (including some multiplications by $b$, which are fast on Curve9767 since $b$ has low Hamming weight). The number of such extra operations is known to be relatively high (21 additions and 2 multiplications by $b$) and we estimate that they collectively add an overhead of 20\%; this would put the cost of a point doubling at close to $19\,000$ cycles on an ARM Cortex-M0+. Similarly, for generic point additions (12M and 31 ``linear'' operations), we estimate the cost at around $23\,000$ cycles. Each randomization is an extra $5\,000$~cycles, assuming a very fast random generator\footnote{The random $\mu$ can be slightly biased, allowing generation of the 19 coefficients by generating random 31-bit values and applying Montgomery reduction on each of them; if the 31-bit values are obtained from dedicated hardware or a very fast process, the reductions themselves will not cost more than a basic field addition.}. In total, assuming a 4-bit window, extra randomization for each doubling, and a $1\,100$-cycle window lookup operation\footnote{Points in projective coordinates are larger than in affine coordinates, hence constant-time lookup is more expensive.}, we can estimate a total side-channel-resistant point multiplication cost of about 7.5~million cycles. This is only an estimate; we did not implement it. \paragraph{Fault Attacks.} Fault attacks are a kind of side channel attack in which the attacker forces the computation to derail in some way, through a well-targeted physical intervention, such as a short-time voltage glitch (abnormally high or low voltage for a small amount of time) or a chip alteration (cutting or bridging specific chip wires with lasers under microscopic inspection). Deterministic algorithms are known to be more vulnerable to fault attacks, since they allow attackers to repeat experiments with the same intermediate values in all computation steps; this has been applied in particular to signature algorithms\cite{AmbBosFayJoyLocMur2017,PodSomSchLocRos2018}. The Schnorr signature scheme which we described in section~\ref{sec:curve9767:highlevel} is deterministic: for a given signature key $(s,t,Q)$ and hashed message $h$, the per-signature secret scalar $k$ is generated with a deterministic pseudorandom process.
Having a fully specified deterministic process has quality assurance benefits: the signature scheme can be tested against known-answer test vectors\cite{DeterministicECDSArfc}. However, randomization can be applied nonetheless: the signature verification process does not (and cannot) rely on such deterministic generation. In order to both retain immunity to random generators of poor quality\footnote{Notably, fault attacks can also impact hardware RNGs and force them to produce predictable output.} and still randomize the data to make fault attacks harder, the generation of $k$ can be amended by appending a newly-generated random value to the concatenation of the domain separation string, the additional secret $t$, the hash function identifier string and the hash value $h$; this extra input to SHAKE256 makes the process non-repeatable, thus increasing the difficulty of fault exploitation by attackers. \subsection{Benchmarks} As described in section~\ref{sec:field-ops-platform}, all measurements were performed on a SAM D20 Xplained Pro board. The microcontroller is configured to use the internal 8~MHz oscillator, with no wait state for reading Flash memory. The internal oscillator is also configured to power a 32-bit counter. No interrupts are used; the counter value is read directly\footnote{This gives about 9 minutes after boot to make measurements, before the counter overflows.}. The benchmark code runs a target function in a loop; the loop is invoked three times, with 1, 10 and 100 iterations, and the cycle count is measured for each invocation. The same loop is used for all functions, to avoid variability (7 pointer-sized arguments are passed to the target function; as part of the ABI, the callee can ignore extra arguments, since the caller is responsible for removing them afterwards). The loop overhead depends on the C compiler version and compilation options; in our tests (GCC 6.3.1, optimization flags ``\verb+-Os+''), it appears that the loop has a fixed 38-cycle overhead, and an additional 29~cycles per iteration. We could thus obtain the exact cycle counts for each function call. Since the board is used without any interrupts, measurements are perfectly reproducible. Table~\ref{tab:benchmarks} lists all measured execution times; they are reported both as raw cycle counts, and as a cost relative to the cost of a multiplication. For the low-level field operations, the implementation uses an internal ABI that does not save registers; the measurement was made through a wrapper that adds 31~cycles of overhead per call (these 31 cycles were subtracted to obtain the values in the table). For all operations implemented in assembly, the measured cycle counts match manual counting exactly. \begin{table}[H] \begin{center} \begin{tabular}{|l|r|r|} \hline \textsf{\textbf{Operation}} & \textsf{\textbf{Cost (cycles)}} & \textsf{\textbf{Cost (rel.
to mul)}} \\ \hline Field: multiplication $(*)$ & $1\,574$ & $1.00$M \\ Field: squaring $(*)$ & $994$ & $0.63$M \\ Field: inversion $(*)$ & $9\,508$ & $6.04$M \\ Field: square root extraction $(*)$ & $26\,962$ & $17.13$M \\ Field: test quadratic residue status $(*)$ & $9\,341$ & $5.93$M \\ Field: cube root extraction $(*)$ & $31\,163$ & $19.80$M \\ \hline Generic curve point addition & $16\,331$ & $10.38$M \\ Curve point $\times 2$ (doubling) & $16\,337$ & $10.38$M \\ Curve point $\times 4$ & $30\,368$ & $19.29$M \\ Curve point $\times 8$ & $41\,760$ & $26.53$M \\ Curve point $\times 16$ & $53\,152$ & $33.77$M \\ Constant-time lookup in 8-point window & $777$ & $0.49$M \\ Curve point decoding (point decompression) & $32\,228$ & $20.48$M \\ Curve point encoding (compression) & $1\,527$ & $0.97$M \\ \hline Generic point multiplication by a scalar & $4\,493\,999$ & $2\,855.15$M \\ Generator multiplication by a scalar & $1\,877\,847$ & $1\,193.04$M \\ Two combined point multiplications & $5\,590\,769$ & $3\,551.95$M \\ \hline $\text{\textsf{MapToField}}$ & $20\,082$ & $12.76$M \\ Icart's map & $50\,976$ & $32.39$M \\ Hash 48 bytes to a curve point & $195\,211$ & $124.02$M \\ \hline ECDH: key pair generation & $1\,937\,792$ & $1\,231.13$M \\ ECDH: compute shared secret from peer data & $4\,598\,756$ & $2\,921.70$M \\ Schnorr signature: generate & $2\,054\,110$ & $1\,305.03$M \\ Schnorr signature: verify & $5\,688\,642$ & $3\,614.13$M \\ \hline \end{tabular} \end{center} \caption{\label{tab:benchmarks}All benchmarks. Operations tagged with $(*)$ use the internal non-standard ABI that does not preserve registers.} \end{table} \subsection{Other Architectures} While we concentrated on improving performance on the ARM Cortex-M0+, Curve9767 is not necessarily slow on other architectures. Use of a small 14-bit modulus $p$ does not exercise the ability of bigger CPUs to compute multiplications on larger operands. However, many modern CPUs have SIMD units that can compute several operations on small operands in parallel; such units should prove effective at implementing operations on $\GF(9767^{19})$ elements. \paragraph{ARM Cortex-M4.} The ARM Cortex-M4 implements the ARMv7-M architecture. It is backward compatible with the ARMv6-M architecture; thus, our implementation should run just fine, with very similar timings, on the M4. However, that CPU offers many extra instructions, including some from the ``DSP extension'' that provide some SIMD abilities. Most interesting are the \verb+smlad+ and \verb+smladx+ opcodes, which can perform two $16\times16$ multiplications and add both 32-bit results to a given accumulator register, all in a single cycle; on the M0+, the equivalent operations take 6~cycles (4~cycles for the two multiplications and two additions, and 2~cycles for copies to avoid consuming the multiplication inputs). Moreover, the ARMv7-M instruction set allows full access to all registers, as well as many non-consuming operations, and various literal operands. We expect considerable speed-ups on the M4, compared with the M0+, when optimized assembly leveraging the M4 abilities is written. \paragraph{x86 with SSE2 and AVX2.} The x86 instruction set now includes extensive SIMD instructions. The SSE2 instructions operate on 128-bit registers. The \verb+pmullw+ and \verb+pmulhuw+ opcodes compute eight $16\times 16$ unsigned multiplications in parallel, returning the low or high 16-bit halves, respectively.
On an Intel Skylake core, each instruction has a latency of 5 cycles, but a reciprocal throughput of 0.5~cycles per instruction, meaning that eight full $16\times 16\rightarrow 32$ multiplications can be performed at each cycle. Since polynomial multiplications do not have any carry propagation, considerable internal parallelism can be leveraged. AVX2 opcodes further improve that situation, by offering 256-bit registers and basically doubling all operations: the \verb+vpmullw+ and \verb+vpmulhuw+ opcodes have the same timing characteristics as their SSE2 counterparts, but compute sixteen $16\times 16$ unsigned multiplications in parallel. Conversely, the inversion in $\GF(p)$ to compute $x^{-r}$ from $x^r$ is not parallel, and we expect that its relative cost within the inversion routine will grow. On the ARM Cortex-M0+, its cost is mostly negligible (110~cycles out of a total of $9\,508$), but this might not be true in an optimized inversion routine that leverages SSE2 or AVX2 for multiplications and Frobenius operators. In that case, it is possible that, for such architectures, more classical projective coordinate systems become more attractive than affine coordinates for point multiplications. Inversion, square roots and cube roots would still be fast enough to provide benefits, when compared with prime-order fields, for conversion to affine coordinates, point compression, and hash-to-curve operations. \section{Conclusion And Future Work} In this article, we presented Curve9767, a new elliptic curve defined over a finite field extension $\GF(p^n)$, where both the modulus $p$ and the extension degree $n$ were specially chosen to promote performance on small architectures such as the ARM Cortex-M0+. Our novel results include in particular the following: \begin{itemize} \item an optimization of Montgomery reduction for a small modulus; \item choosing a modulus $p$ such that these fast reductions can be used but also mutualized as part of a multiplication of polynomials; \item using a finite field extension $\GF(p^n)$ to leverage fast Itoh-Tsujii inversion for efficient constant-time curve computations in affine coordinates; \item fast square root and cube root extraction in $\GF(p^n)$. \end{itemize} In total, generic curve point multiplication is about 1.29 times slower with Curve9767 than with the optimized Curve25519 Montgomery ladder, on the ARM Cortex-M0+. On the other hand, our curve offers very fast routines for a number of other operations (e.g. point compression, or hash-to-curve); maybe more importantly, it has prime order, which simplifies analysis for use in larger protocols. The relatively small difference in performance shows that affine coordinates and fast inversion can be a viable implementation strategy for an elliptic curve, offering an alternate path to the projective coordinate systems that have been prevalent in elliptic curve implementation research over the last two decades. Future work on Curve9767 will include the following: \begin{itemize} \item Making optimized implementations for other architectures, notably the ARM Cortex-M4, and x86 systems with SSE2/AVX2. Whether SIMD opcodes will allow competitive performance on ``big CPUs'' is as yet an open question. \item Exploring formal validation of the correctness of the implementations. Computations in $\GF(p^n)$ have some informal advantages in that respect: since they don't have carries to propagate, they don't suffer from rare carry propagation bugs.
Moreover, a small modulus $p$ allows for exhaustive tests: for instance, the correctness of our fast Montgomery reduction routine modulo $p$ has been exhaustively tested for all inputs $x$ such that $1\leq x\leq 3\,654\,952\,486$. \item Exploring other field choices, in particular smaller moduli $p$ for use in 8-bit systems that can only do $8\times 8\rightarrow 16$ multiplications. This might be combined with other field extensions such as $\GF(p)[z]/(z^n-z-c)$ for some constant $c$. Such an extension polynomial would increase the cost of Frobenius operators, but also expand the set of usable values for $p$. \end{itemize} \begin{thebibliography}{20} \bibitem{X962} Accredited Standard Committee X9, Inc., \emph{ANSI X9.62: Public Key Cryptography for the Financial Services Industry: the Elliptic Curve Digital Signature Algorithm (ECDSA)}, 2005. \bibitem{X963} Accredited Standard Committee X9, Inc., \emph{ANSI X9.63: Public Key Cryptography for the Financial Services Industry: Key Agreement and Key Transport Using Elliptic Curve Cryptography}, 2001. \bibitem{AmbBosFayJoyLocMur2017} C.~Ambrose, J.~Bos, B.~Fay, M.~Joye, M.~Lochter and B.~Murray, \emph{Differential Attacks on Deterministic Signatures},\\ \url{https://eprint.iacr.org/2017/975} \bibitem{RistrettoWeb} T.~Arcieri, H.~de Valence and I.~Lovecruft, \emph{The Ristretto Group},\\ \url{https://ristretto.group/} \bibitem{AriMatNagShi2004} S.~Arita, K.~Matsuo, K.~Nagao and M.~Shimura, \emph{A Weil Descent Attack against Elliptic Curve Cryptosystems over Quartic Extension Fields}, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol.~E89-A, issue~5, 2006. \bibitem{BaiPaa1998} D.~Bailey and C.~Paar, \emph{Optimal extension fields for fast arithmetic in public key algorithms}, Advances in Cryptology - CRYPTO 1998, Lecture Notes in Computer Science, vol.~1462, 1998. \bibitem{BalKob1998} R.~Balasubramanian and N.~Koblitz, \emph{The Improbability That an Elliptic Curve Has Subexponential Discrete Log Problem under the Menezes-Okamoto-Vanstone Algorithm}, Journal of Cryptology, vol.~11, pp.~141-145, 1998. \bibitem{Ber2006} D.~Bernstein, \emph{Curve25519: new Diffie-Hellman speed records}, PKC~2006, Lecture Notes in Computer Science, vol.~3958, pp.~207-228, 2006. \bibitem{BerDuiLanSchYan2012} D.~Bernstein, N.~Duif, T.~Lange, P.~Schwabe and B.-Y.~Yang, \emph{High-speed high-security signatures}, Journal of Cryptographic Engineering, vol.~2, issue~2, pp.~77-89, 2012. \bibitem{EFD} D.~Bernstein and T.~Lange, \emph{Explicit-Formulas Database},\\ \url{https://hyperelliptic.org/EFD/} \bibitem{SafeCurves} D.~Bernstein and T.~Lange, \emph{SafeCurves: choosing safe curves for elliptic-curve cryptography},\\ \url{https://safecurves.cr.yp.to/} \bibitem{BieMeyMul2000} I.~Biehl, B.~Meyer and V.~Müller, \emph{Differential Fault Attacks on Elliptic Curve Cryptosystems}, Advances in Cryptology - CRYPTO 2000, Lecture Notes in Computer Science, vol.~1880, pp.~131-146, 2000. \bibitem{ECTLSrfc4492} S.~Blake-Wilson, N.~Bolyard, V.~Gupta, C.~Hawk and B.~Moeller, \emph{Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS)},\\ \url{https://tools.ietf.org/html/rfc4492} \bibitem{BreKun1983} R.~Brent and H.~Kung, \emph{Systolic VLSI arrays for linear-time GCD computation}, VLSI 1983, pp.~145-154, 1983.
\bibitem{BriCorIcaMadRanTib2010} E.~Brier, J.-S.~Coron, T.~Icart, D.~Madore, H.~Randriam and M.~Tibouchi, \emph{Efficient Indifferentiable Hashing into Ordinary Elliptic Curves}, Advances in Cryptology - CRYPTO 2010, Lecture Notes in Computer Science, vol.~6223, pp.~237-254, 2010. \bibitem{Buildroot} Buildroot, \emph{Making Embedded Linux Easy},\\ \url{https://buildroot.org/} \bibitem{CastCacheCtrl} CAST, Inc., \emph{CACHE-CTRL: AHB Cache Controller Core},\\ \url{http://www.cast-inc.com/ip-cores/peripherals/amba/cache-ctrl/} \bibitem{SEC2} Certicom Research, \emph{SEC 2: Recommended Elliptic Curve Domain Parameters},\\ \url{http://www.secg.org/sec2-v2.pdf} \bibitem{CraLon2015} C.~Costello and P.~Longa, \emph{Four$\mathbb{Q}$: Four-Dimensional Decompositions on a $\mathbb{Q}$-curve over the Mersenne Prime}, Advances in Cryptology - ASIACRYPT 2015, Lecture Notes in Computer Science, vol.~9452, pp.~214-235, 2015. \bibitem{CreJac2019} C.~Cremers and D.~Jackson, \emph{Prime, Order Please! Revisiting Small Subgroup and Invalid Curve Attacks on Protocols using Diffie-Hellman}, IEEE 32nd Computer Security Foundations Symposium (CSF), 2019. \bibitem{Die2003} C.~Diem, \emph{The GHS attack in odd characteristic}, Journal of the Ramanujan Mathematical Society, vol.~18, issue~1, pp.~1-32, 2003. \bibitem{DulHaaHinHutPaaSanSch2015} M.~Düll, B.~Haase, G.~Hinterwälder, M.~Hutter, C.~Paar, A.~Sánchez and P.~Schwabe, \emph{High-speed Curve25519 on 8-bit, 16-bit, and 32-bit microcontrollers}, Designs, Codes and Cryptography, vol.~77, issue~2-3, pp.~493-514, 2015. \bibitem{ElG1988} T.~ElGamal, \emph{A public key cryptosystem and a signature scheme based on discrete logarithms}, IEEE Transactions on Information Theory, vol.~31, pp.~469-472, 1985. \bibitem{DraftHashToCurve05} A.~Faz-Hernandez, S.~Scott, N.~Sullivan, R.~Wahby and C.~Wood, \emph{Hashing to Elliptic Curves}, Internet-Draft (November 02, 2019),\\ \url{https://tools.ietf.org/html/draft-irtf-cfrg-hash-to-curve-05} \bibitem{FreRuc1994} G.~Frey and H.-G.~Rück, \emph{A remark concerning m-divisibility and the discrete logarithm problem in the divisor class group of curves}, Mathematics of Computation, vol.~62, issue~206, pp.~865-874, 1994. \bibitem{FreMulRuc1999} G.~Frey, M.~Müller and H.-G.~Rück, \emph{The Tate pairing and the discrete logarithm applied to elliptic curve cryptosystems}, IEEE Transactions on Information Theory, vol.~45, issue~5, pp.~1717-1719, 1999. \bibitem{GalLamVan2001} R.~Gallant, J.~Lambert and S.~Vanstone, \emph{Faster Point Multiplication on Elliptic Curves with Efficient Endomorphisms}, Advances in Cryptology - CRYPTO 2001, Lecture Notes in Computer Science, vol.~2139, pp.~190-200, 2001. \bibitem{Gau2009} P.~Gaudry, \emph{Index calculus for abelian varieties of small dimension and the elliptic curve discrete logarithm problem}, Journal of Symbolic Computation, vol.~44, issue~12, pp.~1690-1702, 2009. \bibitem{GauHesSma2002} P.~Gaudry, F.~Hess and N.~Smart, \emph{Constructive and destructive facets of Weil descent on elliptic curves}, Journal of Cryptology, vol.~15, issue~1, pp.~19-46, 2002. \bibitem{GraMon1994} T.~Granlund and P.~Montgomery, \emph{Division by Invariant Integers using Multiplication}, ACM SIGPLAN Notices, vol.~29, issue~6, pp.~61-72, 1994. \bibitem{HaaLab2019} B.~Haase and B.~Labrique, \emph{AuCPace: Efficient verifier-based PAKE protocol tailored for the IIoT}, IACR Transactions on Cryptographic Hardware and Embedded Systems, 2019(2), pp.~1-48, 2019.
\bibitem{HanMenVan2003} D.~Hankerson, A.~Menezes and S.~Vanstone, \emph{Guide to Elliptic Curve Cryptography}, Springer-Verlag, 2003. \bibitem{Ica2009} T.~Icart, \emph{How to hash into elliptic curves}, Advances in Cryptology - CRYPTO 2009, Lecture Notes in Computer Science, vol.~5677, pp.~303-316, 2009. \bibitem{Fips202} Information Technology Laboratory, \emph{SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions}, National Institute of Standards and Technology, FIPS~202, 2015. \bibitem{ItoTsu1988} T.~Itoh and S.~Tsujii, \emph{A Fast Algorithm for Computing Multiplicative Inverses in $\GF(2^m)$ Using Normal Bases}, Information and Computation, vol.~78, pp.~171-177, 1988. \bibitem{EdDSArfc8032} S.~Josefsson and I.~Liusvaara, \emph{Edwards-Curve Digital Signature Algorithm (EdDSA)},\\ \url{https://tools.ietf.org/html/rfc8032} \bibitem{KarOfm1962} A.~Karatsuba and Y.~Ofman, \emph{Multiplication of Many-Digital Numbers by Automatic Computers}, Proceedings of the USSR Academy of Sciences, vol.~145, pp.~293-294, 1962. \bibitem{Curve25519rfc7748} A.~Langley, M.~Hamburg and S.~Turner, \emph{Elliptic Curves for Security},\\ \url{https://tools.ietf.org/html/rfc7748} \bibitem{LeNgu2012} D.-P.~Le and B.~Nguyen, \emph{Fast Point Quadrupling on Elliptic Curves}, Proceedings of the Third Symposium on Information and Communication Technology (SoICT '12), pp.~218-222, 2012. \bibitem{LonSic2014} P.~Longa and F.~Sica, \emph{Four-Dimensional Gallant-Lambert-Vanstone Scalar Multiplication}, Journal of Cryptology, vol.~27, issue~2, pp.~248-283, 2014. \bibitem{MoneroBug2017} luigi1111 and R.~Spagni, \emph{Disclosure of a Major Bug in CryptoNote Based Currencies},\\ \url{https://www.getmonero.org/2017/05/17/disclosure-of-a-major-bug-in-cryptonote-based-currencies.html} \bibitem{MenOkaVan1993} A.~Menezes, T.~Okamoto and S.~Vanstone, \emph{Reducing elliptic curve logarithms to logarithms in a finite field}, IEEE Transactions on Information Theory, vol.~39, issue~5, pp.~1639-1646, 1993. \bibitem{SAMD20} Microchip, \emph{SAM D20 Family} (microcontroller datasheet),\\ \url{http://ww1.microchip.com/downloads/en/DeviceDoc/SAM_D20_%20Family_Datasheet_DS60001504C.pdf} \bibitem{Mih1997} P.~Mihăilescu, \emph{Optimal Galois field bases which are not normal}, presented at the Workshop on Fast Software Encryption in Haifa, 1997. \bibitem{Mon1985} P.~Montgomery, \emph{Modular multiplication without trial division}, Mathematics of Computation, vol.~44, pp.~519-521, 1985. \bibitem{ECTLSrfc8422} Y.~Nir, S.~Josefsson and M.~Pegourie-Gonnard, \emph{Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS) Versions 1.2 and Earlier},\\ \url{https://tools.ietf.org/html/rfc8422} \bibitem{NisMam2016} T.~Nishinaga and M.~Mambo, \emph{Implementation of µNaCl on 32-bit ARM Cortex-M0}, IEICE Transactions on Information and Systems, vol.~E99-D, issue~8, 2016. \bibitem{PARIGP} PARI/GP,\\ \url{https://pari.math.u-bordeaux.fr/} \bibitem{PodSomSchLocRos2018} D.~Poddebniak, J.~Somorovsky, S.~Schinzel, M.~Lochter and P.~Rösler, \emph{Attacking Deterministic Signature Schemes Using Fault Attacks}, 2018 IEEE European Symposium on Security and Privacy (EuroS\&P), pp.~338-352, 2018.
\bibitem{BearSSLctmul} T.~Pornin, \emph{Constant-Time Mul},\\ \url{https://www.bearssl.org/ctmul.html} \bibitem{DeterministicECDSArfc} T.~Pornin, \emph{Deterministic Usage of the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA)},\\ \url{https://tools.ietf.org/html/rfc6979} \bibitem{QEMU} QEMU, \emph{the FAST! processor emulator},\\ \url{https://www.qemu.org/} \bibitem{RenCosBat2015} J.~Renes, C.~Costello and L.~Batina, \emph{Complete addition formulas for prime order elliptic curves},\\ Advances in Cryptology - EUROCRYPT 2016, Lecture Notes in Computer Science, vol.~9665, pp.~403-428, 2016. \bibitem{SchSpr2019} P.~Schwabe and D.~Sprenkels, \emph{The complete cost of cofactor $h = 1$},\\ Progress in Cryptology - INDOCRYPT 2019, Lecture Notes in Computer Science, vol.~11898, pp.~375-397, 2019. \bibitem{ShaWoe2006} A.~Shallue and C.~van de Woestijne, \emph{Construction of rational points on elliptic curves over finite fields}, Algorithmic Number Theory Symposium - ANTS 2006, Lecture Notes in Computer Science, vol.~4076, pp.~510-524, 2006. \bibitem{ThoKelLar1986} J.~Thomas, J.~Keller and G.~Larsen, \emph{The calculation of multiplicative inverses over $\GF(p)$ efficiently where $p$ is a Mersenne prime}, IEEE Transactions on Computers, vol.~35, issue~5, pp.~478-482, 1986. \bibitem{Ula2007} M.~Ulas, \emph{Rational Points on Certain Hyperelliptic Curves over Finite Fields}, Bulletin of the Polish Academy of Sciences - Mathematics, vol.~55, issue~2, pp.~97-104, 2007. \bibitem{ZhaLinZhaZhoGao2018} W.~Zhang, D.~Lin, H.~Zhang, X.~Zhou and Y.~Gao, \emph{A Lightweight FourQ Primitive on ARM Cortex-M0}, 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications / 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), 2018. \end{thebibliography} \appendix \section{Unused or Failed Ideas}\label{sec:unused} It is uncommon in scientific articles to describe failures. However, we feel that the ideas described in this section, while not applicable or not interesting enough in our case, might be of interest in other contexts. \subsection{Signed Integers}\label{sec:unused-signed} All our computations with integers modulo $p$ used nonnegative integers. When such integers are represented as values in the $0$..$p-1$ range, we need the product of two such values to fit in the output operand size; when our largest multiplication opcode produces a 32-bit output, this limits the modulus $p$ to $2^{16}$. In particular, that implementation strategy cannot cope with the modulus $p = 65\,537$ (a Fermat prime), since multiplying $p-1$ with itself would yield $2^{32}$, truncated to $0$ because of the limited range of the multiplier output. This limitation can be worked around by using \emph{signed} integers. For instance, values modulo $65\,537$ can be represented by integers in the $-32\,768$..$+32\,768$ range. In that case, the maximum absolute value of a product of two such integers will be $2^{30}$, well within the representation limit of signed integers on 32-bit words ($-2^{31}$ to $2^{31}-1$). In our case, we want to mutualize modular reductions, meaning that we need to accumulate intermediate results without overflowing the representable range of values. With an unsigned representation of $\GF(p)$ and an extension polynomial $z^n-2$, this requires $(2n-1)p^2 < 2^{32}$ (here we only consider representability, not the specificities of the fast Montgomery reduction).
If using a \emph{signed} representation of $\GF(p)$, then values are only up to $\lceil p/2\rceil$ in absolute value; the representable range is halved ($2^{31}$) to account for the sign bit, leading to the new requirement: $(2n-1)p^2/4 < 2^{31}$. Generally speaking, using signed integers increases the possible range of prime moduli $p$ by a factor $\sqrt{2}$. We did not use signed integers for Curve9767, for the following reasons: \begin{itemize} \item Signed integers make some operations, in particular Montgomery reduction, but also combined additions or subtractions of two elements at a time, more complicated and expensive. \item Conversely, the main parameter for performance is the field extension degree $n$, and the larger range is not enough to allow us to obtain a field of at least 250 bits with a prime degree $n$ smaller than 19. The largest prime $p$ such that $33(\lceil p/2\rceil)^2 < 2^{31}$ and $z^{17}-2$ is irreducible in $\GF(p)[z]$ is $p = 15\,913$, and it leads to a field of size $p^{17} \approx 2^{237.28}$, which falls short of the target ``128-bit security level'', even taking into account the traditional allowance for small multiplicative constants in cost estimates. \end{itemize} Therefore, there was no benefit to using a signed representation in our case. The technique may still be useful in other contexts, in particular when working with Fermat primes, such as 17, 257 or $65\,537$. The internal representation range of values can even be slightly extended to allow for easier and faster reduction. For instance, the following routine computes a multiplication of two integers modulo $65\,537$:
\begin{verbatim}
int32_t mul_mod_65537(int32_t x, int32_t y)
{
    x *= y;
    x = (x & 0xFFFF) - (x >> 16);
    x += 32767;
    x = (x & 0xFFFF) - (x >> 16);
    return x - 32767;
}
\end{verbatim}
If the two inputs are in the $-46\,340$ to $+46\,340$ range, then the intermediate product will fit in the representable range (no overflow); then the first reduction step brings it down to the $-32\,767$ to $+98\,302$ range. With the addition of the $32\,767$ constant, the range becomes $0$..$+131\,069$, and the second reduction step brings it down to $-1$..$+65\,535$. The final subtraction of $32\,767$ (compensating the addition of the same constant two lines before) makes the final range $-32\,768$..$+32\,768$, i.e. fully reduced. \subsection{Towers of Fields}\label{sec:unused-towers} A preliminary idea for this work was to use $\GF(p^n)$ with $p$ being a prime with easy reduction, and $n = 2^m$ a power of two. In particular, one could take $p = 65\,537$, and $n = 16$ (using the ``signed integer'' representation detailed in the previous section). The field $\GF(p^{16})$ can be defined as a quadratic extension of $\GF(p^8)$, itself a quadratic extension of $\GF(p^4)$, and so on. In $\GF(p)$, we can choose a quadratic non-residue $d_0$. Then, we recursively define $\GF(p^{2^{i+1}})$ as involving $d_{i+1}$, a formal square root of $d_{i}$. Operations in such a tower of fields are inexpensive; there are natural analogs to Karatsuba multiplication. Inversion is efficient: \begin{equation*} \frac{1}{u_0 + d_i u_1} = \frac{u_0 - d_i u_1}{u_0^2 - d_{i-1}u_1^2} \end{equation*} We can thus compute inversion in $\GF(p^{2^{i+1}})$ at the cost of two squarings, two multiplications and one inversion in $\GF(p^{2^i})$; this last operation then applies the same method recursively, down the tower. At the lowest level, only an inversion in $\GF(p)$ is required.
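For illustration, the sketch below applies the inversion formula at the lowest interesting level of such a tower, i.e. in $\GF(p^2)$ over $\GF(p)$ with $p = 65\,537$. The non-residue $d_0 = 3$ and the helper names are arbitrary choices for this example; for simplicity the sketch uses a plain unsigned representation modulo $p$ rather than the signed representation discussed above, and the base-field inversion simply uses Fermat's little theorem. Higher levels would apply the same pattern recursively, with the $\GF(p)$ routines replaced by those of the previous tower level.
\begin{verbatim}
#include <stdint.h>

#define P   65537u  /* base field modulus (Fermat prime) */
#define D0  3u      /* a quadratic non-residue modulo P (example choice) */

static uint32_t gf_mul(uint32_t x, uint32_t y) {
    return (uint32_t)(((uint64_t)x * y) % P);
}
static uint32_t gf_sub(uint32_t x, uint32_t y) {
    return (x + P - y) % P;
}

/* Inversion in GF(p) through Fermat's little theorem: x^(p-2) mod p. */
static uint32_t gf_inv(uint32_t x) {
    uint32_t r = 1, e = P - 2;
    while (e != 0) {
        if (e & 1) {
            r = gf_mul(r, x);
        }
        x = gf_mul(x, x);
        e >>= 1;
    }
    return r;
}

/*
 * Inversion in GF(p^2) = GF(p)[d1] with d1^2 = D0:
 *   1/(u0 + d1*u1) = (u0 - d1*u1) / (u0^2 - D0*u1^2).
 * The denominator is nonzero for u != 0 since D0 is a non-residue.
 */
static void gf2_inv(uint32_t r[2], const uint32_t u[2]) {
    uint32_t n = gf_sub(gf_mul(u[0], u[0]), gf_mul(D0, gf_mul(u[1], u[1])));
    uint32_t ni = gf_inv(n);
    r[0] = gf_mul(u[0], ni);
    r[1] = gf_mul(gf_sub(0, u[1]), ni);
}
\end{verbatim}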
Exact performance depends on the implementation architecture (notably its abilities at parallel evaluation, with SIMD units), but getting the inversion cost down to only three times that of a multiplication is plausible. We abandoned that idea because curves based on field towers with quartic extension degrees seem vulnerable to Weil descent attacks; an attack with asymptotic cost $O(p^{3n/8})$ has been described\cite{Gau2009}. Using such a field for our curve would have required a complex argumentation to explain that the attack cost would still be too high in practice for a specific size; this would have been a ``hard sell''. Using field towers may still be useful in different contexts, e.g. to build universal hash functions for MAC-building purposes, especially with small Fermat primes such as $p = 257$, for some lightweight architectures. \subsection{Alternate Inverse Computations}\label{sec:unused-inverse} The Itoh-Tsujii inversion algorithm that we used in section~\ref{sec:field-ops-inv} is an optimization built on Fermat's little theorem. There are other strategies for computing inversions in a finite field; we present two here, which work, but have worse performance than Itoh-Tsujii. \paragraph{Binary GCD.} The binary GCD algorithm was introduced under the name \emph{plus-minus} by Brent and Kung\cite{BreKun1983}. Nominally for inverting integers against an odd modulus, it can be adapted to polynomials, and is in general a \emph{division} algorithm. Consider the problem of dividing $x$ by $y$ in the finite field $\GF(p^n)$, the finite field being defined with the extension polynomial $M$ (crucially, the coefficient of degree zero of $M$ is not equal to zero). We have $y\neq 0$. We consider four variables $a$, $b$, $u$ and $v$ which are polynomials in $\GF(p)[z]$ (all will have degree less than $n$, except for the starting value of $b$, which is equal to $M$), and an extra small integer $\delta$. The process is described in algorithm~\ref{alg:bingcd}. \begin{algorithm}[H] \caption{\ \ Division in $\GF(p^n)$ with binary GCD}\label{alg:bingcd} \begin{algorithmic}[1] \Require{$x, y \in \GF(p^n)$, $y\neq 0$, $\GF(p^n) = \GF(p)[z]/M$} \Ensure{$x/y$} \State{$a\gets y$, $b\gets M$, $u\gets x$, $v\gets 0$} \State{$\delta\gets 0$} \For{$1\leq i\leq 2n$} \If{$a_0 = 0$} \State{$(a, u) \gets (a/z, u/z \bmod M)$} \State{$\delta \gets \delta - 1$} \ElsIf{$\delta \geq 0$} \State{$(a, u) \gets ((b_0 a - a_0 b)/z, (b_0 u - a_0 v)/z \bmod M)$} \Else \State{$(b, v) \gets ((a_0 b - b_0 a)/z, (a_0 v - b_0 u)/z \bmod M)$} \State{$(a, b, u, v) \gets (b, a, v, u)$} \State{$\delta \gets -\delta$} \EndIf \EndFor \State{\Return $v/b_0$} \end{algorithmic} \end{algorithm} The following invariants are maintained throughout the algorithm: \begin{itemize} \item $ax = uy \bmod M$ and $bx = vy \bmod M$. \item $b_0 \neq 0$. \item If the maximum possible size of $a$ is $n_a$ (i.e. the highest degree of a non-zero coefficient is at most $n_a-1$) and the maximum size of $b$ is $n_b$, then $\delta = n_a - n_b$, and every iteration decreases $n_a+n_b$ by 1. \end{itemize} The algorithm converges after a maximum of $2n$ iterations on $a = 0$ and $b$ a polynomial of degree 0; at that point, we have $b_0 x = vy \bmod M$, hence the result. Classical descriptions of this algorithm use a test on $a$ to stop when it reaches 0; here, we use a constant number of iterations to help with constant-time implementations.
In a constant-time implementation, each iteration involves reading and rewriting four polynomials ($a$, $b$, $u$ and $v$, with multipliers in $\GF(p)$). Some optimizations can be obtained with the following remarks: \begin{itemize} \item The decisions for $k$ consecutive iterations depend only on $\delta$ and the $k$ low degree coefficients of $a$ and $b$. It is possible to aggregate $k$ iterations working only on these values (which might fit all in registers) and mutualize the updates on $a$, $b$, $u$ and $v$ into a multiplication of each by polynomials of degree less than $k$ (with some divisions by $z^k$). \item If computing an inversion (i.e. $x = 1$) instead of a division, $u$ is initially small and some of the first iterations can be made slightly faster. \item In the last iteration, since we are interested only in $v$, we can avoid updating $a$, $b$ and $u$. \end{itemize} Nevertheless, our attempts at optimizing this algorithm did not yield a cost lower than 12 times the cost of a multiplication, hence not competitive with Itoh-Tsujii. \paragraph{Thomas-Keller-Larsen.} In 1986, Thomas, Keller and Larsen described different inversion algorithms for modular integers\cite{ThoKelLar1986}; their main algorithm was dedicated to Mersenne primes, but another one was more generic and can be adapted to polynomials when working modulo $M = z^n-c$. The main idea is to repeatedly multiply the value to invert by custom factors of increasing degree, each multiplication shrinking the value by one coefficient. Algorithm~\ref{alg:ThomasKellerLarsen} describes the process. \begin{algorithm}[H] \caption{\ \ Inversion in $\GF(p^n)$ with the Thomas-Keller-Larsen algorithm}\label{alg:ThomasKellerLarsen} \begin{algorithmic}[1] \Require{$y \in \GF(p^n)$, $y\neq 0$, $\GF(p^n) = \GF(p)[z]/(z^n-c)$} \Ensure{$1/y$} \State{$a\gets y$, $r\gets 1$} \For{$i = n-1$ down to $1$} \If{$a_i \neq 0$} \State{$q_{n-i} \gets 1/a_i$} \For{$j = n-1-i$ down to $0$} \State{$q_j \gets (1/a_i) \sum_{k=j+1}^{\text{min}(n-i,i+j)} q_k a_{i+j-k}$} \EndFor \State{$a\gets (qa \bmod z^i) + c$ (where $q = \sum_{j=0}^{n-i} q_j z^j$)} \State{$r\gets qr$} \EndIf \EndFor \State{\Return $r/a_0$} \end{algorithmic} \end{algorithm} The algorithm works on the following invariants: \begin{itemize} \item $a = ry$. \item At the entry of each iteration of the outer loop, the degree of $a$ is at most $i$; upon exit, it is at most $i-1$. \end{itemize} The polynomial $q$ which is computed in each loop iteration (when $a_i \neq 0$) is the unique polynomial such that $qa = z^n + t$ for a polynomial $t$ of degree at most $i-1$. In the finite field, we have $qa = t + c \bmod (z^n-c)$, hence multiplying $a$ by $q$ (and $r$ by $q$ too, to maintain the first invariant) yields $t+c$, of degree at most $i-1$. Since $t$ has degree less than $i$, it can be obtained by considering the product $qa$ modulo $z^i$. In a constant-time implementation, $q$ is always computed even if $a_i = 0$: the constant-time inversion of $0$ is assumed to yield some value, which we ignore, and a fixing step is added to avoid modifying $a$ and $r$ in such a case. This fixing step is only linear in the degree $n$, thus inexpensive relatively to the rest of the algorithm. We can avoid computing an inversion in $\GF(p)$ at each iteration by multiplying by $a_i^{n-i} q$ instead of $q$; however, this implies computing the powers of $a_i$ and saving them, increasing memory traffic. Depending on the implementation platform, this may decrease or increase overall cost.
The cost of computing each $q$ grows as $i$ decreases towards $n/2$, then shrinks again, because the degree of $a$ becomes smaller and smaller. However, the value $r$ is the product of $n-1$ polynomials $q$ of degrees $1$ to $n-1$, and computing it cannot really be made less expensive than the cost of $(n-1)/2$ multiplications in the field, making this algorithm less efficient than the Itoh-Tsujii method.
\end{document}
{ "alphanum_fraction": 0.7244885673, "avg_line_length": 48.9084582441, "ext": "tex", "hexsha": "64b353466ae3c41978a7a7efd5d104e7a94cdcf5", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-12-29T08:15:45.000Z", "max_forks_repo_forks_event_min_datetime": "2020-01-09T22:30:32.000Z", "max_forks_repo_head_hexsha": "58f46524005ad09bc706c2443b37d41284a0ea09", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pornin/curve9767", "max_forks_repo_path": "doc/curve9767.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58f46524005ad09bc706c2443b37d41284a0ea09", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pornin/curve9767", "max_issues_repo_path": "doc/curve9767.tex", "max_line_length": 127, "max_stars_count": 76, "max_stars_repo_head_hexsha": "58f46524005ad09bc706c2443b37d41284a0ea09", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pornin/curve9767", "max_stars_repo_path": "doc/curve9767.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-21T17:26:40.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-03T23:34:19.000Z", "num_tokens": 51713, "size": 182722 }
\subsection{Data} \label{subsec:data_logistic_regression}
For logistic regression and Support Vector Machines, we use the Wisconsin Breast Cancer Dataset\footnote{\url{https://www.kaggle.com/uciml/breast-cancer-wisconsin-data}}. This dataset contains measurements for breast cancer cases. There are two diagnosis classes in the dataset: benign and malignant. An overview of the dataset is given in the Jupyter notebook \href{https://github.com/am-kaiser/CompSci-Project-1/blob/main/regression_analysis/examples/logistic_regression_analysis.ipynb}{logistic\_regression\_analysis}, which can be found in the GitHub repository corresponding to this report. Based on this dataset we want to find a model which predicts the diagnosis, i.e. either benign or malignant. For the design matrix, we drop the columns id and diagnosis from the data. The id is not important for making predictions and the diagnosis is what we want to predict.
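This preprocessing can be sketched as follows; this is a minimal Python example, where the file name is a placeholder for the CSV downloaded from Kaggle and the 0/1 encoding of the diagnosis is our choice for illustration.
\begin{verbatim}
import pandas as pd

# placeholder path for the Kaggle CSV
data = pd.read_csv("breast-cancer-wisconsin-data.csv")

# target: 1 = malignant, 0 = benign
y = (data["diagnosis"] == "M").astype(int)

# design matrix: every column except id and diagnosis
X = data.drop(columns=["id", "diagnosis"])
\end{verbatim}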
{ "alphanum_fraction": 0.8196544276, "avg_line_length": 463, "ext": "tex", "hexsha": "010ec9f55b8733f4c365c1f20cb0cb59bc677863", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-11-17T10:51:25.000Z", "max_forks_repo_forks_event_min_datetime": "2021-11-17T10:51:25.000Z", "max_forks_repo_head_hexsha": "098363c47c9409d6ffce1d03a968b6f2265c5fcc", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "am-kaiser/CompSci-Project-1", "max_forks_repo_path": "documentation/report/sections/data_logistic_regression.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "098363c47c9409d6ffce1d03a968b6f2265c5fcc", "max_issues_repo_issues_event_max_datetime": "2021-12-16T19:51:18.000Z", "max_issues_repo_issues_event_min_datetime": "2021-11-01T08:32:11.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "am-kaiser/CompSci-Project-1", "max_issues_repo_path": "documentation/report/sections/data_logistic_regression.tex", "max_line_length": 868, "max_stars_count": null, "max_stars_repo_head_hexsha": "098363c47c9409d6ffce1d03a968b6f2265c5fcc", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "am-kaiser/CompSci-Project-1", "max_stars_repo_path": "documentation/report/sections/data_logistic_regression.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 201, "size": 926 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\eglabel{6}
\section{Example \theexamples: Estimation with IOV}
\label{sec:eg6}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Description}
In this example we will look at a more complex trial design and a correspondingly complex variability model. The model also includes categorical covariates, which is again something we have not encountered thus far. The example is based on example IOV1 from Monolix 4.1 (see \cite{Monolix4.1.4UserGuide:2012} for a detailed description) and features a cross-over design and inter-occasion variability (see section \ref{sec:variabilityModel}). As before we will go through the key elements of the model before we look at the \pharmml examples, but given the complex nature of the trial design we will describe that first and then move on to the model definition.

\begin{figure}[htb]
\centering
\includegraphics[width=0.7\linewidth]{TwoArmsThreeEpochs_withWashout.pdf}
\caption{Schematic representation of a crossover design with washout. The reader is referred to Figure \ref{fig:templateTrialDesign} for the colour code used to identify the elements of a trial. See tables \ref{fig:eg6:segmentCellArmEpoch} and \ref{fig:eg6:epochDef} for the detailed definition of segments, cells, arms, epochs and occasions in this example.}
\label{fig:TwoArmsThreeEpochs_withWashout}
\end{figure}

%\noindent
\begin{table}[h]
\begin{center}
\begin{tabular}{lrr}\toprule
Arm & \textbf{1} & \textbf{2} \\\midrule
Number of subjects & 33 & 33\\
Dose variable & \var{D} & \var{D} \\
Dosing Amount & 100 & 150 \\
Dose Units & $\mg$ & $\mg$ \\
Dose per kg & no & no \\
Dosing times (h) & [0 : 12 : 72] & [0 : 12 : 72] \\
\bottomrule
\end{tabular}
\end{center}
\caption{Arms overview with dosing specification.}
\label{tab:ArmOverview}
\end{table}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Trial Design}
The model features a basic crossover design (see Figure \ref{fig:TwoArmsThreeEpochs_withWashout}) with a washout period and inter-occasion variability (IOV). There are two treatments and the subjects are organised into two arms that start with a different treatment. In between the two treatments there is a washout period during which the drug is eliminated from each subject. In the model the treatments, treated as occasions, provide a second level of variability -- IOV \index{variability!IOV} (see section \ref{sec:variabilityModel}). This is summarised in Figure \ref{fig:eg6-IOV_2levels} (see also the listing in section \ref{eg6:variabilityModel}, showing relevant code within the element \xelem{VariabilityModel}). The model also uses covariates to describe the variability, and so the treatment, the sequence of treatments (i.e.\xspace treatments A, B or B, A) and the occasion itself are described in the covariate section below.

\begin{figure}[ht!]
\centering
\includegraphics[width=120mm]{IOV_2levels}
\caption{Two levels of variability -- inter-individual and inter-occasion within individual variability.}
\label{fig:eg6-IOV_2levels}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Covariate Model}
\label{eg6:covariates-defn}
As discussed above, all but the `Sex' covariate are used to capture the variability in the model; see Table \ref{tab:CovariatesOverview}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lrrrr}\toprule
& \textbf{Sex} &{\color{red}\textbf{Treat}}&{\color{mediumgreen}\textbf{TreatSeq}}&{\color{magenta}\textbf{Occasion}}\\\midrule
Type & Categorical & Categorical & Categorical & Categorical \\
Category Count & 2 & 2 & 2 & 2\\
Categories & F, M & A, B & A--B, B--A & 1, 2\\
Reference & F & A & A--B & 1\\
%Reference Probability & $14/36$ & 0.5 & 0.5 & 0.5\\
\bottomrule
\end{tabular}
\end{center}
\caption{Covariates overview.}
\label{tab:CovariatesOverview}
\end{table}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Parameter Model}
The parameter model includes random effects that represent the IIV and {\color{lightblue}IOV} levels of variability. It also relates the parameters to the covariates described above\footnote{To improve clarity we have colour coded the contributions of the different levels of variability and the different covariates.}
\begin{align}
\log(ka_{i}) &= \log(ka_{pop}) + {\color{mediumgreen}\beta_{ka,TreatSeq}}1_{TreatSeq_i=A-B} + \eta_{ka,i} \label{eqn:eg6-param-ka}\\
\begin{split}
\log(V_{ik}) &= \log(V_{pop}) + {\boldsymbol \beta_V}1_{S_i=F} + {\color{magenta}\beta_{V,OCC}} 1_{OCC_{ik}=1} \\
&\quad+ {\color{red}\beta_{V,Treat}}1_{Treat_{ik}=A} + {\color{mediumgreen}\beta_{V,TreatSeq}}1_{TreatSeq_i=A-B} \\
& \quad+ \eta_{V,i}^{(0)} + {\color{lightblue} \eta_{V,ik}^{(-1)} }
\end{split} \label{eqn:eg6-parameter-v}\\
\begin{split}
\log(\CL_{ik}) &= \log(\CL_{pop}) + {\boldsymbol \beta_{\CL}}1_{S_i=F} + {\color{magenta}\beta_{\CL,OCC}} 1_{OCC_{ik}=1}\\
&\quad + \eta_{\CL,i}^{(0)} + {\color{lightblue} \eta_{\CL,ik}^{(-1)} }
\end{split}\nonumber
\end{align}
where
\begin{gather*}
\eta_{ka,i}^{(0)} \sim \mathcal{N}(0, \omega_{ka}), \quad
\eta_{V,i}^{(0)} \sim \mathcal{N}(0, \omega_{V}), \quad
\eta_{\CL,i}^{(0)} \sim \mathcal{N}(0, \omega_{\CL}), \\
{\color{lightblue} \eta_{V,ik}^{(-1)} \sim \mathcal{N}(0,\gamma_V)}, \quad
{\color{lightblue} \eta_{\CL,ik}^{(-1)} \sim \mathcal{N}(0, \gamma_{\CL})}
\end{gather*}
The full variance-covariance matrix for our model is:
\begin{gather}
\Omega^{(0)} =
\begin{pmatrix}
\omega_{ka}^2 & 0 & 0 \\
& \omega_{V}^2 & 0 \\
& & \omega_{\CL}^2\\
\end{pmatrix}\label{eqn:eg6-covariance-mat}\\
\Omega^{(-1)} =
\begin{pmatrix}
0 & 0 & 0\\
& \gamma_{V}^2 & 0 \\
& & \gamma_{\CL}^2\\
\end{pmatrix}\label{eqn:eg6-gamma-mat}
\end{gather}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Structural model}
The model is first order absorption with linear elimination, with multiple dosing. It is equivalent to oral1\_1cpt\_kaVCl (model 8) from \cite[Appendix I]{Bertrand:2008}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Observation model}
We apply a residual error model to the output variable \var{C}.

%\noindent
\begin{center}
\begin{tabular*}{0.6\textwidth}{@{\extracolsep{\fill}} >{\bfseries}l l}\toprule
Output Variable & \textbf{\itshape C} \\\midrule
Observations Name & Concentration\\
Units & $\mg/l$ \\
Observations Type & Continuous \\
Residual Error Model & Combined \\
Error Model Parameters & $a = 0.1,\quad b=0.1$\\
\bottomrule
\end{tabular*}
\end{center}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Modelling Steps}
Compared to the last example, we define two tasks here:
\begin{itemize}
\item Estimation of the population parameters.
\item Estimation of the individual parameters.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Trial Design}
We summarise the dosing regimen and the organisation of the trial design below; see also Figure \ref{fig:TwoArmsThreeEpochs_withWashout}.

\begin{table}[htdp!]
\begin{center}
\begin{tabular}{cccccc}
\hline
Segment & Activity & Treatment & DoseTime & DoseSize & Target Variable \\
\hline
TA & OR1 & OR bolus & $0:12:72$ & 150 & Ac \\
TA & OR2 & OR bolus & $0:24:72$ & 100 & Ac \\
\hline
\end{tabular}
\end{center}
\caption{Segment/activity overview.}
\label{fig:eg6:segmentCellArmEpoch}
\end{table}

\begin{table}[htdp!]
\begin{center}
\begin{tabular}{cccc}
\hline
Epoch & Occasion & Start time & End time \\
\hline
Treatment Epoch & OCC1 & 0 & 180 \\
Washout & -- & 0 & 10 \\
Treatment Epoch & OCC2 & 0 & 180 \\
\hline
\end{tabular}
\end{center}
\caption{Epoch and occasion definition.}
\label{fig:eg6:epochDef}
\end{table}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Structure}
The implementation of the treatments (in \pharmml we use the \xelem{Activity} element) is different from the previous example; see Table \ref{fig:eg6:segmentCellArmEpoch} for the details. The difference is that now we have one dose administered at multiple dosing time points instead of a single time point. The following listing
\inputxml{exp6_dosingTimes.xml}
shows how one can describe this within the \xelem{DosingTimes} element using the \xelem{Sequence} structure, which defines the start/end times and the step size.

Table \ref{fig:eg6:epochDef} gives an overview of the \var{Epochs} and \var{Occasions} in this example. Here the occasions coincide with the epochs -- the start and end times are identical -- but this is not always the case: an occasion can span one or more epochs. The \var{Washout} epoch is also given start/end times, which is in fact redundant information (but required by the construction of an \var{Epoch}), as a \var{Washout} always implies a total reset of all drug amounts.

%\begin{listing}[ht!]
%\inputxml{exp6_structure_part3.xml}
%\caption{The implementation of the mapping of IOV to the trial design.}
%\label{exp6_structure_part3}
%\end{listing}

As discussed in section \ref{subsec:TrialStructure}, in the \xelem{Structure} block we encode the variability which is located below the subject level (see the hierarchy of random variability discussed in section \ref{sec:variabilityModel}). We call it the \textit{inter-occasion variability}, IOV. The following listing
\inputxml{exp6_structure_part3.xml}
shows how this is done. In this case the occasions coincide with the epochs so we use the \xelem{EpochRef} element. Alternatively, we could use the \xelem{Period} element to define explicitly the start and end times of the occasions, as shown in this listing:
\inputxml{exp6_structure_part4.xml}
This is of course very useful if the occasions do not coincide with the epochs, or if there are two or more occasions within one epoch. In this case we set the \var{Start} and \var{End} times to $0$ and $180$, respectively. These are exactly the same time points as are used in the epoch definition (see the first listing in section \ref{eg4_subsec:trialDesign} for how to encode epochs in the \xelem{Structure} definition).

%\begin{listing}[ht!]
%\inputxml{exp6_structure_part4.xml}
%\caption{Alternative implementation of the mapping of IOV to the trial design shown in previous listing.}
%\label{exp6_structure_part4}
%\end{listing}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Population}
We pick up where we left off in the \xelem{Structure}, implementing the hooks to the variability structure. The aspect we have not covered yet is related to IIV. The \xelem{Population} element is the place to define any subject-related variability and the levels above it. The following listing shows how this works
\inputxml{exp6_population_part0.xml}
Here we deal only with the IIV, so we are done with this aspect.

%\begin{listing}[ht!]
%\inputxml{exp6_population_part0.xml}
%\caption{The implementation of the IIV mapping.}
%\label{exp6_population_part0}
%\end{listing}

The next part of the \xelem{Population} block was discussed previously, with one exception. Besides the standard assignment of subjects to an \var{Arm} and providing information regarding \var{Sex}, we need to encode the information about \var{Treat}, i.e. the treatment type considered here as a covariate, which by definition varies in this cross-over design as the study progresses from \var{Epoch1} to \var{Epoch3}. To encode this we use the \textit{nested table} concept as described in section \ref{sec:dataset}. Here the child table is defined by using a \xelem{Table} element instead of the usual \xelem{Column} element and is given the identifier 'treat-tab'. Within the nested table definition another set of relevant columns is specified, \var{epoch} and \var{treat}. Next these nested tables are populated with data, as can be seen in the following listing
\inputxml{exp6_population_part2.xml}
here for \var{Arm1}. The listing
\inputxml{exp6_population_part2B.xml}
shows one data record for \var{Arm2}.

%\begin{listing}[ht!]
%\inputxml{exp6_population_part2.xml}
%\caption{The implementation of the nested table for time varying covariate, \var{Treat}, for \var{Arm1}.}
%\label{exp6_population_part2}
%\end{listing}

%\begin{listing}[ht!]
%\inputxml{exp6_population_part2B.xml}
%\caption{The implementation of the nested table for time varying covariate, \var{Treat}, for \var{Arm2}.}
%\label{exp6_population_part2B}
%\end{listing}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Variability Model}
\label{eg6:variabilityModel}
In this example the variability model is more complex than before, with IIV\index{variability!IIV} and IOV\index{variability!IOV} levels of variability, see Figure \ref{fig:eg6-IOV_2levels}. As you will see, in \pharmml the complexity comes later -- in the parameter model. At this point in the \pharmml document all we need to do is define the variability levels to be used in the rest of the document. You can see in the following listing
\inputxml{exp6_iov.xml}
that this is done simply by listing the variability levels using the \xelem{VariabilityLevel} element. There are three important points to note here:
\begin{enumerate}
\item There is a parent-child relationship between the levels of variability. The \var{Subject} level, referenced in the \pharmml with the attribute \xatt{symbId="indiv"}, is higher in the hierarchy and directly above the \var{Occasion} level, referenced with the attribute \xatt{symbId="iov1"}; this is exactly what is expressed using the \xelem{ParentLevel} in the listing above.
\item The name given to a level, using the \xatt{symbId} attribute, is \textbf{not} significant.
We used the names \var{iov1} and \var{indiv} to provide clarity in other parts of the example document.
\item The type of each variability level (e.g.,\xspace between-subject, inter-occasion, between-centre) is not defined here or in the Model Definition as a whole\footnote{N.B.,\xspace the numerical levels described in the variability model (section \ref{sec:variabilityModel}) are not used.}.
\end{enumerate}
So in this example the \pharmml document tells us that there are two variability levels and that the lowest level of variability is called ``\texttt{iov1}''\index{variability!IOV}\@. This may seem odd, but to simulate or estimate the model we do not need to know which level of variability is considered IIV and which IOV. We only need to know their level relative to each other. Of course it may be desirable to know this when exchanging a model, and we feel that this information can be provided by annotation of the \pharmml document.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Covariate Model}
The covariate model describes categorical covariates, listed in Table \ref{tab:CovariatesOverview}, \index{covariate!categorical}which we have not seen in the previous examples. Because this is an estimation example, no probabilities are provided and only the categories are defined, placed in the \xelem{Categorical} element. The implementation of each covariate then follows the same schema, which will be explained for the gender covariate \var{Sex}. There are obviously two categories the covariate can be associated with, \textit{F} or \textit{M}, which are encoded using the \xelem{Category} element followed by an optional \xelem{Name}. The following listing shows how this is done
\inputxml{exp6_covariates.xml}

%%% TODO
%%%In a later simulation example (example \egref{8}, section
%%%\ref{eg8:example8}) you will see how to assign probabilities to categorical covariates.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Parameter Model}
In example \egref{1} (section \ref{sec:eg1}) we showed you how to define an individual parameter in \pharmml and relate it to a continuous covariate. In this example we will show how \pharmml can be used to describe parameters that have multiple levels of variability and are related to categorical covariates\index{covariate!categorical}. In the following listing
\inputxml{exp6_ka.xml}
we show the definition of parameter \var{ka}, which corresponds to (\ref{eqn:eg6-param-ka}). You should be familiar with this structure by now, but you should take note of the \xelem{Category} element within the \xelem{FixedEffect} element. We use this to tell \pharmml that this fixed effect is related to the ``\texttt{AB}'' category of the \var{TreatSeq} covariate. This is equivalent to the expression $\beta_{ka,TreatSeq}1_{TreatSeq_i=A-B}$ in (\ref{eqn:eg6-param-ka}). Note that it is possible to do this more than once, for example if the covariate has more than two categories. Parameter \var{ka} has only one level of variability, but the listings
\inputxml{exp6_V_part1.xml}
and
\inputxml{exp6_V_part2.xml}
show how we describe parameter \var{V} with both IIV and IOV levels of variability. Very simply, we add a \xelem{RandomVariable} for each level of variability and use the \xatt{symbIdRef} attribute in the \xelem{RandomEffects} element to map the random effect to the appropriate variability model as defined at the beginning of the \xelem{ModelDefinition} element.
Thus \var{eta\_V} and \var{kappa\_V} correspond to the random effects $\eta^{(0)}_{V,i}$ and $\eta^{(-1)}_{V,ik}$ in (\ref{eqn:eg6-parameter-v}). This parameter is related to all four covariates, but we only show the \var{Sex} covariate. The others are defined in a very similar manner, as all the covariates in this model contain just 2 categories.
%%% TODO
We will not show parameter \var{Cl} as it does not illustrate any new concepts, nor are any of the random effects in the model correlated. This does not mean there is no covariance matrix defined within the \pharmml document. There is. The matrices in (\ref{eqn:eg6-covariance-mat}) and (\ref{eqn:eg6-gamma-mat}) are implicitly defined because all the random effects follow a normal distribution and we can deduce the diagonal of each matrix at each level of variability from the definition of each random effect.

\subsection{Covered in previous examples}
The remaining elements of this example to be encoded in \pharmml are nearly identical to those described before, such as \xelem{EstimationStep} and \xelem{StepDepend\-encies} within the\\ \xelem{ModellingSteps} block, and will not be discussed here.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\eglabel{4}
%\section{Example \theexamples: Simulation with IOV}
%\label{sec:eg4}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\eglabel{5}
\section{Example \theexamples: Estimation with individual dosing}
\label{sec:Ribba}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Description}
This example is based on \cite{Ribba:2012uq} and deals with a mathematical model describing the inhibition of the tumour growth of low-grade glioma treated with chemotherapy. Although the previous estimation examples were complex enough to illustrate the most important aspects of the current \pharmml specification, we briefly discuss this example due to its role as a use case. It also illustrates a new feature of the language, namely that we can encode patient-specific administration scenarios.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Trial design}
We will start with the definition of \xelem{Structure} and \xelem{Population}. The next language element, \xelem{IndividualDosing}, is, as mentioned above, new but easy to understand.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Structure}
Figure \ref{fig:1Arm1Epoch_RibbaDesign} shows the design structure of this example, consisting of one arm and one epoch, meaning there is one treatment type 'IV' for all patients. As explained in section \ref{sec:CTS}, the design element \xelem{Cell} comprises the essential elements specifying the information about the arm, epoch and segment/activities. \xelem{Segment} contains the treatment definition, here an IV bolus administration, defined in the \xelem{Activity} element. Figure \ref{fig:cellHierarchy_Ribba} shows the general relationship of these elements (left) and how it applies to the current example (right). See the following listing
\inputxml{Ribba_structure.xml}
%\caption{Defining \textit{Structure} of the example, i.e. \textit{Epoch}, \textit{Arm}, \textit{Cell} and \textit{Segment}.
%\textit{Segment} contains \textit{Activity} definition, here a bolus administration.}
%\label{lst:Ribba_structure}
%\end{listing}
for the PharmML implementation.

\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\linewidth]{pics/designPattern_1Arm1Epoch_Ribba}
\caption{Design overview: single arm design.}
\label{fig:1Arm1Epoch_RibbaDesign}
\end{figure}

\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\linewidth]{pics/cellHierarchy_Ribba}
\caption{General cell hierarchy (left): the root of the trial design structure hierarchy is the 'Cell', which can contain one 'Segment', one 'Epoch' and multiple 'Arms'; the 'Segment' element can have multiple child elements, the 'Activities', e.g. treatments or a washout. (Right) An example of how this is applied in \cite{Ribba:2012uq}.}
\label{fig:cellHierarchy_Ribba}
\end{figure}

\begin{table}[htdp!]
\begin{center}
\begin{tabular}{cccccc}
\hline
Segment & Activity & Treatment & DoseTime & DoseSize & Target Variable \\
\hline
TA & bolusIV & IV bolus & individual & 1 & C \\
\hline
\end{tabular}
\end{center}
\caption{Segment/activity overview.}
\label{tab:segementActivity_Ribba}
\end{table}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Population}
In the next step, the \textit{Population} is defined, i.e. the attributes of the individuals in the study. This means creating an individual template with columns for an identifier, arm and repetition and then populating the table with appropriate data. As no covariates are used here, the \textit{Population} description reduces to the assignment of the subjects to the single study arm, \textit{Arm1}. As a shorthand we use the \textit{repetition} method by defining the column 'rep', as can be seen in the following listing
\inputxml{Ribba_population.xml}
The identifiers, ID, created here are unique and will be used to refer to specific subjects in the subsequent \xelem{IndividualDosing} structure element described in the following section.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Individual Dosing}
\label{subsubsec:Ribba_indivDosing}
This model utilises the idea of the so-called K-PD model, meaning that the rate of drug entry is relevant but not its absolute value. Such models often assume, as is the case here, that the dose is equal to 1 for all subjects and dosing events, see Table \ref{tab:Ribba_dataSet}.

\begin{table}[htdp]
\begin{center}
\begin{tabular}{rrrr | rrrr | rrrr}\toprule
ID&TIME&DV&DOSE & ID&TIME&DV&DOSE& ID&TIME&DV&DOSE \\\midrule
1&0&.&.& 1&116.23&72.04&.& 20&13.4&.&1\\
1&3.43&45.7&.& 1&121.87&90.16&.& 20&17.13&42.62&.\\
1&5.3&48.03&.& $\dots$ &$\dots$ &$\dots$ & $\dots$& 21&0&.&.\\
1&42.13&71.34&.& $\dots$ &$\dots$ &$\dots$ & $\dots$& 21&1.5&.&1\\
1&52.63&79.3&.& 20&0&48.61&.& 21&3.17&.&1\\
1&54.57&.&1 & 20&4&.&1& 21&4.85&.&1\\
1&57.53&72.3&.& 20&5.88&.&1& 21&6.52&.&1\\
1&59.77&.&1 & 20&6.7&46.64&.& 21&8.19&.&1\\
1&63.3&72.07&.& 20&7.76&.&1& 21&9.77&72.35&.\\
1&68.97&70.24&.& 20&9.27&44.97&.& 21&9.87&.&1\\
1&76.53&66.81&.& 20&9.64&.&1& 21&14.23&66.96&.\\
1&94.53&60.48&.& 20&11.52&.&1& 21&18.13&56.79&.\\
1&106.1&62&.& 20&13.23&42.96&.& 21&23.9&60.06&.\\\bottomrule
\end{tabular}
\end{center}
\caption{Data used in \cite{Ribba:2012uq}, an excerpt from the experimental data set in NONMEM format. The columns are: the identifier, ID, the time of measurements and dosing events, TIME, the dependent variable, DV, which stands for \var{PSTAR} -- the total tumour size -- and the dose, DOSE.
As is common for K-PD models, the dose is equal to 1 for all subjects and dosing events.}
\label{tab:Ribba_dataSet}
\end{table}%

The element \textit{IndividualDosing} is used to implement all such subject-specific dosing events. First we have to associate the data that follow with an appropriate activity; this is done by referring to 'bolusIV', which was defined previously in \xelem{Structure}, as shown in the following listing
\inputxml{Ribba_individualDosing.xml}
Next we map the subject's identifier \var{ID} to that created in the population definition. Finally a data set template is defined using the \xelem{Definition} element, i.e. the columns \var{ID}, \var{TIME} and \var{DOSE}. Then the table is populated with subject-specific values, as shown here for subjects 1, 2 and 21.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Structural model definition}
The following ODE system is defined:
\begin{align*}
\frac{dC}{dt} &= -\textit{KDE} \times C \nonumber \\
\frac{dP}{dt} &= \lambda_P \times P \Big( 1 - \frac{P^\star}{K} \Big) + k_{\textit{QPP}} \times Q_P - k_{\textit{PQ}} \times P - \gamma \times C \times \textit{KDE} \times P \nonumber \\
\frac{dQ}{dt} &= k_{\textit{PQ}}\times P - \gamma \times C\times \textit{KDE}\times Q \nonumber \\
\frac{dQ_P}{dt} &= \gamma \times C \times \textit{KDE} \times Q - k_{\textit{QPP}} \times Q_P - \delta_{\textit{QP}} \times Q_P \nonumber \\
\nonumber \\
P^{\star} &= P + Q + Q_P \nonumber
\end{align*}
with initial conditions
\begin{align*}
C(t=0) = 1; \quad P(t=0) = P_0; \quad Q(t=0) = Q_0; \quad Q_P(t=0) = 0. \nonumber
\end{align*}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Defining initial conditions}
This example differs from the previous ones. It requires, in addition to the model parameters, the estimation of the initial conditions of two tumour-growth-related variables. Moreover, inter-individual variability is assumed for these variables. The value of $Q_P(t=0)=Q_{P_0}$ is fixed to $0$, but the values of $P(t=0)=P_0$ and $Q(t=0)=Q_0$ are allowed to vary according to a log-normal distribution; see the following listing
\inputxml{Ribba_initialConditionsDef.xml}
where the definition of the distribution for the initial condition $P_0$ is shown.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Modelling steps}
This requires the specification of the following items: \textit{EstimationStep} and \textit{StepDependencies}. These have been described in detail in previous examples and will be skipped here.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{table}[htdp!]
%\begin{center}
%\begin{tabular}{cc}
%\hline
%Arm & N \\
%\hline
%Arm 1 & 21\\
%\hline
%\end{tabular}
%\end{center}
%\label{default}
%\caption{Arm definition}
%\end{table}%
%
%
%\begin{table}[htdp!]
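As a purely illustrative aside, the structural model above can be integrated numerically as follows. This is a minimal Python/SciPy sketch, not part of the \pharmml encoding: the parameter values are placeholders rather than estimates from \cite{Ribba:2012uq}, and the subject-specific dose administrations (which add the unit dose to $C$ at the individual dosing times) are omitted for brevity.
\begin{verbatim}
from scipy.integrate import solve_ivp

# Placeholder parameter values, for illustration only
KDE, lambda_P, K = 0.3, 0.12, 100.0
k_QPP, k_PQ, gamma, delta_QP = 0.004, 0.02, 0.7, 0.01
P0, Q0 = 5.0, 40.0

def rhs(t, y):
    C, P, Q, QP = y
    Pstar = P + Q + QP                      # total tumour size P*
    dC  = -KDE * C
    dP  = (lambda_P * P * (1 - Pstar / K) + k_QPP * QP
           - k_PQ * P - gamma * C * KDE * P)
    dQ  = k_PQ * P - gamma * C * KDE * Q
    dQP = gamma * C * KDE * Q - k_QPP * QP - delta_QP * QP
    return [dC, dP, dQ, dQP]

# Initial conditions as in the text: C(0)=1, P(0)=P0, Q(0)=Q0, Q_P(0)=0
# NOTE: dosing events (unit doses added to C at individual dosing
# times) are not applied in this sketch.
sol = solve_ivp(rhs, (0.0, 100.0), [1.0, P0, Q0, 0.0], max_step=1.0)
Pstar = sol.y[1] + sol.y[2] + sol.y[3]      # observed quantity PSTAR
\end{verbatim}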
%\begin{center} %\begin{tabular}{p{0.3\textwidth}} %%\hline %\hline %\small %Cell 1 %\begin{itemize} \itemsep1pt \parskip0pt \parsep0pt %\item %Arm 1 %\item %Epoch1 %\item %Segment TA %\begin{itemize} \itemsep1pt \parskip0pt \parsep0pt %\item %Activity -- IV %\end{itemize} %\end{itemize} \\ %\hline %\end{tabular} %\end{center} %\label{default} %\caption{Cell/Segment/Activity/Arm/Epoch overview} %\end{table}% % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[htdp] %\begin{center} %\begin{tabular}{lcccc} %\hline %Treatment & Administration Type & DoseTime & DoseSize & Target Variable \\ %\hline %Treatment A & OR bolus & \text{individual} & \text{individual} & C \\ %\hline %\end{tabular} %\end{center} %\label{default} %\caption{Dosing overview} %\end{table}% % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{table}[htdp] %\begin{center} %\begin{tabular}{cc} %\hline %Arm & N \\ %\hline %Arm 1 & 21\\ %\hline %\end{tabular} %\end{center} %\label{default} %\caption{Arm definition} %\end{table}% % %\eglabel{10} %\section{Example \theexamples: Higher levels of variability} % %When using the data-set to define the trial design then it is possible %to in this way to define many more levels of variability than are %shown here. To do this you simply need to define the variability %lebels you need using the \xelem{VariabilityLevel} elements at the %start of the model definition and then map your data-set to each of %these variability levels using the \xelem{UseVariablityLevel} element %in the mapping parts of the Estimation or Simulation step. %%% Local Variables: %%% mode: latex %%% TeX-master: "../moml-specification" %%% End: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Modelling Steps} % %By now you must be familiar with how we map data in a data-set %\index{data-set} to the %rest of the model. This model follows the approach that you have %seen, but we have not had a model with multiple levels of %variability or categorical covariates before so how \pharmml handles %these features needs some explanation. % %\begin{table}[htb] %\centering %\begin{tabular}{r r r r r r r r}\toprule %id&Ttme&Y&dose&occ&treat&sex&streat\\\midrule %1 & 0 & . & 4 & 1 & A & M & A-B\\ %1 & 0.25 & 2.1243964 & . & 1 & A & M & A-B\\ %1 & 0.5 & 4.308573 & . & 1 & A & M & A-B\\ %1 & 1 & 7.6059305 & . & 1 & A & M & A-B\\ %1 & 2 & 6.9678311 & . & 1 & A & M & A-B\\ %1 & 0 & . & 4 & 2 & B & M & A-B \\ %1 & 0.25 & 4.2049182 & . & 2 & B & M & A-B\\ %1 & 0.5 & 7.2508737 & . & 2 & B & M & A-B\\ %1 & 1 & 8.5792413 & . & 2 & B & M & A-B\\ %1 & 2 & 8.5689542 & . & 2 & B & M & A-B\\ %31 & 0 & . & 4 & 1 & B & F & B-A\\ %31 & 0.25 & 3.7113248 & . & 1 & B & F & B-A\\ %31 & 0.5 & 5.0077184 & . & 1 & B & F & B-A\\ %31 & 1 & 8.9052428 & . & 1 & B & F & B-A\\ %31 & 2 & 6.8447695 & . & 1 & B & F & B-A\\\bottomrule %\end{tabular} %\caption{An excerpt of the data-file used to define the IOV estimation.} %\label{tab:eg6-estdata} %\end{table} % %The data-set we need to map to the model is shown in table %\ref{tab:eg6-estdata}. Our first task is to inform \pharmml that the %\dscol{id} and \dscol{occ} columns define the variability of the %model. We do this using the \xelem{UseVariabilityLevel} we first saw %in example \egref{3} (section \ref{sec:eg3-simdata-mapping}). 
Listing %\ref{eg:eg6-ms-prt1} shows how we use this column twice, first to map %the \dscol{id} column to the \attval{indiv} variability level and %second to map the \dscol{occ} column to the \attval{occ1} variability %level. Each unique value in the respective column is taken to indicate %a variability node within each level of variability (see section %\ref{sec:variabilityModel}). % %\begin{listing}[htb] %\inputxml{eg6_ms_prt1.xml} %\caption{The data-set and variability mapping in example \egref{6}.} %\label{eg:eg6-ms-prt1} %\end{listing} % %In listing \ref{eg:eg6-ms-prt1} we also see how the categorical %covariate \var{Treat} is populated by the \dscol{treat} column of the %dataset. Note for this to work the values in the column must %correspond \emph{identically} to the names of the covariate's %categories. In this case the covariate \var{Treat} has category %\var{A} and \var{B} so this works. % %\begin{listing}[htb] %\inputxml{eg6_ms_prt2.xml} %\caption{The complex covariate mappings in example \egref{6}.} %\label{eg:eg6-ms-prt2} %\end{listing} % %Listing \ref{eg:eg6-ms-prt2} shows us how we deal with cases where the %content of the data-set does not match the category name. In this %cases the covariate is \var{TreatSeq}, which has two categories: %\var{AB} and \var{BA}. In the data-set these are encoded as %``\texttt{A-B}'' and ``\texttt{B-A}'' respectively. By now hopefully %you know what to expect. We use a conditional expression to identify %the values we are interested in. In this first mapping this is the %string ``\texttt{A-B}''. What is new in this mapping is the %\xelem{Assign} element. We use this to assign the string value %``texttt{AB}'' to the covariate, which matches the \var{AB} %category. We take the same approach to map the other category of %\var{TreatSeq} and the categories of the \var{Occ} covariate, although %the latter is not shown here. % % %\eglabel{7} %\section{Example \theexamples: Estimation with IOV, explicitly defined trial design} %\label{eg7:example7} % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Description} % %This example is identical to example \egref{6} except that the trial %design is defined explicitly within the \xelem{Design} element of the %\pharmml document. This trial design also introduces some concepts %that we have not seen in previous examples. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Trial Design} % %The trial design being represented here is described in detail in %example \egref{6} (see also figure \ref{fig:eg6-crossover-design} and %section \ref{sec:eg6}). We have omitted the %\xelem{Treatment} elements in listing \ref{eg:eg7-td-prt1} because we %want to focus on the parts we have not discussed before. Here the %\xelem{TreatmentEpoch} \attval{AEp} references treatment A, as in %previous examples, but it also associates an occasion with the epoch %sing the \xelem{Occasion} element. Note that the occasion is given an %identifier, \var{occ1}, and is assigned to a variability level, in %this case \attval{iov1}. We then use this epoch and the epoch %\attval{BEp} to define the cross-over study. % %\begin{listing}[htb] %\inputxml{eg7_td_prt1.xml} %\caption{The trial design of example \egref{7}.} %\label{eg:eg7-td-prt1} %\end{listing} % %Just Group \attval{a1} is shown in the listing and this defines the %group as being a sequence of epoch \attval{AEp}, then a washout, %followed by epoch \attval{BEp}. 
The ordering in the \xelem{Group} %element defines the sequence of events. Finally we give an identifier, %\var{i} for the individuals in this group and assign the individuals %to a variability level in the model definition. Note that the %variability level used for \xelem{Individual} must be one level of %variability higher than that used by the \xelem{Occasion} %element. At present this section of \pharmml can only describe trials designs %with at most IOV and IIV. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Modelling Steps} % %As in examples \egref{1} and \egref{5} we now need to map the data-set %to the trial design. In particular we need to identify the rows in the %data-set that define the groups (\attval{a1} and \attval{a2}) and the %occasions (\attval{occ1} and \attval{occ2}). Listing \ref{eg:eg7-ms} %shows how we achieve. Using the \xelem{UseVariabilityNode} element we %tell \pharmml which individual or occasion to use as we read the %data-set. Since, the structure of the study is defined explicitly, %this enables \pharmml that the trial design implicitly defined within %the data-set is consistent with it. % %\begin{listing}[htb] %\inputxml{eg7_ms.xml} %\caption{Mapping to the trial design in example \egref{7}.} %\label{eg:eg7-ms} %\end{listing} % %\eglabel{8} %\section{Example \theexamples: Simulation with IOV, explicitly defined trial design} %\label{eg8:example8} % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Description} % %This example is in many aspects identical to the previous one, e.g.\ the same structural model and covariates, %but few features of the trial design and modelling steps are new. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Trial Design} %Here, we define explicitly the start and end times for each epoch and each occasion, see listing \ref{eg:eg8_EpochOccasion_startEnd}. %In this case the time intervals are identical but this is rather an exception then a rule. Usually, one would have %multiple occasions within one epoch or the occasion would span over multiple epochs. % %\begin{listing}[htb] %\inputxml{eg8_EpochOccasion_startEnd.xml} %\caption{Defining start and end times for an epoch and occasion in example \egref{8}.} %\label{eg:eg8_EpochOccasion_startEnd} %\end{listing} % %Next listing, \ref{eg:eg8_groupSize} shows how to describe a group, here with three epochs, treatments and one washout in between, %and the number of subjects in this group. %As explained in the Trial Design chapter, section \ref{sec:CTS}, the 'Washout' epoch means complete reset of all %drug amounts and is defined without start and end times. % %\begin{listing}[htb] %\inputxml{eg8_groupSize.xml} %\caption{Defining group, i.e.\ treatment sequence and number of subjects in example \egref{8}.} %\label{eg:eg8_groupSize} %\end{listing} % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\subsection{Modelling steps} %The number of observations time points at which the observation variable is to be estimated, here \textit{Cc}, is defined %using a construct seen before in listing \ref{eg:eg5-trial-design-vals} in the definition of the dosing time sequence. 
%Here the time sequence is uniqely specified by providing the %\begin{itemize} %\item %start time, \textit{begin} %\item %number of equidistant intervals, \textit{repetitions} %\item %length of each interval, \textit{stepSize} %\end{itemize} % % %\begin{listing}[htb] %\inputxml{eg8_modellingSteps.xml} %\caption{Defining \textit{Observations} and \textit{StepDependencies} in example \egref{8}.} %\label{eg:eg8_modellingSteps} %\end{listing} % % %\eglabel{9} %\section{Example \theexamples: Simulation with IOV, trial design specified in a data file} % %This example is identical as the previous one except that the trial design is specified by the experimental data file. %The major consequences are that %\begin{itemize} %\item %the \xelem{Design} block is missing %\item %we have to define the external data source, see listing \ref{eg:eg9_dataSet} %\item %and instead of defining the \xelem{Observations} we have the \xelem{SimDataSet} block to interpret the data-set, %listing \ref{eg:eg9_simDataSet}, described in detail in section \ref{sec:eg3-simdata-mapping}. %\end{itemize} % % %\begin{listing}[htb] %\inputxml{eg9_dataSet.xml} %\caption{Defining external data source in example \egref{9}.} %\label{eg:eg9_dataSet} %\end{listing} % % %\begin{listing}[htb] %\inputxml{eg9_simDataSet.xml} %\caption{Defining \xelem{SimDataSet} block to interpret the data-set in example \egref{9}.} %\label{eg:eg9_simDataSet} %\end{listing} % %\clearpage %%\newpage
{ "alphanum_fraction": 0.6935732252, "avg_line_length": 43.243063263, "ext": "tex", "hexsha": "40924fb8f891476e0ba88585bd1101ce367fba46", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "pharmml/pharmml-spec", "max_forks_repo_path": "input/explanatory_examples_prt2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "pharmml/pharmml-spec", "max_issues_repo_path": "input/explanatory_examples_prt2.tex", "max_line_length": 539, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "pharmml/pharmml-spec", "max_stars_repo_path": "input/explanatory_examples_prt2.tex", "max_stars_repo_stars_event_max_datetime": "2018-01-26T13:17:54.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-26T13:17:54.000Z", "num_tokens": 10979, "size": 38962 }
\par \section{Data Structure} \label{section:InpMtx:dataStructure} \par \par The {\tt InpMtx} structure has the following fields. \begin{itemize} \item {\tt int coordType} : coordinate type. The following types are supported. \begin{itemize} \item {\tt INPMTX\_BY\_ROWS} --- row triples, the coordinates for $a_{i,j}$ is $(i,j)$. \item {\tt INPMTX\_BY\_COLUMNS} --- column triples, the coordinates for $a_{i,j}$ is $(j,i)$. \item {\tt INPMTX\_BY\_CHEVRONS} --- chevron triples, the coordinates for $a_{i,j}$ is $(\min(i,j), j-i)$. (Chevron $j$ contains $a_{j,j}$, $a_{j,k} \ne 0$ and $a_{k,j} \ne 0$ for $k > j$.) \item {\tt INPMTX\_CUSTOM} --- custom coordinates. \end{itemize} \item {\tt int storageMode} : mode of storage \begin{itemize} \item {\tt INPMTX\_RAW\_DATA} --- data is raw pairs or triples, two coordinates and (optionally) one or two double precision values. \item {\tt INPMTX\_SORTED} --- data is sorted and distinct triples, the primary key is the first coordinate, the secondary key is the second coordinate. \item {\tt INPMTX\_BY\_VECTORS} --- data is sorted and distinct vectors. All entries in a vector share something in common. For example, when {\tt coordType} is {\tt INPMTX\_BY\_ROWS}, {\tt INPMTX\_BY\_COLUMNS} or {\tt INPMTX\_BY\_CHEVRONS}, row vectors, column vectors, or chevron vectors are stored, respectively. When {\tt coordType} is {\tt INPMTX\_CUSTOM}, a custom type, entries in the same vector have something in common but it need not be a common row, column or chevron coordinate. \end{itemize} \item {\tt int inputMode} : mode of data input \begin{itemize} \item {\tt INPMTX\_INDICES\_ONLY} --- only indices are stored, not entries. \item {\tt SPOOLES\_REAL} --- indices and real entries are stored. \item {\tt SPOOLES\_COMPLEX} --- indices and complex entries are stored. \end{itemize} \item {\tt int maxnent} -- present maximum number of entries in the object. This quantity is initialized by the {\tt InpMtx\_init()} method, but will be changed as the object resizes itself as necessary. \item {\tt int nent} -- present number of entries in the object. This quantity changes as data is input or when the raw triples are sorted and compressed. \item {\tt double resizeMultiple} -- governs how the workspace grows as necessary. The default value is 1.25. \item {\tt IV ivec1IV} -- an {\tt IV} vector object of size {\tt mxnent} that holds first coordinates. \item {\tt IV ivec2IV} -- an {\tt IV} vector object of size {\tt mxnent} that holds second coordinates. \item {\tt DV dvecDV} -- a {\tt DV} vector object of size {\tt mxnent} that holds double precision entries. Used only when {\tt inputMode} is {\tt SPOOLES\_REAL} or {\tt SPOOLES\_COMPLEX}. \item {\tt int maxnvector} -- present maximum number of vectors. This quantity is initialized by the {\tt InpMtx\_init()} method, but will be changed as the object resizes itself as necessary. Used only when {\tt storageMode} is {\tt INPMTX\_BY\_VECTORS}. \item {\tt int nvector} -- present number of vectors. Used only when {\tt storageMode} is {\tt INPMTX\_BY\_VECTORS}. \item {\tt IV vecidsIV} -- an {\tt IV} vector object of size {\tt nvector} to hold the id of each vector. Used only when {\tt storageMode} is {\tt INPMTX\_BY\_VECTORS}. \item {\tt IV sizesIV} -- an {\tt IV} vector object of size {\tt nvector} to hold the size of each vector. Used only when {\tt storageMode} is {\tt INPMTX\_BY\_VECTORS}. 
\item {\tt IV offsetsIV} -- an {\tt IV} vector object of size {\tt nvector} to hold the offset of each vector into the {\tt ivec1IV}, {\tt ivec2IV} and {\tt dvecDV} vector objects. Used only when {\tt storageMode} is {\tt INPMTX\_BY\_VECTORS}. \end{itemize} \par One can query the attributes of the object with the following macros. \begin{itemize} \item {\tt INPMTX\_IS\_BY\_ROWS(mtx)} returns {\tt 1} if the entries are stored by rows, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_BY\_COLUMNS(mtx)} returns {\tt 1} if the entries are stored by columns, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_BY\_CHEVRONS(mtx)} returns {\tt 1} if the entries are stored by chevrons, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_BY\_CUSTOM(mtx)} returns {\tt 1} if the entries are stored by some custom coordinate, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_RAW\_DATA(mtx)} returns {\tt 1} if the entries are stored as unsorted pairs or triples, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_SORTED(mtx)} returns {\tt 1} if the entries are stored as sorted pairs or triples, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_BY\_VECTORS(mtx)} returns {\tt 1} if the entries are stored as vectors, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_INDICES\_ONLY(mtx)} returns {\tt 1} if the entries are not stored, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_REAL\_ENTRIES(mtx)} returns {\tt 1} if the entries are real, and {\tt 0} otherwise. \item {\tt INPMTX\_IS\_COMPLEX\_ENTRIES(mtx)} returns {\tt 1} if the entries are complex, and {\tt 0} otherwise. \end{itemize}
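To make the coordinate and storage conventions concrete, the following is a small conceptual illustration in Python; it mimics the triples and the vector grouping described above, but it does not use the SPOOLES C API and the variable names are ours.
\begin{verbatim}
from itertools import groupby

# three entries a_{i,j} of a small matrix, as (row i, column j, value)
entries = [(2, 0, 5.0), (0, 1, 3.0), (2, 2, 7.0)]

by_rows     = [(i, j, v) for (i, j, v) in entries]             # INPMTX_BY_ROWS
by_columns  = [(j, i, v) for (i, j, v) in entries]             # INPMTX_BY_COLUMNS
by_chevrons = [(min(i, j), j - i, v) for (i, j, v) in entries] # INPMTX_BY_CHEVRONS

# INPMTX_SORTED: primary key = first coordinate, secondary key = second
sorted_triples = sorted(by_rows, key=lambda t: (t[0], t[1]))

# INPMTX_BY_VECTORS: group sorted triples sharing the same first coordinate;
# the vector ids, sizes and offsets play the roles of vecidsIV, sizesIV
# and offsetsIV
vectors = {vid: [(c2, v) for (_, c2, v) in grp]
           for vid, grp in groupby(sorted_triples, key=lambda t: t[0])}
\end{verbatim}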
{ "alphanum_fraction": 0.7067980296, "avg_line_length": 39.0384615385, "ext": "tex", "hexsha": "34ce5b663cecd62b1402819d8a11175b64d4e3f1", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/InpMtx/doc/dataStructure.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/InpMtx/doc/dataStructure.tex", "max_line_length": 72, "max_stars_count": null, "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/InpMtx/doc/dataStructure.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1592, "size": 5075 }
\documentclass[a4paper, 11pt]{article} \usepackage{myfile} \title{\Large \textbf{\uppercase{PSet for Practice}} \\ [.5cm]\ \large\signature} \begin{document} \printtitle {\hypersetup{hidelinks} \tableofcontents }\newpage \section{China} \subsection{TST 2018} \prob{Let $p,q$ be positive reals with sum 1. Show that for any $n$-tuple of reals $(y_1,y_2,...,y_n)$, there exists an $n$-tuple of reals $(x_1,x_2,...,x_n)$ satisfying $$p\cdot \max\{x_i,x_{i+1}\} + q\cdot \min\{x_i,x_{i+1}\} = y_i$$for all $i=1,2,...,2017$, where $x_{2018}=x_1$.} \prob{A number $n$ is interesting if 2018 divides $d(n)$ (the number of positive divisors of $n$). Determine all positive integers $k$ such that there exists an infinite arithmetic progression with common difference $k$ whose terms are all interesting.} \prob{Circle $\omega$ is tangent to sides $AB$,$AC$ of triangle $ABC$ at $D$,$E$ respectively, such that $D\neq B$, $E\neq C$ and $BD+CE<BC$. $F$,$G$ lies on $BC$ such that $BF=BD$, $CG=CE$. Let $DG$ and $EF$ meet at $K$. $L$ lies on minor arc $DE$ of $\omega$, such that the tangent of $L$ to $\omega$ is parallel to $BC$. Prove that the incenter of $\triangle ABC$ lies on $KL$.} \prob{Functions $f,g:\mathbb{Z}\to\mathbb{Z}$ satisfy $$f(g(x)+y)=g(f(y)+x)$$for any integers $x,y$. If $f$ is bounded, prove that $g$ is periodic.} \prob{Given a positive integer $k$, call $n$ good if among $$\binom{n}{0},\binom{n}{1},\binom{n}{2},...,\binom{n}{n}$$at least $0.99n$ of them are divisible by $k$. Show that exists some positive integer $N$ such that among $1,2,...,N$, there are at least $0.99N$ good numbers.} \prob{Let $A_1$, $A_2$, $\cdots$, $A_m$ be $m$ subsets of a set of size $n$. Prove that $$ \sum_{i=1}^{m} \sum_{j=1}^{m}|A_i|\cdot |A_i \cap A_j|\geq \frac{1}{mn}\left(\sum_{i=1}^{m}|A_i|\right)^3.$$} \prob{Given a triangle $ABC$. $D$ is a moving point on the edge $BC$. Point $E$ and Point $F$ are on the edge $AB$ and $AC$, respectively, such that $BE=CD$ and $CF=BD$. The circumcircle of $\triangle BDE$ and $\triangle CDF$ intersects at another point $P$ other than $D$. Prove that there exists a fixed point $Q$, such that the length of $QP$ is constant.} \prob{An integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. The number of partitions of n is given by the partition function $p\left ( n \right )$. So $p\left ( 4 \right ) = 5$ . Determine all the positive integers so that $p\left ( n \right )+p\left ( n+4 \right )=p\left ( n+2 \right )+p\left ( n+3 \right )$.} \prob{Two positive integers $p,q \in \mathbf{Z}^{+}$ are given. There is a blackboard with $n$ positive integers written on it. A operation is to choose two same number $a,a$ written on the blackboard, and replace them with $a+p,a+q$. Determine the smallest $n$ so that such operation can go on infinitely.} \prob{Let $k, M$ be positive integers such that $k-1$ is not squarefree. 
Prove that there exist a positive real $\alpha$, such that $\lfloor \alpha\cdot k^n \rfloor$ and $M$ are coprime for any positive integer $n$.} \prob{Given positive integers $n, k$ such that $n\ge 4k$, find the minimal value $\lambda=\lambda(n,k)$ such that for any positive reals $a_1,a_2,\ldots,a_n$, we have \[ \sum\limits_{i=1}^{n} {\frac{{a}_{i}}{\sqrt{{a}_{i}^{2}+{a}_{{i}+{1}}^{2}+{\cdots}{{+}}{a}_{{i}{+}{k}}^{2}}}} \le \lambda\]Where $a_{n+i}=a_i,i=1,2,\ldots,k$} \prob{Let $M,a,b,r$ be non-negative integers with $a,r\ge 2$, and suppose there exists a function $f:\mathbb{Z}\rightarrow\mathbb{Z}$ satisfying the following conditions: (1) For all $n\in \mathbb{Z}$, $f^{(r)}(n)=an+b$ where $f^{(r)}$ denotes the composition of $r$ copies of $f$ (2) For all $n\ge M$, $f(n)\ge 0$ (3) For all $n>m>M$, $n-m|f(n)-f(m)$\\ Show that $a$ is a perfect $r$-th power.} \prob{Let $\omega_1,\omega_2$ be two non-intersecting circles, with circumcenters $O_1,O_2$ respectively, and radii $r_1,r_2$ respectively where $r_1 < r_2$. Let $AB,XY$ be the two internal common tangents of $\omega_1,\omega_2$, where $A,X$ lie on $\omega_1$, $B,Y$ lie on $\omega_2$. The circle with diameter $AB$ meets $\omega_1,\omega_2$ at $P$ and $Q$ respectively. If $$\angle AO_1P+\angle BO_2Q=180^{\circ},$$find the value of $\frac{PX}{QY}$ (in terms of $r_1,r_2$).} \prob{Let $G$ be a simple graph with 100 vertices such that for each vertice $u$, there exists a vertice $v \in N \left ( u \right )$ and $ N \left ( u \right ) \cap N \left ( v \right ) = \o $. Try to find the maximal possible number of edges in $G$. The $ N \left ( . \right )$ refers to the neighborhood.} \prob{Prove that there exists a constant $C>0$ such that $$H(a_1)+H(a_2)+\cdots+H(a_m)\leq C\sqrt{\sum_{i=1}^{m}i a_i}$$holds for arbitrary positive integer $m$ and any $m$ positive integer $a_1,a_2,\cdots,a_m$, where $$H(n)=\sum_{k=1}^{n}\frac{1}{k}.$$} \prob{Suppose $A_1,A_2,\cdots ,A_n \subseteq \left \{ 1,2,\cdots ,2018 \right \}$ and $\left | A_i \right |=2, i=1,2,\cdots ,n$, satisfying that $$A_i + A_j, \; 1 \le i \le j \le n ,$$are distinct from each other. $A + B = \left \{ a+b|a\in A,\,b\in B \right \}$. Determine the maximal value of $n$.} \prob{Let $ABC$ be a triangle with $\angle BAC > 90 ^{\circ}$, and let $O$ be its circumcenter and $\omega$ be its circumcircle. The tangent line of $\omega$ at $A$ intersects the tangent line of $\omega$ at $B$ and $C$ respectively at point $P$ and $Q$. Let $D,E$ be the feet of the altitudes from $P,Q$ onto $BC$, respectively. $F,G$ are two points on $\overline{PQ}$ different from $A$, so that $A,F,B,E$ and $A,G,C,D$ are both concyclic. Let M be the midpoint of $\overline{DE}$. Prove that $DF,OM,EG$ are concurrent.} \prob{Find all pairs of positive integers $(x, y)$ such that $(xy+1)(xy+x+2)$ be a perfect square .} \prob{Define the polymonial sequence $\left \{ f_n\left ( x \right ) \right \}_{n\ge 1}$ with $f_1\left ( x \right )=1$, $$f_{2n}\left ( x \right )=xf_n\left ( x \right ), \; f_{2n+1}\left ( x \right ) = f_n\left ( x \right )+ f_{n+1} \left ( x \right ), \; n\ge 1.$$Look for all the rational number $a$ which is a root of certain $f_n\left ( x \right ).$} \prob{There are $32$ students in the class with $10$ interesting group. Each group contains exactly $16$ students. For each couple of students, the square of the number of the groups which are only involved by just one of the two students is defined as their $interests-disparity$. 
Define $S$ as the sum of the $interests-disparity$ of all the couples, $\binom{32}{2}\left ( =\: 496 \right )$ ones in total. Determine the minimal possible value of $S$.} \prob{In isosceles $\triangle ABC$, $AB=AC$, points $D,E,F$ lie on segments $BC,AC,AB$ such that $DE\parallel AB$, $DF\parallel AC$. The circumcircle of $\triangle ABC$ $\omega_1$ and the circumcircle of $\triangle AEF$ $\omega_2$ intersect at $A,G$. Let $DE$ meet $\omega_2$ at $K\neq E$. Points $L,M$ lie on $\omega_1,\omega_2$ respectively such that $LG\perp KG, MG\perp CG$. Let $P,Q$ be the circumcenters of $\triangle DGL$ and $\triangle DGM$ respectively. Prove that $A,G,P,Q$ are concyclic.} \prob{Let $p$ be a prime and $k$ be a positive integer. Set $S$ contains all positive integers $a$ satisfying $1\le a \le p-1$, and there exists positive integer $x$ such that $x^k\equiv a \pmod p$. Suppose that $3\le |S| \le p-2$. Prove that the elements of $S$, when arranged in increasing order, does not form an arithmetic progression.} \prob{Suppose the real number $\lambda \in \left( 0,1\right),$ and let $n$ be a positive integer. Prove that the modulus of all the roots of the polynomial $$f\left ( x \right )=\sum_{k=0}^{n}\binom{n}{k}\lambda^{k\left ( n-k \right )}x^{k}$$are $1.$} \prob{Suppose $a_i, b_i, c_i, i=1,2,\cdots ,n$, are $3n$ real numbers in the interval $\left [ 0,1 \right ].$ Define $$S=\left \{ \left ( i,j,k \right ) |\, a_i+b_j+c_k<1 \right \}, \; \; T=\left \{ \left ( i,j,k \right ) |\, a_i+b_j+c_k>2 \right \}.$$Now we know that $\left | S \right |\ge 2018,\, \left | T \right |\ge 2018.$ Try to find the minimal possible value of $n$.} \subsection{TST 2017} \prob{Find out the maximum value of the numbers of edges of a solid regular octahedron that we can see from a point out of the regular octahedron.(We define we can see an edge $AB$ of the regular octahedron from point $P$ outside if and only if the intersection of non degenerate triangle $PAB$ and the solid regular octahedron is exactly edge $AB$.} \prob{Let $x>1$ ,$n$ be positive integer. Prove that$$\sum_{k=1}^{n}\frac{\{kx \}}{[kx]}<\sum_{k=1}^{n}\frac{1}{2k-1}$$Where $[kx ]$ be the integer part of $kx$ ,$\{kx \}$ be the decimal part of $kx$.} \prob{Suppose $S=\{1,2,3,...,2017\}$,for every subset $A$ of $S$,define a real number $f(A)\geq 0$ such that: $(1)$ For any $A,B\subset S$,$f(A\cup B)+f(A\cap B)\leq f(A)+f(B)$; $(2)$ For any $A\subset B\subset S$, $f(A)\leq f(B)$; $(3)$ For any $k,j\in S$,$$f(\{1,2,\ldots,k+1\})\geq f(\{1,2,\ldots,k\}\cup \{j\});$$$(4)$ For the empty set $\varnothing$, $f(\varnothing)=0$. 
Confirm that for any three-element subset $T$ of $S$,the inequality $$f(T)\leq \frac{27}{19}f(\{1,2,3\})$$holds.} \prob{Find out all the integer pairs $(m,n)$ such that there exist two monic polynomials $P(x)$ and $Q(x)$ ,with $\deg{P}=m$ and $\deg{Q}=n$,satisfy that $$P(Q(t))\not=Q(P(t))$$holds for any real number $t$.} \prob{In the non-isosceles triangle $ABC$,$D$ is the midpoint of side $BC$,$E$ is the midpoint of side $CA$,$F$ is the midpoint of side $AB$.The line(different from line $BC$) that is tangent to the inscribed circle of triangle $ABC$ and passing through point $D$ intersect line $EF$ at $X$.Define $Y,Z$ similarly.Prove that $X,Y,Z$ are collinear.} \prob{For a given positive integer $n$ and prime number $p$, find the minimum value of positive integer $m$ that satisfies the following property: for any polynomial $$f(x)=(x+a_1)(x+a_2)\ldots(x+a_n)$$($a_1,a_2,\ldots,a_n$ are positive integers), and for any non-negative integer $k$, there exists a non-negative integer $k'$ such that $$v_p(f(k))<v_p(f(k'))\leq v_p(f(k))+m.$$Note: for non-zero integer $N$,$v_p(N)$ is the largest non-zero integer $t$ that satisfies $p^t\mid N$.} \prob{Let $n$ be a positive integer. Let $D_n$ be the set of all divisors of $n$ and let $f(n)$ denote the smallest natural $m$ such that the elements of $D_n$ are pairwise distinct in mod $m$. Show that there exists a natural $N$ such that for all $n \geq N$, one has $f(n) \leq n^{0.01}$.} \prob{$2017$ engineers attend a conference. Any two engineers if they converse, converse with each other in either Chinese or English. No two engineers converse with each other more than once. It is known that within any four engineers, there was an even number of conversations and furthermore within this even number of conversations: i) At least one conversation is in Chinese. ii) Either no conversations are in English or the number of English conversations is at least that of Chinese conversations. Show that there exists $673$ engineers such that any two of them conversed with each other in Chinese.} \prob{Let $ABCD$ be a quadrilateral and let $l$ be a line. Let $l$ intersect the lines $AB,CD,BC,DA,AC,BD$ at points $X,X',Y,Y',Z,Z'$ respectively. Given that these six points on $l$ are in the order $X,Y,Z,X',Y',Z'$, show that the circles with diameter $XX',YY',ZZ'$ are coaxal.} \prob{An integer $n>1$ is given . Find the smallest positive number $m$ satisfying the following conditions: for any set $\{a,b\}$ $\subset \{1,2,\cdots,2n-1\}$ ,there are non-negative integers $ x, y$ ( not all zero) such that $2n|ax+by$ and $x+y\leq m.$} \prob{Let $ \varphi(x)$ be a cubic polynomial with integer coefficients. Given that $ \varphi(x)$ has have 3 distinct real roots $u,v,w $ and $u,v,w $ are not rational number. there are integers $ a, b,c$ such that $u=av^2+bv+c$. Prove that $b^2 -2b -4ac - 7$ is a square number .} \prob{Let $M$ be a subset of $\mathbb{R}$ such that the following conditions are satisfied: a) For any $x \in M, n \in \mathbb{Z}$, one has that $x+n \in \mathbb{M}$. b) For any $x \in M$, one has that $-x \in M$. c) Both $M$ and $\mathbb{R}$ \ $M$ contain an interval of length larger than $0$. For any real $x$, let $M(x) = \{ n \in \mathbb{Z}^{+} | nx \in M \}$. Show that if $\alpha,\beta$ are reals such that $M(\alpha) = M(\beta)$, then we must have one of $\alpha + \beta$ and $\alpha - \beta$ to be rational.} \prob{Let $n \geq 4$ be a natural and let $x_1,\ldots,x_n$ be non-negative reals such that $x_1 + \cdots + x_n = 1$. 
Determine the maximum value of $x_1x_2x_3 + x_2x_3x_4 + \cdots + x_nx_1x_2$.} \prob{Let $ABCD$ be a non-cyclic convex quadrilateral. The feet of perpendiculars from $A$ to $BC,BD,CD$ are $P,Q,R$ respectively, where $P,Q$ lie on segments $BC,BD$ and $R$ lies on $CD$ extended. The feet of perpendiculars from $D$ to $AC,BC,AB$ are $X,Y,Z$ respectively, where $X,Y$ lie on segments $AC,BC$ and $Z$ lies on $BA$ extended. Let the orthocenter of $\triangle ABD$ be $H$. Prove that the common chord of circumcircles of $\triangle PQR$ and $\triangle XYZ$ bisects $BH$.} \prob{Let $X$ be a set of $100$ elements. Find the smallest possible $n$ satisfying the following condition: Given a sequence of $n$ subsets of $X$, $A_1,A_2,\ldots,A_n$, there exists $1 \leq i < j < k \leq n$ such that $$A_i \subseteq A_j \subseteq A_k \text{ or } A_i \supseteq A_j \supseteq A_k.$$} \prob{Show that there exists a degree $58$ monic polynomial $$P(x) = x^{58} + a_1x^{57} + \cdots + a_{58}$$such that $P(x)$ has exactly $29$ positive real roots and $29$ negative real roots and that $\log_{2017} |a_i|$ is a positive integer for all $1 \leq i \leq 58$.} \prob{Show that there exists a positive real $C$ such that for any naturals $H,N$ satisfying $H \geq 3, N \geq e^{CH}$, for any subset of $\{1,2,\ldots,N\}$ with size $\lceil \frac{CHN}{\ln N} \rceil$, one can find $H$ naturals in it such that the greatest common divisor of any two elements is the greatest common divisor of all $H$ elements.} \prob{Every cell of a $2017\times 2017$ grid is colored either black or white, such that every cell has at least one side in common with another cell of the same color. Let $V_1$ be the set of all black cells, $V_2$ be the set of all white cells. For set $V_i (i=1,2)$, if two cells share a common side, draw an edge with the centers of the two cells as endpoints, obtaining graphs $G_i$. If both $G_1$ and $G_2$ are connected paths (no cycles, no splits), prove that the center of the grid is one of the endpoints of $G_1$ or $G_2$.} \prob{Prove that :$$\sum_{k=0}^{58}C_{2017+k}^{58-k}C_{2075-k}^{k}=\sum_{p=0}^{29}C_{4091-2p}^{58-2p}$$} \prob{In $\varDelta{ABC}$,the excircle of $A$ is tangent to segment $BC$,line $AB$ and $AC$ at $E,D,F$ respectively.$EZ$ is the diameter of the circle.$B_1$ and $C_1$ are on $DF$, and $BB_1\perp{BC}$,$CC_1\perp{BC}$.Line $ZB_1,ZC_1$ intersect $BC$ at $X,Y$ respectively.Line $EZ$ and line $DF$ intersect at $H$,$ZK$ is perpendicular to $FD$ at $K$.If $H$ is the orthocenter of $\varDelta{XYZ}$,prove that:$H,K,X,Y$ are concyclic.} \prob{Find the numbers of ordered array $(x_1,...,x_{100})$ that satisfies the following conditions: ($i$)$x_1,...,x_{100}\in\{1,2,..,2017\}$; ($ii$)$2017|x_1+...+x_{100}$; ($iii$)$2017|x_1^2+...+x_{100}^2$.} \prob{Given integer $d>1,m$,prove that there exists integer $k>l>0$, such that $$(2^{2^k}+d,2^{2^l}+d)>m.$$} \prob{Given integer $m\geq2$,$x_1,...,x_m$ are non-negative real numbers,prove that:$$(m-1)^{m-1}(x_1^m+...+x_m^m)\geq(x_1+...+x_m)^m-m^mx_1...x_m$$and please find out when the equality holds.} \prob{A plane has no vertex of a regular dodecahedron on it,try to find out how many edges at most may the plane intersect the regular dodecahedron?} \prob{Given $n\ge 3$. consider a sequence $a_1,a_2,...,a_n$, if $(a_i,a_j,a_k)$ with i+k=2j (i<j<k) and $a_i+a_k\ne 2a_j$, we call such a triple a $NOT-AP$ triple. 
If a sequence has at least one $NOT-AP$ triple, find the least possible number of the $NOT-AP$ triple it contains.} \prob{Find the least positive number m such that for any polynimial f(x) with real coefficients, there is a polynimial g(x) with real coefficients (degree not greater than m) such that there exist 2017 distinct number $a_1,a_2,...,a_{2017}$ such that $g(a_i)=f(a_{i+1})$ for i=1,2,...,2017 where indices taken modulo 2017.} \prob{For a rational point (x,y), if xy is an integer that divided by 2 but not 3, color (x,y) red, if xy is an integer that divided by 3 but not 2, color (x,y) blue. Determine whether there is a line segment in the plane such that it contains exactly 2017 blue points and 58 red points.} \prob{Given a circle with radius 1 and 2 points C, D given on it. Given a constant l with $0<l\le 2$. Moving chord of the circle AB=l and ABCD is a non-degenerated convex quadrilateral. AC and BD intersects at P. Find the loci of the circumcenters of triangles ABP and BCP.} \prob{A(x,y), B(x,y), and C(x,y) are three homogeneous real-coefficient polynomials of x and y with degree 2, 3, and 4 respectively. we know that there is a real-coefficient polinimial R(x,y) such that $B(x,y)^2-4A(x,y)C(x,y)=-R(x,y)^2$. Proof that there exist 2 polynomials F(x,y,z) and G(x,y,z) such that $F(x,y,z)^2+G(x,y,z)^2=A(x,y)z^2+B(x,y)z+C(x,y)$ if for any x, y, z real numbers $A(x,y)z^2+B(x,y)z+C(x,y)\ge 0$} \prob{We call a graph with n vertices $k-flowing-chromatic$ if: 1. we can place a chess on each vertex and any two neighboring (connected by an edge) chesses have different colors. 2. we can choose a hamilton cycle $v_1,v_2,\cdots , v_n$, and move the chess on $v_i$ to $v_{i+1}$ with $i=1,2,\cdots ,n$ and $v_{n+1}=v_1$, such that any two neighboring chess also have different colors. 3. after some action of step 2 we can make all the chess reach each of the n vertices. Let T(G) denote the least number k such that G is k-flowing-chromatic. If such k does not exist, denote T(G)=0. denote $\chi (G)$ the chromatic number of G. Find all the positive number m such that there is a graph G with $\chi (G)\le m$ and $T(G)\ge 2^m$ without a cycle of length small than 2017.} \subsection{MO 2018} \prob{Let $a\le 1$ be a real number. Sequence $\{x_n\}$ satisfies $x_0=0, x_{n+1}= 1-a\cdot e^{x_n}$, for all $n\ge 1$, where $e$ is the natural logarithm. Prove that for any natural $n$, $x_n\ge 0$.} \prob{Points $D,E$ lie on segments $AB,AC$ of $\triangle ABC$ such that $DE\parallel BC$. Let $O_1,O_2$ be the circumcenters of $\triangle ABE, \triangle ACD$ respectively. Line $O_1O _2$ meets $AC$ at $P$, and $AB$ at $Q$. Let $O$ be the circumcenter of $\triangle APQ$, and $M$ be the intersection of $AO$ extended and $BC$. Prove that $M$ is the midpoint of $BC$.} \prob{Given a real sequence $\left \{ x_n \right \}_{n=1}^{\infty}$ with $x_1^2 = 1$. Prove that for each integer $n \ge 2$, $$\sum_{i|n}\sum_{j|n}\frac{x_ix_j}{\textup{lcm} \left ( i,j \right )} \ge \prod_{\mbox{\tiny$\begin{array}{c} p \: \textup{is prime} \\ p|n \end{array}$} }\left ( 1-\frac{1}{p} \right ). $$} \prob{There're $n$ students whose names are different from each other. Everyone has $n-1$ envelopes initially with the others' name and address written on them respectively. Everyone also has at least one greeting card with her name signed on it. 
Everyday precisely a student encloses a greeting card (which can be the one received before) with an envelope (the name on the card and the name on envelope cannot be the same) and post it to the appointed student by a same day delivery. Prove that when no one can post the greeting cards in this way any more: (i) Everyone still has at least one card; (ii) If there exist $k$ students $p_1, p_2, \cdots, p_k$ so that $p_i$ never post a card to $p_{i+1}$, where $i = 1,2, \cdots, k$ and $p_{k+1} = p_1$, then these $k$ students have prepared the same number of greeting cards initially.} \prob{Let $\omega \in \mathbb{C}$, and $\left | \omega \right | = 1$. Find the maximum length of $z = \left( \omega + 2 \right) ^3 \left( \omega - 3 \right)^2$.} \prob{Given $k \in \mathbb{N}^+$. A sequence of subset of the integer set $\mathbb{Z} \supseteq I_1 \supseteq I_2 \supseteq \cdots \supseteq I_k$ is called a $k-chain$ if for each $1 \le i \le k$ we have (i) $168 \in I_i$; (ii) $\forall x, y \in I_i$, we have $x-y \in I_i$. Determine the number of $k-chain$ in total.} \prob{Given $2018 \times 4$ grids and tint them with red and blue. So that each row and each column has the same number of red and blue grids, respectively. Suppose there're $M$ ways to tint the grids with the mentioned requirement. Determine $M \pmod {2018}$.} \prob{Let $I$ be the incenter of triangle $ABC$. The tangent point of $\odot I$ on $AB,AC$ is $D,E$, respectively. Let $BI \cap AC = F$, $CI \cap AB = G$, $DE \cap BI = M$, $DE \cap CI = N$, $DE \cap FG = P$, $BC \cap IP = Q$. Prove that $BC = 2MN$ is equivalent to $IQ = 2IP$.} \subsection{MO 2017} \end{document}
{ "alphanum_fraction": 0.6688223441, "avg_line_length": 84.6785714286, "ext": "tex", "hexsha": "cae58d60ad7e4845ea562759a6e545e080ec3dd3", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_path": "PSets/pracPSet.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_path": "PSets/pracPSet.tex", "max_line_length": 536, "max_stars_count": 48, "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_path": "PSets/pracPSet.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "num_tokens": 7391, "size": 21339 }
\section{Conclusion and Future Work}
\label{sec:conclusion}
Automatically completing playlists with tracks contained in the MPD dataset is a particularly difficult task due to the size of the dataset and the variety of playlists generated by numerous users with different tastes and behaviors, which brings great diversity. In this paper, we present the D2KLab recommender system, which implements an ensemble of multiple, differently optimized learning models combined with a Borda count strategy. Each model runs an RNN that exploits a wide range of playlist features such as artist, album, track, lyrics (used for the creative track) and title, complemented by the so-called Title2Rec, which takes only the title as input and is used as fall-back strategy when playlists do not contain any track. The approach proved to be robust in such a complex setting, demonstrating the effectiveness of learning models for automatic playlist completion.

The experimental analysis brought three points to further attention, namely the generation strategy, the complementarity of the learning models, and the computing time. The generation strategy has a great impact on the results: a recurrent decoding stage performs worse than a ranking strategy that weighs the output of each RNN of the encoding stage. The ensemble strategy aggregates the outputs of the different learning model runs by pivoting the generated rankings. This granted a noticeable increase in performance, so we plan to study the complementarity of the runs further and to build a learning model that automatically selects the best candidates. Finally, the computing time has been a crucial element of the experimental setup due to the training of the RNN learning models; we addressed it by creating randomly selected subsets of the MPD dataset of different sizes and by optimizing the learning models for the hardware at our disposal, which became another differentiating factor in shaping a well-performing submission.
{ "alphanum_fraction": 0.8263374486, "avg_line_length": 303.75, "ext": "tex", "hexsha": "c79792049e8985b9faa91f17d61091da948f2c37", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:09:29.000Z", "max_forks_repo_forks_event_min_datetime": "2018-12-11T03:03:06.000Z", "max_forks_repo_head_hexsha": "5cd47d1b9df2a2bccad2889ba1d570d5a8dd0f8d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "D2KLab/recsys18_challenge", "max_forks_repo_path": "paper/sections/conclusions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5cd47d1b9df2a2bccad2889ba1d570d5a8dd0f8d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "D2KLab/recsys18_challenge", "max_issues_repo_path": "paper/sections/conclusions.tex", "max_line_length": 1028, "max_stars_count": 3, "max_stars_repo_head_hexsha": "5cd47d1b9df2a2bccad2889ba1d570d5a8dd0f8d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "D2KLab/recsys18_challenge", "max_stars_repo_path": "paper/sections/conclusions.tex", "max_stars_repo_stars_event_max_datetime": "2018-12-29T15:56:45.000Z", "max_stars_repo_stars_event_min_datetime": "2018-11-09T14:04:04.000Z", "num_tokens": 440, "size": 2430 }
\documentclass{article} \usepackage{amssymb} \usepackage{comment} \usepackage{courier} \usepackage{fancyhdr} \usepackage{fancyvrb} \usepackage[T1]{fontenc} \usepackage[top=.75in, bottom=.75in, left=.75in,right=.75in]{geometry} \usepackage{graphicx} \usepackage{lastpage} \usepackage{listings} \lstset{basicstyle=\small\ttfamily} \usepackage{mdframed} \usepackage{parskip} \usepackage{ragged2e} \usepackage{soul} \usepackage{upquote} \usepackage{xcolor} % http://www.monperrus.net/martin/copy-pastable-ascii-characters-with-pdftex-pdflatex \lstset{ upquote=true, columns=fullflexible, literate={*}{{\char42}}1 {-}{{\char45}}1 {^}{{\char94}}1 } \lstset{ moredelim=**[is][\color{blue}\bf\small\ttfamily]{@}{@}, } % http://tex.stackexchange.com/questions/40863/parskip-inserts-extra-space-after-floats-and-listings \lstset{aboveskip=6pt plus 2pt minus 2pt, belowskip=-4pt plus 2pt minus 2pt} \usepackage[colorlinks,urlcolor={blue}]{hyperref} \begin{document} \fancyfoot[L]{\color{gray} C4CS -- W'16} \fancyfoot[R]{\color{gray} Revision 1.0} \fancyfoot[C]{\color{gray} \thepage~/~\pageref*{LastPage}} \pagestyle{fancyplain} \title{\textbf{Advanced Homework 13\\}} \author{Assigned: Friday, April 8} \date{\textbf{\color{red}{Due: Before 8pm Friday, April 19 (study day)}}} \maketitle \section*{Submission Instructions} To receive credit for this assignment you will need to stop by someone's office hours and demo your new skills. \section*{Taking IDEs Farther} In lecture we showed valgrind running in Xcode, we looked at online (text- based) IDEs, we saw IDEs that didn't even include a ``text editor.'' Your task is to push your IDE usage beyond what was done in the regular homework. Try 2 new things, and be ready to demo them to a course staff member. \begin{enumerate} \item Create an account with an online IDE service. Migrate a project there, and be sure to use source control (GitHub or GitLab). Do some non-trivial development, like finishing your current class project for another class, or making meaningful commits (locally) to a class or personal project. \item Build an app with 2+ hours of development time in a ``Visual IDE'' like Scratch or App Inventor. \item Set up 4 or 5 IFTTT recipes (the more related they are, the better). \item Trick out your desktop IDE (Xcode or Visual Studio) to extend or swap out current functionality. This would be like using g++ in the IDE, or make, or valgrind, gcov, eslint, etc. \end{enumerate} \emph{You are welcome to do this assignment either on your native operating system on your machine or inside the class VM, whichever helps you learn the most.} \end{document}
{ "alphanum_fraction": 0.7446176689, "avg_line_length": 34.1012658228, "ext": "tex", "hexsha": "d3781557057b0b39c9b056480b4d9f18241a4dc7", "lang": "TeX", "max_forks_count": 349, "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:38:21.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-06T04:13:55.000Z", "max_forks_repo_head_hexsha": "b7921a0f480d2e0f7747eea1662b24bd90fde500", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "yuqijin/c4cs-2.github.io", "max_forks_repo_path": "archive/w16/hw/c4cs-wk13-advanced.tex", "max_issues_count": 622, "max_issues_repo_head_hexsha": "b7921a0f480d2e0f7747eea1662b24bd90fde500", "max_issues_repo_issues_event_max_datetime": "2020-02-25T07:29:08.000Z", "max_issues_repo_issues_event_min_datetime": "2016-01-22T06:17:25.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "yuqijin/c4cs-2.github.io", "max_issues_repo_path": "archive/w16/hw/c4cs-wk13-advanced.tex", "max_line_length": 100, "max_stars_count": 49, "max_stars_repo_head_hexsha": "b7921a0f480d2e0f7747eea1662b24bd90fde500", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "yuqijin/c4cs-2.github.io", "max_stars_repo_path": "archive/w16/hw/c4cs-wk13-advanced.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-08T03:21:28.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-05T02:42:14.000Z", "num_tokens": 794, "size": 2694 }
\documentclass[12pt]{article} \input{physics1} \newcommand{\ap}{\mathrm{ap}} \newcommand{\peri}{\mathrm{peri}} \begin{document} \section*{NYU Physics I---Problem Set 12} Due Thursday 2018 November 29 at the beginning of lecture. \paragraph{\problemname~\theproblem:}\refstepcounter{problem}% What is the most expensive ingredient of a typical, traditional Thanksgiving dinner \emph{by weight} (that is, in dollars per ounce or per pound). Start with the turkey and show your work (that is, compare some ingredients). Don't forget the trace (that is, small in quantity) ingredients! What is the relevance of all this to world history? Keep it traditional---traditional food with traditional ingredients, like you could have cooked in 1850. You might want to discuss with someone who cooked a Thanksgiving dinner (or did the shopping for it). \paragraph{\problemname~\theproblem:}\refstepcounter{problem}% If all goes well in class on 2018-11-20, we will get a quadratic equation for the radii $r_\ap$ and $r_\peri$ of aphelion and perihelion. The argument goes like this: The total energy of an orbit can be written in terms of the angular momentum and the radial velocity \begin{eqnarray} E & = & \frac{1}{2}\,m\,v^2 - \frac{G\,M\,m}{r} \\ E & = & \frac{1}{2}\,m\,v_r^2 + \frac{1}{2}\,m\,v_\perp^2 - \frac{G\,M\,m}{r} \\ E & = & \frac{1}{2}\,m\,v_r^2 + \frac{L^2}{2\,m\,r^2} - \frac{G\,M\,m}{r} \label{foo} \end{eqnarray} where $E$ is the total energy of the orbit, $v_r$ is the radial component of the velocity, and $L$ is the angular momentum of the orbit. The radial velocity $v_r$ is exactly zero at aphelion and perihelion. So set it to zero in equation~(\ref{foo}), and solve the resulting quadratic equation! It will give two answers, which are $r_\ap$ and $r_\peri$. Use the definition of eccentricity \begin{equation} e \equiv \frac{r_\ap - r_\peri}{r_\ap + r_\peri} \end{equation} to figure out the relationship between eccentricity $e$ of an elliptical orbit and the total energy $E$ and the angular momentum $L$. \paragraph{\problemname~\theproblem:}\refstepcounter{problem}% \textsl{(a)} Sketch orbits of fixed semi-major axis but increasing eccentricity, from a circular orbit, to one that is close to radial (eccentricity close to unity). Make sure you show the location of the point around which the object is orbiting! \textsl{(b)} What is the transfer time for a radial plunge orbit from the radius of the Moon's orbit down to the surface of the Earth? Use the period of the Moon's orbit, the relevant Kepler's law, and the properties of the unit-eccentricity and circular orbits. \textsl{(c)} Look up the timeline of the Apollo~11 mission, especially the return to Earth. Do you see any issues there? What's your best explanation of what happened? \paragraph{\problemname~\theproblem:}\refstepcounter{problem}% \textsl{(a)} How fast do you have to move with respect to the Earth's surface to escape Earth's gravity? That is, what is escape velocity from the Earth. Calculate it yourself in terms of the radius $R$ of the Earth and the value of $g$ at the surface. Then give it also in $\mps$. \textsl{(b)} A spaceship of mass $m$ resting on the surface of the Earth is bound to the Earth but also to the Sun. If we make the naive (and close to correct) assumption that these energies just add, what is the total binding energy of the spaceship in the Solar System? 
This calculation can be confusing, because although you can assume the spaceship is stationary with respect to the Earth (so there is only gravitational potential energy with respect to the Earth), the spaceship is moving fast relative to the Sun (so there is both gravitational potential and kinetic energy with respect to the Sun). The best way to do the calculation is to just pick the Newtonian reference frame centered on the Sun, and compute the kinetic and potential energies in that frame. Now what is the escape velocity from the Solar System? Give your answer in $\mps$. \textsl{(c)} Look up the derivation of how a rocket accelerates. You should be able to find a rocket equation that relates the initial mass of the rocket+fuel, the final mass of the rocket after the fuel is spent, the speed at which the rocket ejects exhaust, and the final speed of the rocket. (Hint: The equation is exponential in a mass ratio.) If the rocket can eject exhaust at 10 times the speed of sound in air at STP (and that's optimistic!), what is the ratio of initial mass to final mass of a rocket that will leave the Solar System? What is the maximum fraction of the spaceship initial mass that can be used for payload---that is, for cabin, crew, and cargo? \textsl{(d)} Now imagine the spaceship is going to another planet just like the Earth. What fraction of the spaceship can be used for non-fuel payload in this case? The point is that it takes just as much velocity change to slow down at the end of the journey as it took to take off at the beginning, and that the end-of-flight fuel is part of the cargo that the ship has to take with it at launch. You should get that the payload fraction is (something like) the square of what you got in the previous part. \paragraph{Extra Problem (will not be graded for credit):}% What do you think the above problem means for interstellar travel? \paragraph{Extra Problem (will not be graded for credit):}% Make a spreadsheet integration that integrates a test-particle (low-mass) orbit in the central force law, and show that you do indeed get an elliptical orbit. You want to use a time step that is $<0.001$ of the period and go for $>1000$ timesteps if you want the integration to look good! This integration is harder than other integrations you have done in this class, because you have to project the force onto the $x$ and $y$ directions correctly. That is, you have to keep track of both $x$ and $y$ positions, velocities, and accelerations. Do the problem in the two-dimensional plane of the orbit. If you want feedback or get stuck, bring your intermediate work to Prof~Hogg for discussion. \end{document}
{ "alphanum_fraction": 0.7630363036, "avg_line_length": 48.0952380952, "ext": "tex", "hexsha": "cfdbdfd2a60a07000b0695b38a029beee0215819", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "davidwhogg/Physics1", "max_forks_repo_path": "tex/physics1_ps12.tex", "max_issues_count": 29, "max_issues_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_issues_repo_issues_event_max_datetime": "2019-01-29T22:47:25.000Z", "max_issues_repo_issues_event_min_datetime": "2016-10-07T19:48:57.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "davidwhogg/Physics1", "max_issues_repo_path": "tex/physics1_ps12.tex", "max_line_length": 90, "max_stars_count": 1, "max_stars_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "davidwhogg/Physics1", "max_stars_repo_path": "tex/physics1_ps12.tex", "max_stars_repo_stars_event_max_datetime": "2017-11-13T03:48:56.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-13T03:48:56.000Z", "num_tokens": 1601, "size": 6060 }
\input{../header_class} %---------- start document ---------- % \section{poly.ring -- polynomial rings}\linkedzero{poly.ring} \begin{itemize} \item {\bf Classes} \begin{itemize} \item \linkingone{poly.ring}{PolynomialRing} \item \linkingone{poly.ring}{RationalFunctionField} \item \linkingone{poly.ring}{PolynomialIdeal} \end{itemize} \end{itemize} \C \subsection{PolynomialRing -- ring of polynomials}\linkedone{poly.ring}{PolynomialRing} A class for uni-/multivariate polynomial rings. A subclass of \linkingone{ring}{CommutativeRing}. \initialize \func{PolynomialRing}{% \hiki{coeffring}{CommutativeRing},\ % \hikiopt{number\_of\_variables}{integer}{1}}{\out{PolynomialRing}}\\ \spacing \quad \param{coeffring} is the ring of coefficients. \param{number\_of\_variables} is the number of variables. If its value is greater than \(1\), the ring is for multivariate polynomials. \begin{at} \item[zero]\linkedtwo{poly.ring}{PolynomialRing}{zero}:\\ zero of the ring. \item[one]\linkedtwo{poly.ring}{PolynomialRing}{one}:\\ one of the ring. \end{at} \method \subsubsection{getInstance -- classmethod}\linkedtwo{poly.ring}{PolynomialRing}{getInstance} \func{getInstance}{% \hiki{coeffring}{CommutativeRing},\ % \hiki{number\_of\_variables}{integer}}{\out{PolynomialRing}}\\ \spacing \quad return the instance of polynomial ring with coefficient ring \param{coeffring} and number of variables \param{number\_of\_variables}. \subsubsection{getCoefficientRing}\linkedtwo{poly.ring}{PolynomialRing}{getCoefficientRing} \func{getCoefficientRing}{}{CommutativeRing} \subsubsection{getQuotientField}\linkedtwo{poly.ring}{PolynomialRing}{getQuotientField} \func{getQuotientField}{}{Field} \subsubsection{issubring}\linkedtwo{poly.ring}{PolynomialRing}{issubring} \func{issubring}{\hiki{other}{Ring}}{\out{bool}} \subsubsection{issuperring}\linkedtwo{poly.ring}{PolynomialRing}{issuperring} \func{issuperring}{\hiki{other}{Ring}}{\out{bool}} \subsubsection{getCharacteristic}\linkedtwo{poly.ring}{PolynomialRing}{getCharacteristic} \func{getCharacteristic}{}{\out{integer}} \subsubsection{createElement}\linkedtwo{poly.ring}{PolynomialRing}{createElement} \func{createElement}{seed}{\out{polynomial}}\\ \quad Return a polynomial. \param{seed} can be a polynomial, an element of coefficient ring, or any other data suited for the first argument of uni-/multi-variate polynomials. \subsubsection{gcd}\linkedtwo{poly.ring}{PolynomialRing}{gcd} \func{gcd}{a, b}{\out{polynomial}}\\ \quad Return the greatest common divisor of given polynomials (if possible). The polynomials must be in the polynomial ring. If the coefficient ring is a field, the result is monic. \subsubsection{isdomain}\linkedtwo{poly.ring}{PolynomialRing}{isdomain} \subsubsection{iseuclidean}\linkedtwo{poly.ring}{PolynomialRing}{iseuclidean} \subsubsection{isnoetherian}\linkedtwo{poly.ring}{PolynomialRing}{isnoetherian} \subsubsection{ispid}\linkedtwo{poly.ring}{PolynomialRing}{ispid} \subsubsection{isufd}\linkedtwo{poly.ring}{PolynomialRing}{isufd} Inherited from \linkingone{ring}{CommutativeRing}. \subsection{RationalFunctionField -- field of rational functions}\linkedone{poly.ring}{RationalFunctionField} \initialize \func{RationalFunctionField}{% \hiki{field}{Field},\ \hiki{number\_of\_variables}{integer}}{% \out{RationalFunctionField}}\\ \spacing \quad A class for fields of rational functions. It is a subclass of \linkingone{ring}{QuotientField}.\\ \spacing \quad \param{field} is the field of coefficients, which should be a \linkingone{ring}{Field} object. 
\param{number\_of\_variables} is the number of variables.\\ \spacing \begin{at} \item[zero]\linkedtwo{poly.ring}{RationalFunctionField}{zero}:\\ zero of the field. \item[one]\linkedtwo{poly.ring}{RationalFunctionField}{one}:\\ one of the field. \end{at} % \method \subsubsection{getInstance -- classmethod}\linkedtwo{poly.ring}{RationalFunctionField}{getInstance} \func{getInstance}{% \hiki{coefffield}{Field},\ % \hiki{number\_of\_variables}{integer}}{\out{RationalFunctionField}}\\ \spacing \quad return the instance of {\tt RationalFunctionField} with coefficient field \param{coefffield} and number of variables \param{number\_of\_variables}. \subsubsection{createElement}\linkedtwo{poly.ring}{RationalFunctionField}{createElement} \func{createElement}{*\hiki{seedarg}{list}, **\hiki{seedkwd}{dict}}{\out{RationalFunction}}\\ \subsubsection{getQuotientField}\linkedtwo{poly.ring}{RationalFunctionField}{getQuotientField} \func{getQuotientField}{}{\out{Field}} \subsubsection{issubring}\linkedtwo{poly.ring}{RationalFunctionField}{issubring} \func{issubring}{\hiki{other}{Ring}}{\out{bool}}\\ \subsubsection{issuperring}\linkedtwo{poly.ring}{RationalFunctionField}{issuperring} \func{issuperring}{\hiki{other}{Ring}}{\out{bool}}\\ \subsubsection{unnest}\linkedtwo{poly.ring}{RationalFunctionField}{unnest} \func{unnest}{}{\out{RationalFunctionField}}\\ \spacing \quad If self is a nested {\tt RationalFunctionField} i.e. its coefficient field is also a {\tt RationalFunctionField}, then the method returns one level unnested {\tt RationalFunctionField}.\\ \quad For example: \begin{ex} >>> RationalFunctionField(RationalFunctionField(Q, 1), 1).unnest() RationalFunctionField(Q, 2) \end{ex} \subsubsection{gcd}\linkedtwo{poly.ring}{RationalFunctionField}{gcd} \func{gcd}{\hiki{a}{RationalFunction},\ \hiki{b}{RationalFunction}}{% \out{RationalFunction}}\\ \spacing \quad Inherited from \linkingone{ring}{Field}. \subsubsection{isdomain}\linkedtwo{poly.ring}{RationalFunctionField}{isdomain} \subsubsection{iseuclidean}\linkedtwo{poly.ring}{RationalFunctionField}{iseuclidean} \subsubsection{isnoetherian}\linkedtwo{poly.ring}{RationalFunctionField}{isnoetherian} \subsubsection{ispid}\linkedtwo{poly.ring}{RationalFunctionField}{ispid} \subsubsection{isufd}\linkedtwo{poly.ring}{RationalFunctionField}{isufd} Inherited from \linkingone{ring}{CommutativeRing}. \subsection{PolynomialIdeal -- ideal of polynomial ring}\linkedone{poly.ring}{PolynomialIdeal} A subclass of \linkingone{ring}{Ideal} represents ideals of polynomial rings. \initialize \func{PolynomialIdeal}{% \hiki{generators}{list},\ % \hiki{polyring}{PolynomialRing}}{\out{PolynomialIdeal}}\\ \spacing \quad Create an object represents an ideal in a polynomial ring \param{polyring} generated by \param{generators}. \begin{op} \verb/in/ & membership test\\ \verb/==/ & same ideal?\\ \verb/!=/ & different ideal?\\ \verb/+/ & addition\\ \verb/*/ & multiplication\\ \end{op} \method \subsubsection{reduce}\linkedtwo{poly.ring}{PolynomialIdeal}{reduce} \func{reduce}{\hiki{element}{polynomial}}{\out{polynomial}}\\ \spacing \quad Modulo \param{element} by the ideal. \subsubsection{issubset}\linkedtwo{poly.ring}{PolynomialIdeal}{issubset} \func{issubset}{\hiki{other}{set}}{\out{bool}}\\ \subsubsection{issuperset}\linkedtwo{poly.ring}{PolynomialIdeal}{issuperset} \func{issuperset}{\hiki{other}{set}}{\out{bool}}\\ \C %---------- end document ---------- % \input{../footer}
{ "alphanum_fraction": 0.7280419017, "avg_line_length": 42.0677966102, "ext": "tex", "hexsha": "352fbab2f0eb1f33da3c0fcbeb245ba50758b4a0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_path": "manual/en/poly.ring.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_path": "manual/en/poly.ring.tex", "max_line_length": 111, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_path": "manual/en/poly.ring.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "num_tokens": 2171, "size": 7446 }
\documentclass[11pt]{article} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{cite} \usepackage{listings} \title{py-dimensional-analysis} \date{} \begin{document} \maketitle \section{py-dimensional-analysis} This Python package addresses physical dimensional analysis. In particular, \texttt{py-dimensional-analysis} calculates from a given system of (dimensional) variables those products that yield a desired target dimension. % \begin{equation} % y = A^aB^bC_c % \end{equation} The following example illustrates how the variables mass, force, time and pressure must relate to each other in order to produce the dimension length*time. \begin{lstlisting}[language=Python] import danalysis as da si = da.standard_systems.SI # predefined standard units s = da.Solver( { 'a' : si.M, # [a] is mass 'b' : si.L*si.M*si.T**-2, # [b] is force (alt. si.F) 'c' : si.T, # [c] is time 'd' : si.Pressure # [d] is pressure }, si.L*si.T # target dimension ) \end{lstlisting} Which prints \begin{lstlisting} Found 2 variable products of variables { a:Q(M), b:Q(L*M*T**-2), c:Q(T), d:Q(L**-1*M*T**-2) }, each of dimension L*T: 1: [a*c**-1*d**-1] = L*T 2: [b**0.5*c*d**-0.5] = L*T \end{lstlisting} This library is based on \cite{szirtes2007applied}, and also incorporates ideas and examples from \cite{santiago2019first, sonin2001dimensional}. \subsection{References} \bibliographystyle{alpha} \begingroup \renewcommand{\section}[2]{}% \bibliography{biblio.bib} \endgroup \end{document} %pandoc --citeproc -s README.tex -o README.md --to markdown_strict
{ "alphanum_fraction": 0.6568457539, "avg_line_length": 28.85, "ext": "tex", "hexsha": "06c063c0b2d009dd22d32179fd53235d98796155", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6ff979f2a159569c8a92e5977433b0899ab7ddfe", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "MartialMad/py-dimensional-analysis", "max_forks_repo_path": "docs/README.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ff979f2a159569c8a92e5977433b0899ab7ddfe", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "MartialMad/py-dimensional-analysis", "max_issues_repo_path": "docs/README.tex", "max_line_length": 220, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ff979f2a159569c8a92e5977433b0899ab7ddfe", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "MartialMad/py-dimensional-analysis", "max_stars_repo_path": "docs/README.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 511, "size": 1731 }
\section{Joint Calibration}
{ "alphanum_fraction": 0.8214285714, "avg_line_length": 14, "ext": "tex", "hexsha": "15b5b02abe0d79a21f4afee30fa4834094f7d783", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7398f1bb2d668ab395d667c4d13991113bf60aa4", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "lsst-pst/pstn-019", "max_forks_repo_path": "jointcal.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7398f1bb2d668ab395d667c4d13991113bf60aa4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "lsst-pst/pstn-019", "max_issues_repo_path": "jointcal.tex", "max_line_length": 27, "max_stars_count": null, "max_stars_repo_head_hexsha": "7398f1bb2d668ab395d667c4d13991113bf60aa4", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "lsst-pst/pstn-019", "max_stars_repo_path": "jointcal.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6, "size": 28 }
This section derives and explains the techniques that form the backbone of this research. For fundamental background on machine learning and probability theory we refer the reader to Bishop's book \cite{bishop_pattern_2006}. We begin by introducing the VAE and its differences from a standard autoencoder. Further, we show how convolutional layers can act on graphs and how these layers can be used in an encoder model, the GCN. Building on these modules, we present the main model for this thesis, the Relational Graph VAE (RGVAE). Finally, we present a popular graph matching algorithm, which is intended to match prediction and target graph \cite{paulheim_knowledge_2016}.

\subsection{Knowledge Graph}

'Knowledge graph' has become a popular term. Yet, the term is so broad that its definition varies depending on domain and use case \cite{ehrlinger2016towards}. In this thesis we focus on KGs in the context of relational machine learning.

Both datasets of this thesis are derived from large-scale KG data in RDF format. The Resource Description Framework (RDF), originally introduced as an infrastructure for structured metadata, is used as a general description framework for semantic-web applications \cite{miller1998introduction}. It involves a schema-based approach, meaning that every entity has a unique identifier and that possible relations are limited to a predefined set. The opposite, schema-free approach is used in OpenIE models, such as AllenNLP \cite{gardner_allennlp_2018}, for information extraction. These models generate triples from text based on NLP parsing techniques, which results in an open set of relations and entities. In this thesis a schema-based framework is used and triples are denoted as $(s,r,o)$. An exemplary triple from the FB15K-237 dataset is
\begin{center}
\texttt{/m/02mjmr, /people/person/place\_of\_birth, /m/02hrh0\_}.
\end{center}
A human-readable format of the entities is given by the \textit{id2text} translation of Wikidata \cite{vrandevcic2014wikidata}.
\begin{itemize}
    \item Subject $s$: \texttt{/m/02mjmr Barack Obama}
    \item Relation/Predicate $r$: \texttt{/people/person/born-in}
    \item Object $o$: \texttt{/m/02hrh0\_ Honolulu}
\end{itemize}
For all triples, $s$ and $o$ are part of a set of entities, while $r$ is part of a set of relations. This is sufficient to define a basic KG \cite{nickel_review_2016}.
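To make the triple format concrete, the following minimal sketch shows how such triples are commonly stored as index triples over closed entity and relation vocabularies. The identifiers are taken from the example above; the vocabularies, function and variable names are illustrative only and not part of the actual preprocessing pipeline of this thesis.
\begin{lstlisting}[language=Python]
# Minimal sketch: triples as indices into closed vocabularies.
# The vocabularies below are illustrative, not the full FB15K-237 sets.
entities = {'/m/02mjmr': 0,   # Barack Obama
            '/m/02hrh0_': 1}  # Honolulu
relations = {'/people/person/place_of_birth': 0}

def encode(s, r, o):
    """Map a (subject, relation, object) triple to an index triple."""
    return (entities[s], relations[r], entities[o])

triple = encode('/m/02mjmr', '/people/person/place_of_birth', '/m/02hrh0_')
print(triple)  # (0, 0, 1)
\end{lstlisting}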
Schema-based KGs can include type hierarchies and type constraints. Classes group entities of the same type together, based on common criteria, e.g. all names of people can be grouped in the class 'person'. Hierarchies define the inheritance structure of classes and subclasses. Picking up our previous example, 'spouse' and 'person' would both be a subclass of 'people' and inherit its properties. At the same time, the class of an entity can be the key to a relation with a type constraint, since some relations can only be used in conjunction with entities fulfilling the constraining type criteria.

These schema-based rules of a KG are defined in its ontology. Here, the properties of classes and subclasses, the constraints on relations, and much more are defined. Again, we have to differentiate between KGs with an open-world and a closed-world assumption. Under a closed-world assumption all constraints must be sufficiently satisfied before a triple is accepted as valid. This leads to a huge ontology and makes it difficult to expand the KG. On the other hand, open-world KGs such as Freebase accept every triple as valid, as long as it does not violate a constraint. This inevitably leads to inconsistencies within the KG, yet it is the preferred approach for large KGs. In the context of this thesis we refer to the ontology as the semantics of a KG, and we research whether our model can capture the implied closed-world semantics of an open-world KG \cite{nickel_review_2016}.

\subsection{Graph VAE}

Since the graph VAE is an adaptation of the original VAE, we start by introducing the original version, which is unrelated to graph data. Furthermore, we present each of the modules that compose the final model. This includes the different graph encoders as well as sparse graph loss functions.

We define the notation upfront for this chapter, which touches upon three different fields. For the VAE and MLP we consider data in vector format. A bold variable denotes the full vector, e.g. $\mathbf{x}$, and a variable with an index denotes the element at that index, e.g. $x_i$ for the vector element at index $i$. Graphs are represented in matrices, which are denoted in capital letters.
$A$ typically denotes the adjacency matrix; beyond that, the notation differs between methods. While $X$ generally describes a node feature matrix, we have to be more specific for Simonovsky's GraphVAE, where $E$ is the edge attribute and $F$ the node attribute matrix. We speak of attributes rather than features because every node and edge carries exactly one one-hot encoded attribute, in contrast to features, which can be numerous per node.

\subsubsection{VAE}
\label{ssection:VAE}

The VAE as first presented by \cite{kingma_auto-encoding_2014} is an unsupervised generative model in the form of an autoencoder, consisting of an encoder and a decoder. Its architecture differs from a common autoencoder by having a stochastic module between encoder and decoder. The encoder acts as a recognition model $q_{\phi}(\mathbf{z} \mid \mathbf{x})$, with $\mathbf{x}$ being the variable we want to infer on and $\mathbf{z}$ the latent representation given an observed value of $\mathbf{x}$; its parameters are denoted $\phi$. Similarly, we denote the decoder as $p_{\theta}(\mathbf{x} \mid \mathbf{z})$, which, given a latent representation $\mathbf{z}$, produces a probability distribution over the possible values corresponding to the input $\mathbf{x}$; its parameters are denoted $\theta$. This is the base architecture of all our models in this thesis.

The main contribution of the VAE is to be a stochastic and fully backpropagatable generative model. This is possible due to the reparametrization trick. Sampling inside the model creates stochasticity which cannot be backpropagated and would make training of the encoder impossible. By placing the stochastic module outside the model, the model becomes fully backpropagatable: the predicted encoding is used as mean and variance of a Gaussian distribution, and the noise $\epsilon$ is sampled from a standard Gaussian as an external parameter, which does not need to be backpropagated or updated.

\begin{figure}[h]
    \centering
    \includegraphics[width=0.75\textwidth]{data/images/repaTrick.png}
    \caption{Representation of the VAE as a Bayesian network, with solid lines denoting the generator $p_{{\theta}}(z)p_{{\theta}}(\mathbf{x} \mid z)$ and the dashed lines the posterior approximation $q_{\phi}(\mathbf{z} \mid \mathbf{x})$ \cite{kingma_auto-encoding_2014}.}
    \label{fig:varinference}
\end{figure}

Figure \ref{fig:varinference} shows that the true posterior $p_{\theta}(\mathbf{z} \mid \mathbf{x})$ is intractable. To approximate it, we assume a standard Gaussian prior $p(\mathbf{z})$ and an approximate posterior with diagonal covariance,
\begin{equation}
    \log q_{\phi}\left(\mathbf{z} \mid \mathbf{x}\right)=\log \mathcal{N}\left(\mathbf{z} ; \mu(\mathbf{x}), \sigma(\mathbf{x})^{2} \mathbf{I}\right) .
\end{equation}
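The following minimal sketch illustrates how the approximate posterior above is typically parameterized and sampled with the reparametrization trick. It is only an illustration under the stated Gaussian assumptions, not the implementation used in this thesis, and the tensor shapes are chosen arbitrarily.
\begin{lstlisting}[language=Python]
import torch

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2 I) via z = mu + sigma * eps, eps ~ N(0, I)."""
    std = torch.exp(0.5 * logvar)   # sigma = exp(log(sigma^2) / 2)
    eps = torch.randn_like(std)     # external noise, not backpropagated into
    return mu + std * eps

# Illustrative encoder output for a batch of 4 datapoints, latent dim 2
mu, logvar = torch.zeros(4, 2), torch.zeros(4, 2)
z = reparameterize(mu, logvar)      # differentiable w.r.t. mu and logvar
\end{lstlisting}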
Now variational inference can be performed, which allows both the generative parameters $\theta$ and the variational parameters $\phi$ to be learned jointly. Using a single-sample Monte Carlo estimate of the expectation under $q_{\phi}(\mathbf{z} \mid x_i)$ we get the evidence lower bound (ELBO)
\begin{equation}
    \mathcal{L}\left(\theta, \phi ; x_i\right)=-D_{KL}\left(q_{\phi}\left(\mathbf{z} \mid x_i\right) \| p_{\theta}(\mathbf{z})\right)+\mathbb{E}_{q_{\phi}\left(\mathbf{z} \mid x_i\right)}\left[\log p_{\theta}\left(x_i \mid \mathbf{z}\right)\right] .
    \label{eq3:elbo}
\end{equation}
We call the first term the regularization term, as it encourages the approximate posterior to be close to the standard Gaussian prior. This term can be integrated analytically and does not require estimation. The KL divergence is a similarity measure between two distributions, which becomes zero for two equal distributions. Thus, the model gets penalized for learning an encoder distribution $q_{\phi}(\mathbf{z} \mid \mathbf{x})$ which is not standard Gaussian.

Higgins \textit{et al.} present a constrained variational framework, the $\beta$-VAE \cite{higgins_beta-vae_2016}. This framework proposes an additional hyperparameter $\beta$, which acts as a factor on the regularization term. The new ELBO including $\beta$ is
\begin{equation}
    \mathcal{L}\left(\theta, \phi ; x_i\right)=-\beta \, D_{KL}\left(q_{\phi}\left(\mathbf{z} \mid x_i\right) \| p_{\theta}(\mathbf{z})\right)+\mathbb{E}_{q_{\phi}\left(\mathbf{z} \mid x_i\right)}\left[\log p_{\theta}\left(x_i \mid \mathbf{z}\right)\right] .
    \label{eq3:elboBeta}
\end{equation}
For $\beta = 1$ the original VAE is restored, and for $\beta > 1$ the influence of the regularization term on the ELBO is emphasized. Thus, the model prioritizes learning an approximate posterior $q_{\phi}(\mathbf{z} \mid \mathbf{x})$ that is even closer to the standard Gaussian prior. In the literature this results in a disentangled latent space, which qualitatively outperforms the original VAE in representation learning on image data.

The second term represents the reconstruction error, which has to be estimated by sampling. This means using the decoder to generate samples from the latent representation. In the context of the VAE, these probabilistic samples are the model's output; thus, the reconstruction error measures the similarity between prediction and target \cite{kingma_auto-encoding_2014}.

Once the parameters $\phi$ and $\theta$ of the VAE are learned, the decoder can be used on its own to generate new samples from $p_{\theta}(\mathbf{x} \mid \mathbf{z})$. Conventionally, a latent input signal is sampled from a standard Gaussian of dimension $d_z$, the latent dimension. In the case of discrete binary data, each element of the generated sample is used as probability parameter $p$ of a Bernoulli distribution $\mathcal{B}(1,p)$, from which the final output is sampled. In the case of categorical data, e.g. one-hot encodings, the final output is either sampled from a categorical distribution with the prediction as class probabilities, or simply obtained with the Argmax operator \cite{kingma_introduction_2019}.
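As a concrete reference for equation \ref{eq3:elboBeta}, the sketch below computes the negative $\beta$-ELBO as a training loss, with the KL term in its analytical form for a diagonal Gaussian posterior and a binary-data reconstruction term. It is a simplified illustration under these assumptions, not the exact loss implementation of the RGVAE; tensor shapes and values are placeholders.
\begin{lstlisting}[language=Python]
import torch
import torch.nn.functional as F

def beta_elbo_loss(x, x_logits, mu, logvar, beta=1.0):
    """Negative beta-ELBO for binary data and a diagonal Gaussian posterior."""
    # Reconstruction term: -E_q[log p(x|z)], estimated with a single sample
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Illustrative tensors: batch of 4 flattened binary datapoints, latent dim 2
x = torch.randint(0, 2, (4, 10)).float()
x_logits, mu, logvar = torch.zeros(4, 10), torch.zeros(4, 2), torch.zeros(4, 2)
loss = beta_elbo_loss(x, x_logits, mu, logvar, beta=10.0)
\end{lstlisting}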
\subsubsection{MLP}
\label{ssec:mlp}

The Multi-Layer Perceptron (MLP) was one of the first machine-learning models to introduce a hidden layer between the input and the output. Its properties as a universal approximator have been studied widely since 1989. While we presume that a reader interested in the topic of this thesis does not require a definition of the MLP, it is included for completeness, since the MLP and the GCN defined below act as encoder and decoder modules of our final model and contribute different hyperparameters.

The MLP takes a one-dimensional input vector of the form $x_1,\ldots,x_D$, which is multiplied by the weight matrix $\mathbf{W}^{(1)}$ and then activated using a non-linear function $h(\cdot)$, which results in the hidden representation of $\mathbf{x}$. Due to its simple derivative, the rectified linear unit (ReLU) is mostly used as activation function. The hidden units are multiplied with the second weight matrix, denoted $\mathbf{W}^{(2)}$, and finally transformed by a sigmoid function $\sigma(\cdot)$, which produces the output. Grouping weight and bias parameters together, we get the following equation for the MLP
\begin{equation}
    y_{k}(\mathbf{x}, \mathbf{w})=\sigma\left(\sum_{j=0}^{M} w_{k j}^{(2)} h\left(\sum_{i=0}^{D} w_{j i}^{(1)} x_{i}\right)\right)
\end{equation}
for $k=1, \ldots, K$, with $M$ being the total number of hidden units and $K$ the number of outputs.

Since the sigmoid function returns a probability for each output unit, the MLP can act as a classifier. For multi-class classification, good results are obtained by activating the output through a Softmax function instead of the sigmoid. Images or higher-dimensional tensors can be processed by flattening them to a one-dimensional tensor. This makes the MLP a flexible and easy-to-implement model \cite{bishop_pattern_2006}.
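The two-layer MLP above can be written compactly in code. The following sketch mirrors the equation with ReLU as hidden activation and the Softmax output variant just mentioned; the layer sizes and random weights are illustrative only, and biases are folded into the weights as in the equation.
\begin{lstlisting}[language=Python]
import torch

D, M, K = 8, 16, 4                    # input, hidden and output dimensions
W1, W2 = torch.randn(D, M), torch.randn(M, K)   # illustrative random weights

def mlp(x):
    """Two-layer MLP: ReLU hidden layer, Softmax output."""
    h = torch.relu(x @ W1)            # hidden representation h(x W1)
    return torch.softmax(h @ W2, dim=-1)

y = mlp(torch.randn(1, D))            # probability distribution over K classes
\end{lstlisting}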
The graph forward pass through the convolutional layer $l$ is then defined as
% equation
\begin{equation} H^{(l+1)}=\sigma\left(\sum_{i \in n} \frac{\tilde{A}_{:,i}}{\|\tilde{A}_{:,i}\|} H^{(l)} W^{(l)}\right). \end{equation} \footnote{For symmetric normalization we use $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ with $\tilde{D}_{i i}=\sum_{j} \tilde{A}_{i j}$} The adjacency is row-wise normalized for each node. $W^{(l)}$ is the layer-specific weight matrix and contains the learnable parameters. $H^{(l+1)}$ is then the hidden representation of the input graph \cite{gangemi_modeling_2018}. The GCN was first introduced as a node classifier, predicting a probability distribution over all classes for each node in the input graph. Thus, the output is the prediction or latent representation matrix $Z \in \mathbb{R}^{n \times d_z}$. Let $\hat{A}$ be the normalized adjacency matrix, then the full equation for a two layer GCN is \begin{equation} Z=f(X, A)=\operatorname{softmax}\left(\hat{A} \operatorname{ReLU}\left(\hat{A} X W^{(0)}\right) W^{(1)}\right). \end{equation}
% \subsubsection{RGCN}
% GCN which takes further input of edge attribute matrix.
% Either present\\
% Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs\\
% or nixx
%
% Realational Graph Convolution Net (RGCN) was presented in \cite{kipf_semi-supervised_2017} for edge prediction. This model takes into account features of nodes. Both the adjacency and the feature matrix are matrix-multiplied with the weight matrix and then with them-selves. The resulting vector is a classification of the nodes.
\subsubsection{Graph VAE} \label{ssec:GVAE}
% Encoder options: MLP RGCN
% Decoder MLP
% One shot: creating adjacency and feature matrix at once.
% the model we are use for challenging KG datasets
We use the presented modules to compose the RGVAE. While the approaches from the literature for graph generative models differ in terms of the model and graph representation, we focus on the GraphVAE architecture presented by Simonovsky \cite{simonovsky_graphvae_2018}, a sparse graph model with graph convolutions.
% TODO rewrite and describe label condition as addition.
Simonovsky's GraphVAE follows the characteristic encoder-decoder architecture. The encoder $q_{\phi}(\mathbf{z} \mid {G})$ takes a graph ${G}$ as input, on which graph convolutions are applied. After the convolutions the hidden representation is flattened and concatenated with the node label vector $y$. An MLP encodes the mean $\mu(\mathbf{z})$ and log-variance $\log \sigma^2(\mathbf{z})$ of the latent space distribution. Using the reparametrization trick the latent representation is sampled. For the decoder $p_{\theta}({G} \mid \mathbf{z})$ the latent representation is again concatenated with the node labels. The decoder architecture for this model is an MLP with the same dimensions as the encoder MLP but in inverted order, which outputs a flat prediction of $\tilde{G}$, which is split and reshaped into the sparse matrix representation. Simonovsky's GraphVAE \cite{simonovsky_graphvae_2018} is optimized for molecular data. Our aim is to set a proof of concept with the RGVAE for multi-relational KGs, thus, the structure of the GraphVAE is adopted but reduced to a minimum viable product by dropping the conditioning on the node labels and instead using the node attributes as pointers towards the corresponding entity in $\mathcal{E}$. By using node attributes as unique pointers, we exclude any class or type information about the entity.
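Since graph convolutions remain available as an optional encoder (see below), the row-normalized GCN layer from the previous subsection can be made concrete with a minimal sketch. This is an illustrative NumPy sketch only, not the implementation used for the RGVAE; weight initialization and dimensions are chosen arbitrarily.
\begin{verbatim}
import numpy as np

def gcn_layer(A, H, W):
    """Single graph convolution with self-loops and row-normalized adjacency."""
    A_tilde = A + np.eye(A.shape[0])                       # add self-loops
    A_hat = A_tilde / A_tilde.sum(axis=1, keepdims=True)   # row-wise normalization
    return np.maximum(A_hat @ H @ W, 0.0)                  # ReLU activation

# toy example: 3 nodes, 2 input features, 4 hidden units
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.randn(3, 2)
H1 = gcn_layer(A, X, np.random.randn(2, 4))
\end{verbatim}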
Simplifying further we drop the convolutional layer and directly flatten $G$ as input for the MLP encoder. To isolate the impact of graph convolutions as encoder for the RGVAE, we make the choice between MLP or GCN as encoder a hyperparameter. \begin{figure}[H] \centering \includegraphics[scale=0.6,page=1]{data/images/rgvae_diagFull2.pdf} \caption{Architecture of the RGVAE.} \label{fig3:GVAE} \end{figure}
% The encoder can either be a MLP, a GCNN or an RGCN. The same holds for the decoder with the addition that model architecture needs to be inverted.
% Reverse MLP
% Loss function
% What information can you capture when sparse compared to dense?
% Graphs can be generated recursively or in an one-shot approach. This paper uses the second approach and generates the full graph in one go.
% This model be the starting point for our research.
% \subsubsection{One Shot vs. Recursive}
% One shot: MNIST vs recursive on graphs: Belli
In figure \ref{fig3:GVAE} the concept of the RGVAE is displayed. Each datapoint $G(A,E,F)$ is a subgraph from the KG dataset. Note that the model propagates batches instead of single datapoints. The RGVAE can generate graphs $\tilde{G}(\tilde{A},\tilde{E},\tilde{F})$ by sampling from the decoder distribution $p_{{\theta}}\left(G \mid \mathbf{z}\right)$. Since it predicts on closed sets of relations and entities, the generated subgraphs are either unseen and complement the KG or are already present in the dataset. The subgraphs are sparse with $n$ nodes; a single triple corresponds to $n=2$ and a subgraph representation to $2<n\leq40$, where $n=40$ was the maximum explored for the GraphVAE \cite{simonovsky_graphvae_2018}. \subsection{Graph Matching} \label{ssec:graphmatch}
% Intro to graph matching on sparse graphs.
In this subsection we explain the term permutation invariance and its impact on the RGVAE's loss function. Further we present a $k$-factor graph matching algorithm for general graphs with edge and node attributes and the Hungarian algorithm as a solution for the NP-hard problem of linear sum assignment. Finally we derive the full loss of the RGVAE when applying the calculated permutation matrix to the model's prediction. \subsubsection{Permutation Invariance}
% Permutation Invariance
% The position or rotation of a graph can vary.
% Use graph matching to detect similarities between graphs
A visual example of permutation invariance is the image generation of numbers. If the loss function of the model were not permutation invariant, the generated image could show a perfect replica of the input number, yet, translated by one pixel, the model would still be penalized. Geometrical permutations can be translation, scale or rotation around any axis. In the context of sparse graphs the most common permutation, and the one relevant for this thesis, is the position of a link in the adjacency matrix.
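A small numerical sketch (illustrative only, with an arbitrary toy graph) shows how such a permutation moves a single link within the adjacency matrix:
\begin{verbatim}
import numpy as np

A = np.array([[0, 1, 0],      # toy 3-node graph with a single link 0 -> 1
              [0, 0, 0],
              [0, 0, 0]])

X = np.array([[0, 0, 1],      # permutation matrix swapping nodes 0 and 2
              [0, 1, 0],
              [1, 0, 0]])

A_perm = X @ A @ X.T          # the same link now appears as 2 -> 1
\end{verbatim}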
By altering its position through matrix multiplication of the adjacency with the permutation matrix, the original link can change direction or turn into a self-loop. When matching graphs with more than two nodes ($n>2$), a permutation can change the nodes which the link connects. Further it is possible to match graphs only on parts of the graph, $k$-factor, instead of the full graph. In the case of different node counts between target and prediction graph, the target graph can be fully (1-factor) matched with the larger prediction graph. In the context of this thesis, a model or a function is called permutation invariant if it can match any permutation of the original. This allows the model a wider spectrum of predictions instead of penalizing it on the element-wise correct prediction of the adjacency.
% OR: An example is in object detection in images. An object can have geometrical permutations such as translation, scale or rotation, none the less the model should be able to detect and classify it. In that case, the model is not limited by permutations and is there fore permutation invariant.
% In our case the object is a graph and the nodes can take different positions in the adjacency matrix.
To detect similarities between graphs we apply graph matching. \subsubsection{Max-Pool Graph matching algorithm}
% These are three of the state of the art graph matching algorithms.
% \begin{itemize}
% \item Wasserstein
% \item Maxpooling
% \item one more
% \end{itemize}
Graph matching of general (not bipartite) graphs is a nontrivial task. Inspired by Simonovsky's approach \cite{simonovsky_graphvae_2018}, the RGVAE uses the max-pool algorithm, which can be effectively integrated into its loss function. The algorithm was presented in \cite{cho_finding_2014} in the context of computer vision, where it proved successful in matching feature point graphs in an image. It first calculates the affinity between two graphs, considering node and edge attributes, then applies edge-wise max-pooling to reduce the affinity matrix to a similarity matrix. Cho \textit{et al.} praise the max-pool graph matching algorithm as resilient to deformations and highly tolerant to outliers compared to the mean or sum alternatives. The output is a normalized similarity matrix in continuous space of the same shape as the target adjacency matrix, indicating the similarity between each node of target and prediction graph. The similarity matrix is subtracted from a unit matrix to obtain the cost matrix, necessary for the final step in the graph matching pipeline. Notably, this algorithm also allows $k$-factor matching with $1 \leq k < n$. Thus, subgraphs with different numbers of nodes can be matched. The final permutation matrix is determined by linear sum assignment of the cost matrix, an NP-hard problem \cite{diestel2016graph}.
% Max-pooling algorithm comes here !!!
We use the previously presented sparse representation for subgraphs, sampled from a KG. The discrete target graph is $G=(A, E, F)$ and the continuous prediction graph $\widetilde{G}=(\widetilde{A}, \widetilde{E}, \widetilde{F})$. The matrices $A, E, F$ store the discrete data of the target graph: the adjacency matrix $A \in\{0,1\}^{n \times n}$ with $n$ being the number of nodes in the target graph, the edge attribute tensor $E\in\{0,1\}^{n \times n \times d_e}$ and the node attribute matrix $F\in\{0,1\}^{n \times d_n}$, with $d_e$ and $d_n$ being the sizes of the relation and entity dictionaries.
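As a concrete illustration of this representation, the following sketch builds $A$, $E$ and $F$ for a single triple, i.e. $n=2$; the dictionary sizes and indices are hypothetical and only serve the example.
\begin{verbatim}
import numpy as np

n, d_e, d_n = 2, 12, 100         # hypothetical relation and entity dictionary sizes
s_idx, r_idx, o_idx = 7, 3, 42   # hypothetical dictionary indices of the triple (s, r, o)

A = np.zeros((n, n))
A[0, 1] = 1                      # one directed edge from node 0 (subject) to node 1 (object)
E = np.zeros((n, n, d_e))
E[0, 1, r_idx] = 1               # one-hot relation attribute on that edge
F = np.zeros((n, d_n))
F[0, s_idx] = 1                  # one-hot entity pointer of the subject
F[1, o_idx] = 1                  # one-hot entity pointer of the object
\end{verbatim}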
For the predicted graph with $k$ nodes, the adjacency matrix is $\widetilde{A} \in[0,1]^{k \times k}$, the edge attribute matrix is $\widetilde{E} \in \mathbb{R}^{k \times k \times d_{e}}$ and the node attribute matrix is $\widetilde{F} \in \mathbb{R}^{k \times d_{n}}$. Given these graphs the algorithm aims to find the affinity matrix $S:(i, j) \times(a, b) \rightarrow \mathbb{R}^{+}$ where $i, j \in G$ and $a, b \in \widetilde{G}$. The affinity matrix returns a score for all node and edge pairs between the two graphs and is calculated as \begin{equation} \begin{array}{l} S((i, j),(a, b)) = \left(E_{i, j, \cdot}^{T} \widetilde{E}_{a, b, \cdot}\right) A_{i, j} \widetilde{A}_{a, b} \widetilde{A}_{a, a} \widetilde{A}_{b, b}[i \neq j \wedge a \neq b] + \left(F_{i, \cdot}^{T} \widetilde{F}_{a, \cdot}\right) \widetilde{A}_{a, a}[i=j \wedge a=b]. \end{array} \label{eq3:s} \end{equation} Here the square brackets denote Iverson brackets \cite{simonovsky_graphvae_2018}. While affinity scores resemblance, which suggests a common origin, similarity directly refers to the closeness between two nodes. The next step is to find the similarity matrix $X^* \in[0,1]^{k \times n}$. Therefore we iterate a first-order optimization framework and get the update rule \begin{equation} X^*_{t+1} \leftarrow \frac{1}{\left\|\mathbf{S} X^*_{t}\right\|_{2}} \mathbf{S} X^*_{t}. \end{equation} To calculate $\mathbf{S} X^*$ we find the best candidate $X^*_{i,a}$ from the possible pairs of $i \in \mathbb{N}^{[0,n]}$ and $a \in \mathbb{N}^{[0,k]}$ in the affinity matrix $S$. Heuristically, taking the argmax over all neighboring node pair affinities yields the best result. Other options are sum-pooling or average-pooling, which do not discard potentially irrelevant information, yet have been shown to perform worse. Thus, using the max-pooling approach, we can pairwise calculate \begin{equation} \mathbf{S}x_{i a}=X^*_{i a} \mathbf{S}_{i a ; i a}+\sum_{j = 0}^{n} \max_{0 \leq b < k} X^*_{j b} \mathbf{S}_{i a ; j b}. \end{equation} Depending on the matrix size, the number of iterations is adjusted. The resulting similarity matrix $X^*$ yields a normalized similarity score for every node pair. The next step is to convert it to a discrete permutation matrix. \subsubsection{Hungarian algorithm} \label{ssec3:hung}
% Find shortest path
Starting with the normalized similarity matrix $X^*$, we reformulate the aim of finding the discrete permutation matrix as a linear assignment problem. Simonovsky \textit{et al.} \cite{simonovsky_graphvae_2018} use for this purpose an optimization algorithm, the so-called Hungarian algorithm. Its original objective is to optimally assign $n$ resources to $n$ tasks, thus $k-n$ rows of the permutation matrix are left empty. The cost of assigning task $i \in \mathbb{N}^{[0,n]}$ to $a \in \mathbb{N}^{[0,k]}$ is stored in entry $C_{ia}$ of the cost matrix $C \in \mathbb{R}^{n \times k}$. By assuming tasks and resources are nodes and taking $C=1-X^*$ we get the continuous cost matrix $C$. This algorithm has a complexity of $O\left(n^{4}\right)$, thus it is not applicable to complete KGs but only to subgraphs with a limited number of nodes per graph \cite{date_gpu-accelerated_2016}. The core of the Hungarian algorithm consists of four main steps: initial reduction, optimality check, augmented search and update.
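In practice this assignment step does not have to be implemented from scratch. The following sketch is illustrative only and assumes SciPy is available (the reference implementation of this thesis may differ); it converts the similarity matrix $X^*$ into a discrete permutation matrix via linear sum assignment on $C = 1 - X^*$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def similarity_to_permutation(X_star):
    """Turn a continuous (n x k) similarity matrix into a discrete permutation matrix."""
    cost = 1.0 - X_star                        # cost matrix C = 1 - X*
    rows, cols = linear_sum_assignment(cost)   # optimal linear sum assignment
    perm = np.zeros_like(X_star)
    perm[rows, cols] = 1.0
    return perm
\end{verbatim}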
The presented algorithm is a popular variant of the original algorithm and improves the complexity of the update step from $O\left(n^{2}\right)$ to $O\left(n\right)$, thus reducing the total complexity to $O\left(n^{3}\right)$. Since throughout this thesis $n=k$, we can omit the reduction step and continue with a quadratic cost matrix $C$. The following notation is used solely to derive the Hungarian algorithm and does not apply to our graph data. The algorithm takes as input a bipartite graph $G=(V, U, E)$ and the cost matrix $C \in \mathbb{R}^{n \times n}$. $G$ is bipartite because it considers all possible edges in the cost matrix in one direction and no self-loops. $V \in \mathbb{R}^n$ and $U \in \mathbb{R}^n$ are the resulting sets of nodes and $E \in \mathbb{R}^{n}$ the set of edges between the nodes. The algorithm's output is a discrete matching matrix $M$. To avoid two irrelevant pages of pseudocode, the steps of the algorithm are presented in the following short summary \cite{mills-tettey_dynamic_nodate}. \begin{enumerate} \item Initialization: \\ \begin{enumerate} \item Initialize the empty matching matrix $M_{0}=\emptyset$. \item Assign $\alpha_i$ and $\beta_j$ as follows: \begin{align*} \forall v_{i} &\in V, \quad &&\alpha_{i}=0 \\ \forall u_{j} &\in U, \quad &&\beta_{j}=\min _{i}\left(c_{i j}\right) \end{align*} \end{enumerate} \item Loop $n$ times over the different stages: \begin{enumerate} \item Each unmatched node in $V$ is a root node for a Hungarian tree whose completion results in an augmentation path. \item Expand the Hungarian trees in the equality subgraph. Store the indices $i$ of $v_i$ encountered in the Hungarian tree in the set $I^*$ and similarly the indices $j$ of $u_j$ in the set $J^*$. If an augmentation path is found, skip the next step. \item Update $\alpha$ and $\beta$ to add new edges to the equality subgraph and redo the previous step. \begin{align*} \theta&=\frac{1}{2} \min _{i \in I^{*}, j \notin J^{*}}\left(c_{i j}-\alpha_{i}-\beta_{j}\right) \\ \alpha_{i} &\leftarrow\left\{\begin{array}{ll} \alpha_{i}+\theta & i \in I^{*} \\ \alpha_{i}-\theta & i \notin I^{*} \end{array}\right. \\ \beta_{j} &\leftarrow\left\{\begin{array}{ll} \beta_{j}-\theta & j \in J^{*} \\ \beta_{j}+\theta & j \notin J^{*} \end{array}\right. \end{align*} \item Augment $M_{k-1}$ by flipping the unmatched with the matched edges on the selected augmentation path. Thus $M_k$ is given by $\left(M_{k-1}-P\right) \cup\left(P-M_{k-1}\right)$, where $P$ is the set of edges of the current augmentation path. \end{enumerate} \item Output $M_n$ of the last and $n^{th}$ stage. \end{enumerate} \subsubsection{Graph Matching VAE Loss} \label{ssec3:GVAEloss} Coming back to our generative model, we now explain how the loss function needs to be adjusted to work with graphs and graph matching, which results in a permutation invariant graph VAE. The normal VAE maximizes the evidence lower bound or, in a practical implementation, minimizes the upper bound on the negative log-likelihood. Using the notation of section \ref{ssection:VAE} the graph VAE loss is \begin{equation} \begin{array}{l} \mathcal{L}(\phi, \theta ; G)=\mathbb{E}_{q_{\phi}(\mathbf{z} \mid G)}\left[-\log p_{\theta}(G \mid \mathbf{z})\right]+\beta \left(\operatorname{KL}\left[q_{\phi}(\mathbf{z} \mid G) \| p(\mathbf{z})\right]\right). \end{array} \end{equation} The loss function $\mathcal{L}$ is a combination of a reconstruction term and a regularization term.
The regularization term is the KL divergence between a standard normal distribution and the latent space distribution of $\mathbf{z}$. The change to graph data does not influence this term. The reconstruction term is the cross entropy between prediction and target, binary for the adjacency matrix and categorical for the edge and node attribute matrices.
% A sigmoid with logits, E,F, softmax include translation X
% r. Sigmoid activation function is used to compute
% Ae, whereas edge- and node-wise softmax is applied to obtain
% Ee and Fe, respectively.
The predicted output of the decoder is split into three parts and, while $\tilde{A}$ is activated through a sigmoid, $\tilde{E}$ and $\tilde{F}$ are activated via edge- and node-wise Softmax. For the case of $n<k$, the target adjacency is permuted $A^{\prime}=X A X^{T}$, so that the model can backpropagate over the full prediction. Since $E$ and $F$ are categorical, permuting prediction or target yields the same cross-entropy. Following Simonovsky's approach \cite{simonovsky_graphvae_2018} we permute the prediction, $\widetilde{F}^{\prime}=X^{T} \widetilde{F}$ and $\widetilde{E}_{\cdot, \cdot, l}^{\prime}=X^{T} \widetilde{E}_{\cdot, \cdot, l} X$, where $l$ indexes the one-hot encoded edge attributes, which are permuted channel-wise. These permuted subgraphs are then used to calculate the maximum log-likelihood estimate \cite{simonovsky_graphvae_2018}: \begin{equation} \begin{split} \log p\left(A^{\prime} \mid \mathbf{z}\right) = &1 / k \sum_{a} A_{a, a}^{\prime} \log \widetilde{A}_{a, a}+\left(1-A_{a, a}^{\prime}\right) \log \left(1-\widetilde{A}_{a, a}\right)+ \\ & +1 / k(k-1) \sum_{a \neq b} A_{a, b}^{\prime} \log \widetilde{A}_{a, b}+\left(1-A_{a, b}^{\prime}\right) \log \left(1-\widetilde{A}_{a, b}\right) \end{split} \label{eq3:GAVElossA} \end{equation} \begin{align} \log p(F \mid \mathbf{z}) &=1 / n \sum_{i} \log F_{i, \cdot}^{T} \widetilde{F}_{i, \cdot}^{\prime} \\ \log p(E \mid \mathbf{z}) &=1 /\left(\|A\|_{1}-n\right) \sum_{i \neq j} \log E_{i, j, \cdot}^{T} \widetilde{E}_{i, j, \cdot}^{\prime} \label{eq3:GAVElossEF} \end{align} The normalizing constant $1 / k(k-1)$ takes into account the no self-loops restriction, thus an edge-less diagonal. In the case of self-loops this constant is $1 / k^2$. Similarly, the constant $1 /\left(\|A\|_{1}-n\right)$ for $\log p(E \mid \mathbf{z})$ accounts for the edge-less diagonal through the term $-n$, which in the case of self-loops is discarded, resulting in $1 /\|A\|_{1}$. \subsection{Ranger Optimizer} \label{sec3:ranger} Finalizing this chapter, we explain the novel deep learning optimizer Ranger. Ranger combines Rectified Adam (RAdam), Lookahead and, optionally, Gradient Centralization. Let us briefly look into the different components. RAdam is based on the popular Adam optimizer. It improves learning by dynamically rectifying Adam's adaptive learning rate. This is done by reducing its variance, which is especially large at the beginning of training, thus leading to a more stable and accelerated start \cite{liu_variance_2020}. The Lookahead optimizer was inspired by recent advances in the understanding of loss surfaces of deep neural networks and proposes an approach where a second optimizer maintains a set of slow weights, which are periodically interpolated towards the fast weights of the main optimizer; the number of `look ahead' steps is a hyperparameter. This improves learning and reduces the variance of the main optimizer \cite{zhang_lookahead_2019}.
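A minimal sketch of this slow-weight update (illustrative only, with a generic inner optimizer step and hypothetical hyperparameters $k$ and $\alpha$) is:
\begin{verbatim}
import numpy as np

def lookahead(w0, inner_step, k=5, alpha=0.5, n_steps=100):
    """Every k fast steps, interpolate the slow weights towards the fast weights."""
    slow = np.array(w0, dtype=float)
    fast = slow.copy()
    for t in range(1, n_steps + 1):
        fast = inner_step(fast)                   # e.g. one (R)Adam update
        if t % k == 0:
            slow = slow + alpha * (fast - slow)   # look-ahead interpolation
            fast = slow.copy()
    return slow
\end{verbatim}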
The last and most novel optimization technique, Gradient Centralization, acts directly on the gradient by normalizing it to a zero mean. Especially on convolutional neural networks, this helps regularize the gradient and boosts learning. This method can be added to existing optimizers and can be seen as a constraint on the loss function \cite{yong_gradient_2020}. In conclusion, Ranger is a state-of-the-art deep learning optimizer with accelerating and stabilizing properties, incorporating three different optimization methods which synergize with each other. Considering that generative models are especially unstable during training, we see Ranger as a good fit for this research.
% LookAhead was inspired by recent advances in the understanding of loss surfaces of deep neural networks, and provides a breakthrough in robust and stable exploration during the entirety of training.
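As a closing illustration, the centralization step itself reduces to subtracting the mean from each gradient tensor; the sketch below is illustrative only and not the reference implementation of \cite{yong_gradient_2020}.
\begin{verbatim}
import numpy as np

def centralize_gradient(grad):
    """Zero-center a gradient tensor over all axes except the output dimension."""
    if grad.ndim > 1:
        axes = tuple(range(1, grad.ndim))
        grad = grad - grad.mean(axis=axes, keepdims=True)
    return grad
\end{verbatim}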
{ "alphanum_fraction": 0.7613676777, "avg_line_length": 107.5573770492, "ext": "tex", "hexsha": "f113ffe7bef6367432d698dcb71cf85b09859bfc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8c600c1a617406ff8e1ffb118b5dd6b1dbbe3097", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "3lLobo/Thesis", "max_forks_repo_path": "sections/section3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8c600c1a617406ff8e1ffb118b5dd6b1dbbe3097", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "3lLobo/Thesis", "max_issues_repo_path": "sections/section3.tex", "max_line_length": 1329, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8c600c1a617406ff8e1ffb118b5dd6b1dbbe3097", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "3lLobo/Thesis", "max_stars_repo_path": "sections/section3.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-10T16:15:04.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-10T16:15:04.000Z", "num_tokens": 9907, "size": 39366 }
\chapter{State Estimation} \label{state_estimation}
% \textcolor{red}{TODO: Rewrite sections that don't flow and give credit where appropriate.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Ranging Setup and Calibration} \label{ranging} \label{txt:ranging} This section introduces the hardware and software setup for a set of wireless ranging modules that enable position tracking of the robot, both as internal distance measurements (end cap to end cap) and in an external (world) reference frame. All MTRs of \SB{} are equipped with a DWM1000 ranging module from DecaWave Ltd. By employing ultra wideband technology, the low-cost DWM1000 modules provide wireless data transfer and highly accurate timestamps of transmitted and received packets. This allows the distance between two DWM1000 modules to be estimated by computing the time-of-flight of exchanged messages without the need for synchronized clocks.
%We opted for this technology because it allows proprioceptive state estimation (distances between end caps), which
%cannot be easily tracked directly via motor encoders.~\cite{ledergerber2015}
Furthermore, by using DWM1000 modules external to \SB{} as ``fixed anchors'' and placing them around the testing area, a world reference frame is obtained for ground truth and for generating a reward signal for the machine learning algorithms used for learning locomotion.
%Our intention is that the fixed anchors will not be required in the final deployed
%version of the robot, and are primarily for use during algorithm development.
It is intended that the final deployed version of the robot and controller will not require fixed anchors; they are primarily used during algorithm development.
% We first introduce the basic sensor operation and our approach to efficiently estimate distances between a large number of sensor modules.
% This is followed by a discussion of our ranging software and hardware setup.
% Finally, we provide a calibration routine similar to a common motion capture system that allows for quick set up of the sensor network.
\subsection{Sensor Operation} \subsubsection{Bidirectional Ranging} The DWM1000 modules are operated in the so-called \emph{symmetric double-sided two-way ranging} mode. In this mode, the modules exchange $3$ packets to estimate the time-of-flight between each other. While the time-of-flight of unsynchronized modules can be estimated with the exchange of only $2$ packets, the employed mode can significantly reduce measurement noise~\cite{decawave}. The basic ranging packet exchange is shown in Fig.~\ref{fig:bidirectional_ranging}. Module 1 sends out a \emph{poll} message containing an emission timestamp ($t_{SP}$) using its local clock. Module 2 receives this message and timestamps the time of arrival using its local clock ($t_{RP}$). Then, module 2 sends out a \emph{response} packet at time $t_{SR}$ (module 2's clock). Module 1 receives this packet at time $t_{RR}$ (module 1's clock). Module 1 now sends out a final message containing $t_{RR}$ and the emission time of the final message ($t_{SF}$, clock of module 1). Module 2 receives this information and timestamps it ($t_{RF}$). \begin{figure}[tpbh] \centering \includegraphics[width=.8\linewidth]{tex/img/bidirectional_ranging.pdf} \caption{Basic symmetric double-sided two-way ranging packet exchange. Modules 1 and 2 exchange 3 packets (\emph{poll}, \emph{response}, and \emph{final}).
Module 2 then estimates the distance between the modules based on the local timestamps.} \label{fig:bidirectional_ranging} \end{figure} Module 2 can now estimate the time-of-flight and the distance between itself and module 1 based on the 6 timestamps. The basic equations to estimate the distance between module $i$ and module $j$ (module $i$ initiates the ranging and module $j$ computes the distance) are given by: \begin{eqnarray} %ranging between i and j (j computes distance)
a_{i} &=& t_{SF}^i-t_{SP}^i\\
b_{j,i} &=& t_{RF}^{j,i}-t_{RP}^{j,i}\\ %when j receives i
c_{j,i} &=& t_{RF}^{j,i}-t_{SR}^j\\
d_{i,j} &=& t_{SF}^i-t_{RR}^{i,j} %i receives from j
\end{eqnarray} \begin{eqnarray} {TOF}_{j,i} &\approx& \frac{1}{2}\left(c_{j,i}-d_{i,j}\frac{b_{j,i}}{a_i} \right)-\delta_{j,i}\\ \|\bm{N}_j - \bm{N}_i\| &\approx& \frac{1}{2\bm{C}}\left(c_{j,i}-d_{i,j}\frac{b_{j,i}}{a_i} \right)-o_{j,i} \label{eq:distance_estimation}\\ &\doteq& m_{j,i}-o_{j,i} . \end{eqnarray} The variables $a$, $b$, $c$, and $d$ are also visualized in Fig.~\ref{fig:bidirectional_ranging}. The time-of-flight calculation between two modules $i$ and $j$ ($TOF_{j,i}=TOF_{i,j}$) is hindered by a fixed measurement offset ($\delta_{j,i}=\delta_{i,j}$). This offset is due to antenna delays and other discrepancies between the timestamps and actual packet reception or emission. While this offset might be expected to be unique to each module, it was found necessary to estimate it pairwise for closely located modules. The hypothesis is that the proximity of the robot's motors and the sensor's position near the end cap's metal structure influences the antenna characteristics between pairs of modules. Eq.~\ref{eq:distance_estimation} estimates the distance between the modules based on the time-of-flight calculation ($\bm{C}$ is the speed of light), rewriting the time offset $\delta_{j,i}$ as a distance offset $o_{j,i}$ (with $o_{j,i}=o_{i,j}$). Here $\bm{N}_i$ and $\bm{N}_j$ refer to the positions of nodes $i$ and $j$ respectively (see Section~\ref{txt:ukf}). The variables $m_{j,i}$ represent the uncorrected distance estimates.
%say offset symmetric
%lessons learned: antenna effect, bidirectional offset, restart, rx rx, 1ns pulses
The DWM1000 requires careful configuration for optimal performance. The main configuration settings are provided in Table~\ref{tbl:dwm1000}. The ranging modules tend to measure non line-of-sight paths near reflective surfaces (e.g. floor, computer monitors), which may cause filter instability. Using the DWM1000's built-in signal power estimator, such suspicious packets are rejected. In practice, between $30\%$ and $70\%$ of packets are rejected. \begin{table}[h] \centering \caption{DWM1000 configuration} \label{tbl:dwm1000} \begin{tabular}{llllll} {\bf bitrate} & {\bf channel} & {\bf preamble} & {\bf PRF} & {\bf preamble code} \\ \hline \SI{6.8}{\mega\bit\per\second} & 7 & 256 & \SI{64}{\mega\hertz} & 17 \end{tabular} \end{table} \subsubsection{Broadcast Ranging} Due to the large number of exchanged packets (3 per pair), bidirectional ranging between pairs of modules quickly becomes inefficient when the number of modules grows. An alternative approach was developed using timed broadcast messages that scales linearly in the number of modules (3 packets per module). In this setup one module periodically initiates a measurement sequence by sending out a \emph{poll} message.
When another module receives this message it emits its own \emph{poll} message after a fixed delay based on its ID, followed by \emph{response} and \emph{final} messages after additional delays. Broadcast ranging is illustrated in Fig.~\ref{fig:broadcast_ranging}. \begin{figure}[tpbh] \centering \begin{turn}{270} \includegraphics[width=.6\linewidth]{tex/img/broadcast_ranging.pdf} \end{turn} \caption{Packet exchange between 4 modules for bidirectional pairwise and broadcast ranging. Timed broadcast messages allow for efficient ranging with a large number of modules. } \label{fig:broadcast_ranging} \end{figure} One disadvantage of the broadcasting approach is that the total measurement time between a pair of modules takes longer (up to \SI{60}{\milli\second} in the experimental setup) than a single pairwise bidirectional measurement (approx. \SI{3}{\milli\second}). However, broadcast ranging provides two measurements for each pair of modules per measurement iteration. Note that each module now needs to keep track of the \emph{poll} and \emph{final} packet reception times of all other modules. The \emph{final} packet becomes longer as each module needs to transmit the \emph{response} reception time ($t_{RR}$) of all other modules. %The simplified packet structures are: {%\small %\begin{bytefield}{22} % \begin{rightwordgroup}{poll}{ % \bitbox{7}{preamble} & % \bitbox{2}{0} & % \bitbox{2}{$i$} & % \bitbox{3}{$t_{SP}^i$} & % \bitbox{8}{checksum} % }\end{rightwordgroup} %\end{bytefield} %\begin{bytefield}{19} % \begin{rightwordgroup}{response}{ % \bitbox{7}{preamble} & % \bitbox{2}{1} & % \bitbox{2}{$i$} & % \bitbox{8}{checksum} % }\end{rightwordgroup} %\end{bytefield} %\begin{bytefield}{34} % \begin{rightwordgroup}{final}{ % \bitbox{7}{preamble} & % \bitbox{2}{2} & % \bitbox{2}{$i$} & % \bitbox{3}{$t_{SF}^i$} & % \bitbox{3}{$t_{RR}^{i,1}$} & % \bitbox{3}{\ldots} & % \bitbox{3}{$t_{RR}^{i,n}$} & % \bitbox{8}{checksum} % }\end{rightwordgroup} %\end{bytefield} %} \subsection{Ranging Setup} Each MTR of SUPERball was fitted with a DWM1000 module located approximately \SI{0.1}{\metre} from the end of the strut. To simplify the notation, the top of the MTRs (ends of the struts) and the position of the ranging sensor are assumed the same. In practice, this offset is taken into account in the output function of the filter (see Section~\ref{txt:ukf}). The broadcasting algorithm runs at \SI{15}{\hertz} and packet transmissions are spaced \SI{1}{\milli\second} apart. This allows for over $20$ modules to range. After one ranging iteration, each end cap transmits its measurements over WiFi to the ROS network. A ROS node then combines measurements from all MTRs, along with encoder and IMU data, into a single ROS message at \SI{10}{\hertz}. The fixed anchors operate in a similar way to the end caps, but are not connected to a ROS node and can not directly transmit data to the ROS network. This means that two measurements are obtained (one in each direction) for each pair of modules on the robot, but only a single measurement between the fixed anchors and the modules on the robot. \subsection{Calibration} \label{txt:calib} One of the design goals of this state estimation method is quick deployment in new environments without significant manual calibration. To achieve this, an automatic calibration procedure was implemented to jointly estimate the constellation of fixed modules (anchors, defining an external reference frame) and the pairwise sensor offsets ($o_{i,j}$). 
Calibration is performed, similarly to common motion capture systems, by moving the robot around while recording the uncorrected distance measurements ($m_{j,i}$). After recording a dataset, the reconstruction error $L$ is minimized by optimizing over the offsets $\bm{o}$ ($o_{i,j}$ rearranged as a vector), the estimated anchor locations $\bm{N}^{anchor}$, and the estimated moving module locations $\bm{N}^{float}[1\ldots n_{samples}]$ (i.e. the modules on the robot's end caps): \begin{align} \resizebox{.91\hsize}{!}{$L\left(i,j,t\right) = \left( \|\bm{N}^{anchor}_i - \bm{N}^{float}_{j}\left[t\right]\|-o_{j,i} - m_{i,j}\left[t\right] \right)^2$ \label{eq:l_single}}\\ \resizebox{.91\hsize}{!}{$L\left(\bm{o},\bm{N}^{anchor},\bm{N}^{float}[1\ldots n_{samples}]\right) = \sum_{i,j,t}\alpha_{j,t}L\left(i,j,t\right)$. \label{eq:l_full}} \end{align} The brackets in $\bm{N}^{float}[1\ldots n_{samples}]$ indicate the moving module locations (MTR positions) at a specific timestep. For example $\bm{N}^{float}[5]$ contains the estimated end cap positions at timestep 5 in the recorded dataset. In Eq.~\ref{eq:l_full}, $i$ iterates over anchors, $j$ iterates over moving nodes and $t$ iterates over samples. The indicator variables $\alpha_{j,t}$ are equal to $1$ when, for sample $t$, there are at least $4$ valid measurements between moving module $j$ and the fixed modules (i.e. the number of DOFs reduces). In practice, constraints are added on the bar lengths, which take the same form as Eq.~\ref{eq:l_single} with the offsets set to $0$. BFGS~\cite{battiti1990bfgs} is used to minimize Eq.~\ref{eq:l_full} with a dataset containing approximately $400$ timesteps selected randomly from a few minutes of movement of the robot. Although the algorithm works without prior knowledge, providing the relative positions of $3$ fixed nodes ($3$ manual measurements) significantly improves the success rate, as there are no guarantees on global convergence. Once the external offsets (between the anchors and moving nodes) and the module positions are known, the offsets between moving nodes can be estimated in a straightforward way by computing the difference between the estimated internal distances and the uncorrected distance measurements.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Filter Design} \label{txt:ukf} Tensegrity systems are nonlinear and exhibit hybrid dynamics due to cable slack conditions and interactions with the environment that involve collision and friction. This warrants a robust filter design to track the robot's behavior. The commonly used Extended Kalman Filter (EKF) does not perform well on highly nonlinear systems, where first-order approximations offer poor representations of the propagation of uncertainties. Additionally, the EKF requires the computation of time-derivatives through the system dynamics and output functions, which is challenging for a model with complex hybrid dynamics. The sigma point Unscented Kalman Filter (UKF) does not require derivatives through the system dynamics and is third-order accurate when propagating Gaussian random variables through nonlinear dynamics~\cite{wan2000unscented}. The computational cost of the UKF is comparable to that of the EKF, but for tensegrity systems, which commonly have a large range of stiffnesses and a high number of state variables, the time-update of the sigma points dominates the computational cost.
As such, the methods used to reduce the computational cost of the dynamic simulation are described first, followed in the next section by an outline of the specific UKF implementation for the \SB{} prototype. \subsection{Dynamic Modeling} \label{sec:dynamic_modeling_sb} The UKF requires a dynamic model which balances model fidelity and computational efficiency, since it requires a large number of simulations to be run in parallel. The model implemented for the tensegrity system is a spring-mass net and the following incomplete list of simplifying assumptions was used: \begin{itemize} \item Only point masses located at each node point \item All internal and external forces are applied at nodes \item Members exert only linear stiffness and damping \item Unilateral forcing in cables \item Flat ground at a known height with Coulomb friction \item No bar or string collision modeling \end{itemize}
%This is a common approach for modeling tensegrity systems, and force density approaches for this problem are described in \textcolor{red}{cite}.
Below we describe some careful manipulation of the equations within this force density framework which allowed us to run the parallel simulations while leaving computational bandwidth for other requisite operations such as communication and data visualization. For a tensegrity with $n$ nodes and $m$ members, the member force densities, $\boldsymbol{q}\in\mathbb{R}^{m}$, can be transformed into nodal forces, $\boldsymbol{F_m}\in\mathbb{R}^{n\times 3}$, by using the current Cartesian nodal positions, $\boldsymbol{N}\in\mathbb{R}^{n\times 3}$, and the connectivity matrix, $\boldsymbol{C}\in\mathbb{R}^{m\times n}$, as described in \cite{skelton2009tensegrity}. This operation is described by the equation: $$ \boldsymbol{F_m} = \boldsymbol{C}^{T} diag(\boldsymbol{q}) \boldsymbol{C} \boldsymbol{N}, $$ where $diag(\cdot)$ represents the creation of a diagonal matrix with the vector argument along its main diagonal. First, note that $\boldsymbol{C} \boldsymbol{N}$ produces a matrix $\boldsymbol{U}\in\mathbb{R}^{m\times 3}$ where each row corresponds to a vector that points between the $i$th and $j$th nodes spanned by each member. Therefore, this first matrix multiplication can be replaced with vector indexing as $\boldsymbol{U}_{k} = \boldsymbol{N}_{i} - \boldsymbol{N}_{j}$, where the notation $\boldsymbol{U}_{k}$ is used to denote the $k$th row of matrix $\boldsymbol{U}$. If one then computes $\boldsymbol{V}=\boldsymbol{C}\frac{d\boldsymbol{N}}{dt}$ with the same method
%%%% NOTE This was the old sentence, but X_{l} didn't make sense... So I assumed it was suppose to be U_{k}. Is that correct?
%where we use the notation $\boldsymbol{X}_{l}$ to denote the $l$th row of matrix $\boldsymbol{X}$. If we then compute $\boldsymbol{V}=\boldsymbol{C}\frac{d\boldsymbol{N}}{dt}$ with the same method
as $\boldsymbol{U}$, one would obtain a matrix of relative member velocities. The matrices $\boldsymbol{U}$ and $\boldsymbol{V}$ are used to calculate member lengths as $L_k = |\boldsymbol{U}_k|_2$ and member length rates as $\frac{d}{dt}(L_k) = \frac{\boldsymbol{U}_k(\boldsymbol{V}_k)^T}{L_k}.$ Member force densities, $\boldsymbol{q}$, are then calculated using Hooke's law and viscous damping as: $$ \boldsymbol{q}_k = K_k(1 - \frac{L_{0k}}{L_k}) - \frac{c_k}{L_k} \frac{d}{dt}(L_k). $$ Here $K_k$ and $c_k$ denote the $k$th member's stiffness and damping constants, respectively. Note that cables require some additional case handling to ensure unilateral forcing.
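A compact sketch of the member force-density computation described so far is given below. It is illustrative only: member stiffnesses $K$, damping constants $c$, rest lengths $L_0$, the member node-index pairs and a boolean cable mask are assumed given, and slack cables are handled with a simple clamp.
\begin{verbatim}
import numpy as np

def member_force_densities(N, dN, pairs, K, c, L0, is_cable):
    """Force density q_k of each member spanning the node pair (i, j) = pairs[k]."""
    i, j = pairs[:, 0], pairs[:, 1]
    U = N[i] - N[j]                               # member vectors (m x 3)
    V = dN[i] - dN[j]                             # relative member velocities
    L = np.linalg.norm(U, axis=1)                 # member lengths
    dL = np.sum(U * V, axis=1) / L                # member length rates
    q = K * (1.0 - L0 / L) - c * dL / L           # Hooke's law plus viscous damping
    q = np.where(is_cable & (q < 0.0), 0.0, q)    # slack cables cannot push
    return q, U
\end{verbatim}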
Scaling each $\boldsymbol{U}_k$ by $\boldsymbol{q}_k$ yields a matrix whose rows correspond to the vector forces of the members. Denote this matrix as $\boldsymbol{U}^q\in\mathbb{R}^{m\times 3}$, and note that $\boldsymbol{U}^q = diag(\boldsymbol{q}) \boldsymbol{C} \boldsymbol{N}$. Thus this matrix of member forces can be easily applied to the nodes using: $$ \boldsymbol{F_m} = \boldsymbol{C}^{T} \boldsymbol{U}^q. $$ A method for computing the nodal forces exerted by the members is now obtained, and only ground interaction forces need to be computed, which will be denoted as $\boldsymbol{F}_g$. Ground interaction forces were computed using the numerical approach in~\cite{yamane2006stable}. The nodal accelerations can then be written as: $$ \frac{d^2\boldsymbol{N}}{dt^2} = \boldsymbol{M}^{-1}(\boldsymbol{F}_m+ \boldsymbol{F}_g) - \boldsymbol{G}, $$ where $\boldsymbol{M}\in\mathbb{R}^{n\times n}$ is a diagonal matrix whose diagonal entries are the masses of each node and $\boldsymbol{G}\in\mathbb{R}^{n\times 3}$ is a matrix with identical rows equal to the vector acceleration due to gravity. It is then straightforward to simulate this second order ODE using traditional numerical methods. Note also that it is possible to propagate many parallel simulations efficiently by concatenating multiple $\boldsymbol{N}$ matrices column-wise to produce $\boldsymbol{N}_{\parallel}\in\mathbb{R}^{n\times 3l}$ for $l$ parallel simulations. The resultant vectorization of many of the operations yields significant gains in computational speed with some careful handling of matrix dimensions. \subsection{UKF Implementation} A traditional UKF was implemented as outlined in \cite{wan2000unscented} with additive Gaussian noise for state variables and measurements. Several parameters are defined for tuning the behavior of the UKF, namely $\alpha$, $\beta$ and $\kappa$, where $\alpha$ determines the spread of the sigma points generated by the unscented transformation, $\beta$ is used to incorporate prior knowledge of the distribution, and $\kappa$ is a secondary scaling parameter. These parameters were hand-tuned to the values $\alpha = 0.0139$, $\beta = 2$ (the optimal choice for Gaussian distributions) and $\kappa = 0$, which was found to yield an adequately stable filter. The state variables $\boldsymbol{N}$ and $\frac{d\boldsymbol{N}}{dt}$ are stacked in a vector $\boldsymbol{y}\in\mathbb{R}^{L}$, where $L = 6n$ is the number of state variables. Independent state noise is assumed with variance $\lambda_y = 0.4$ and thus with covariance $\boldsymbol{R} = \lambda_y\bm{I}_L$. \begin{figure}[tpbh] \centering \includegraphics[width=0.7\linewidth]{tex/img/flow_chartSB.pdf} \caption{Block diagram of data flow within the system. Red signals are passed as ROS messages and blue signals are passed using the ranging modules. Note that each rod contains two ranging sensors located at each end of the rod. The gray control strategy block represents a to-be-designed state-feedback control strategy.} \label{fig:UKFflowChart} \end{figure}
%For measurements we take the minimum angle between each bar vector and the z-axis, $\theta\in\mathbb{R}^{b}$ where $b$ is the number of bar angles available at the given time step and all ranging measures, $\boldsymbol{r}\in\mathbb{R}^{a}$, where $a$ is the number of ranging measures available at a given time step.
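To make the role of $\alpha$, $\beta$ and $\kappa$ concrete, the following sketch generates sigma points and weights according to the standard unscented transformation of \cite{wan2000unscented}. It is an illustrative sketch, not the exact implementation used for \SB{}.
\begin{verbatim}
import numpy as np

def sigma_points(y, P, alpha=0.0139, beta=2.0, kappa=0.0):
    """Sigma points and weights of the unscented transformation (sketch)."""
    L = y.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)            # S @ S.T = (L + lam) * P
    X = np.hstack([y[:, None],
                   y[:, None] + S,
                   y[:, None] - S])                  # 2L + 1 sigma points
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))   # mean weights
    Wc = Wm.copy()                                   # covariance weights
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1.0 - alpha**2 + beta)
    return X, Wm, Wc
\end{verbatim}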
The measurement data used are the estimated orientations from the robot's IMUs, computed with a gradient descent AHRS algorithm based on~\cite{madgwick2011estimation}, $\theta\in\mathbb{R}^{b}$, where $b$ is the number of bar angles available at the given time step, and all ranging measures, $\boldsymbol{r}\in\mathbb{R}^{a}$, where $a$ is the number of ranging measures available at a given time step. Independent noise is again assumed and represented by $\lambda_\theta$ and $\lambda_r$. The measurement covariance matrix is then defined as: $$ \boldsymbol{Q} = \left[ \begin{array}{ccc} \lambda_\theta\bm{I}_b & \boldsymbol{0} \\ \boldsymbol{0} & \lambda_r\bm{I}_a \end{array} \right]. $$ These user-defined variables are then used within the framework of the UKF to forward propagate both the current expected value of the state as well as its covariance. Fig.~\ref{fig:UKFflowChart} shows an overview of the complete state estimation setup. \section{Filter Evaluation} \subsection{Experimental Setup} \begin{figure}[tpbh] \centering \includegraphics[width=\linewidth]{tex/img/matlab_figure_ranging_b45.pdf} \caption{Visualization of the UKF output. \SB{} sits in the middle of the plot surrounded by 8 ranging base stations. Lines between the robot and the base stations indicate valid ranging measures during this timestep.} \label{fig:SUPERballMATLAB} \end{figure} To evaluate the performance of the UKF, eight ``fixed anchor'' ranging base stations are used and calibrated as detailed in Section~\ref{txt:calib}. Each end cap of \SB{} was then able to get a distance measurement to each base station. This information was sent over ROS, along with IMU data (yaw, pitch, roll) and cable rest lengths, to the UKF. The base stations were placed in a pattern to cover an area of approximately \SI{91}{\meter^2}. Each base station's location relative to the others may be seen in Fig.~\ref{fig:SUPERballMATLAB}. \SB{} and the base stations were then used to show the UKF tracking a local trajectory of end caps and a global trajectory of the robotic system. In each of these experiments, the UKF was allowed time to settle from initial conditions upon starting the filter. This ensured that any erroneous states due to poor initial conditioning did not affect the filter's overall performance. \subsection{Local Trajectory Tracking} \begin{figure}[tpbh] \centering \includegraphics[width=1\linewidth]{tex/img/Node_tracking_1-11.pdf} \caption{Position plotted through time for both end cap 1 and end cap 2. The thin line represents the position output measured by the camera tracking system, and the bold line represents the position output from the UKF filter. As expected, there is a time domain lag between the measured and estimated positions.} \label{fig:smalldisplacement} \end{figure} In order to track a local trajectory, \SB{} remained stationary while two of its actuators tracked phase-shifted stepwise sinusoidal patterns. During the period of actuation, two end cap trajectories were tracked on \SB{} and compared to the trajectory outputs of the UKF. One end cap was directly connected to an actuated cable (end cap 2), while the other end cap had no actuated cables affixed to it (end cap 1). To obtain a ground truth for the position trajectory, a camera that measured the position of each end cap was positioned next to the robot. Both end caps started at the same relative height and the majority of movement of both fell within the plane parallel to the camera.
Fig.~\ref{fig:smalldisplacement} shows the measured and UKF global positions of the two end caps through time.
% For this experiment, the cables between end caps 1 and 11 and end caps 12 and 8 were actuated.
% The UKF is able to track the end cap movements quite well with some displacement error in the Y position for end cap 1.
% Upon further inspection of the input data to the UKF, there was a high packet loss between end cap 1 and the base stations.
% This coupled with a mismatched base model, might be the cause for this error.
\subsection{Global Trajectory Tracking} \begin{figure}[tpbh] \centering \includegraphics[width=\linewidth]{tex/img/top_view.pdf} \caption{Top down view of the triangular faces to which the robot transitions during the global trajectory tracking experiment for various settings of the state estimator. The small inset illustrates the movement of the robot. The line shows the estimated center of mass (CoM) using the \emph{full} settings. Finding the initial position (origin) is hard for all settings, and without the IMUs the estimator does not find the correct initial face. After a first roll, tracking becomes more accurate. The offsets $\bm{o}$ have a minimal impact, which indicates that the calibration routine is sufficiently accurate. } \label{fig:3roll_triangles} \end{figure} For global trajectory tracking, \SB{} was actuated to induce a transition from one base triangle rolling through to another base triangle as presented in \cite{sabelhaus2015system}.
%The state of \SB{} was tracked using the UKF.
Ground truth for this experiment was ascertained by marking and measuring the positions of each base triangle's end caps before and after a face transition.
%Fig.~\ref{fig:3roll_perspective} shows a 3D plot of the UKF generated states for the beginning and end of the experiment.
%Each colored triangle represents a base triangle and the robot implements two full transitions starting from the red triangle and ending on the blue.
Four settings of the state estimator were evaluated. \emph{Full}: The state estimator as described in Section~\ref{txt:ukf} with all IMU and ranging sensors. \emph{no IMU}: Only the ranging sensors are enabled. \emph{full w. cst. offset}: Same as \emph{full}, but the offsets $\bm{o}$ are set to a constant instead of optimized individually. \emph{4 base station ranging sensors}: 50\% of the base station ranging sensors are disabled. The results of this experiment are presented in Fig.~\ref{fig:3roll_triangles}~and~\ref{fig:3roll_xz_position}. \begin{figure}[tpbh] \centering \includegraphics[width=\linewidth]{tex/img/roll_x_z_zoom.pdf} \caption{X and Y position of end cap 12 as a function of time for the various estimator settings. The end cap was initially off the ground and touches the ground after the first roll. This is not tracked correctly when the IMUs are disabled. The system works as expected when 4 base station ranging sensors are disabled, but with slower convergence and more noise on the robot's position. Around \SI{60}{s} there is a spurious IMU value from which the state estimator recovers. } \label{fig:3roll_xz_position} \end{figure}
{ "alphanum_fraction": 0.7598408104, "avg_line_length": 76.1432506887, "ext": "tex", "hexsha": "fc3f14c1726fdaee1c6e9f562609b48810e753f9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "92c7f0bdaecde6bce2c6ee47d401e0335e449d6b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "JEB12345/Advancement_UCSC", "max_forks_repo_path": "Dissertation/tex/StateEstimation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "92c7f0bdaecde6bce2c6ee47d401e0335e449d6b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "JEB12345/Advancement_UCSC", "max_issues_repo_path": "Dissertation/tex/StateEstimation.tex", "max_line_length": 670, "max_stars_count": null, "max_stars_repo_head_hexsha": "92c7f0bdaecde6bce2c6ee47d401e0335e449d6b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "JEB12345/Advancement_UCSC", "max_stars_repo_path": "Dissertation/tex/StateEstimation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7001, "size": 27640 }
\subsection{Class Function} \definedin{CFG.h} The Function class represents the portion of the program CFG that is reachable through intraprocedural control flow transfers from the function's entry block. Functions in the ParseAPI have only a single entry point; multiple-entry functions such as those found in Fortran programs are represented as several functions that ``share'' a subset of the CFG. Functions may be non-contiguous and may share blocks with other functions. \begin{center} \begin{tabular}{ll} \toprule FuncSource & Meaning \\ \midrule RT & recursive traversal (default) \\ HINT & specified in CodeSource hints \\ GAP & speculative parsing heuristics \\ GAPRT & recursive traversal from speculative parse \\ ONDEMAND & dynamically discovered at runtime \\ \bottomrule \end{tabular} \end{center} \apidesc{Return type of function \code{src()}; see description below.} \begin{center} \begin{tabular}{ll} \toprule FuncReturnStatus & Meaning \\ \midrule UNSET & unparsed function (default) \\ NORETURN & will not return \\ UNKNOWN & cannot be determined statically \\ RETURN & may return \\ \bottomrule \end{tabular} \end{center} \apidesc{Return type of function \code{retstatus()}; see description below.} \begin{apient} typedef std::vector<Block*> blocklist
typedef std::set<Edge*> edgelist \end{apient} \apidesc{Containers for block and edge access. Library users \emph{must not} rely on the underlying container type of std::set/std::vector lists, as it is subject to change.} \begin{tabular}{p{1.25in}p{1.125in}p{3.125in}} \toprule Method name & Return type & Method description \\ \midrule name & string & Name of the function. \\ addr & Address & Entry address of the function. \\ entry & Block * & Entry block of the function. \\ parsed & bool & Whether the function has been parsed. \\ blocks & blocklist \& & List of blocks contained by this function sorted by entry address. \\ callEdges & edgelist \& & List of outgoing call edges from this function. \\ returnBlocks & blocklist \& & List of all blocks ending in return edges. \\ exitBlocks & blocklist \& & List of all blocks that end the function, including blocks with no out-edges. \\ hasNoStackFrame & bool & True if the function does not create a stack frame. \\ savesFramePointer & bool & True if the function saves a frame pointer (e.g. \%ebp). \\ cleansOwnStack & bool & True if the function tears down stack-passed arguments upon return. \\ region & CodeRegion * & Code region that contains the function. \\ isrc & InstructionSource * & The InstructionSource for this function. \\ obj & CodeObject * & CodeObject that contains this function. \\ src & FuncSource & The type of hint that identified this function's entry point. \\ retstatus & FuncReturnStatus & Returns the best-effort determination of whether this function may return or not. Return status cannot always be statically determined, and at most can guarantee that a function \emph{may} return, not that it \emph{will} return. \\ getReturnType & Type * & Type representing the return type of the function. \\ \bottomrule \end{tabular} \begin{apient} Function(Address addr, string name, CodeObject * obj, CodeRegion * region, InstructionSource * isource) \end{apient} \apidesc{Creates a function at \code{addr} in the code region specified.
Instructions for this function are given in \code{isource}.} \begin{apient} std::vector<FuncExtent *> const& extents() \end{apient} \apidesc{Returns a list of contiguous extents of binary code within the function.} \begin{apient} void setEntryBlock(Block * new_entry) \end{apient} \apidesc{Set the entry block for this function to \code{new\_entry}.} \begin{apient} void set_retstatus(FuncReturnStatus rs) \end{apient} \apidesc{Set the return status for the function to \code{rs}.} \begin{apient} void removeBlock(Block *) \end{apient} \apidesc{Remove a basic block from the function.}
{ "alphanum_fraction": 0.7503187962, "avg_line_length": 40.4226804124, "ext": "tex", "hexsha": "e3a1c4930ae9a28279041f106c545cfc8df692ac", "lang": "TeX", "max_forks_count": 18, "max_forks_repo_forks_event_max_datetime": "2021-10-14T10:17:39.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-04T03:44:22.000Z", "max_forks_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "Vtech181/Path_Armor", "max_forks_repo_path": "Dyninst-8.2.1/parseAPI/doc/API/Function.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "Vtech181/Path_Armor", "max_issues_repo_path": "Dyninst-8.2.1/parseAPI/doc/API/Function.tex", "max_line_length": 430, "max_stars_count": 47, "max_stars_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "Vtech181/Path_Armor", "max_stars_repo_path": "Dyninst-8.2.1/parseAPI/doc/API/Function.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T11:23:59.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-14T23:12:32.000Z", "num_tokens": 973, "size": 3921 }
\filetitle{rmse}{Compute RMSE for given observations and predictions}{tseries/rmse} \paragraph{Syntax}\label{syntax} \begin{verbatim} [Rmse,Pe] = rmse(Obs,Pred) [Rmse,Pe] = rmse(Obs,Pred,Range,...) \end{verbatim} \paragraph{Input arguments}\label{input-arguments} \begin{itemize} \item \texttt{Obs} {[} tseries {]} - Input data with observations. \item \texttt{Pred} {[} tseries {]} - Input data with predictions (a different prediction horizon in each column); \texttt{Pred} is typically the outcome of the Kalman filter, \href{model/filter}{\texttt{model/filter}} or \href{VAR/filter}{\texttt{VAR/filter}}, called with the option \texttt{'ahead='}. \item \texttt{Range} {[} numeric \textbar{} \texttt{Inf} {]} - Date range on which the RMSEs will be evaluated; \texttt{Inf} means the entire possible range available. \end{itemize} \paragraph{Output arguments}\label{output-arguments} \begin{itemize} \item \texttt{Rmse} {[} numeric {]} - Numeric array with RMSEs for each column of \texttt{Pred}. \item \texttt{Pe} {[} tseries {]} - Prediction errors, i.e.~the difference \texttt{Obs - Pred} evaluated within \texttt{Range}. \end{itemize} \paragraph{Description}\label{description} \paragraph{Example}\label{example}
\subsubsection{Advection vs. Diffusion Sensitivity \textsc{Cyder} Results}

Some of the radionuclide transport models in \Cyder depend on the advective velocity as well as the diffusion characteristics of the medium. Evaluating the sensitivity of radionuclide transport in the Mixed Cell model to the advective velocity and reference diffusivity showed that the \Cyder tool reproduces trends similar to those found with the \gls{GDSM}. Specifically, increased advection and increased diffusion both lead to greater release. Also, when both are varied, a boundary between the diffusive and advective regimes can be seen. An example of these results is shown in Figure \ref{fig:dr_adv_diff}.

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{./results/images/adv_vel_diff.eps}
\caption[Advection vs. Diffusion Sensitivity in \textsc{Cyder}]{Dual advective velocity and reference diffusivity sensitivity for a non-sorbing, infinitely soluble nuclide.}
\label{fig:dr_adv_diff}
\end{figure}
\section{Syntactic Extensions: Functions and Feature Terms} \subsection{Functions} \subsection{Feature Terms}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Methods % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Modeling}\label{modeling} In order to understand how signatures of the heating frequency are manifested in the emission measure slope and time lag, we predict the emission over the entire \AR{} as observed by SDO/AIA for a range of nanoflare heating frequencies. To do this, we have constructed an advanced forward modeling pipeline through a combination of magnetic field extrapolations, field-aligned hydrodynamic simulations, and atomic data\footnote{Our forward modeling pipeline, called synthesizAR, is modular and flexible and written entirely in Python. The complete source code, along with installation instructions and documentation, are available here: \href{https://github.com/wtbarnes/synthesizAR}{github.com/wtbarnes/synthesizAR}}. In the following section, we discuss each step of our pipeline in detail. %spell-checker: disable \begin{pycode}[manager_methods] manager_methods = texfigure.Manager( pytex, './', python_dir='python', fig_dir='figures', data_dir='data' ) \end{pycode} %spell-checker: enable %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Field Extrapolation %%%%%%%%%%%%%%%%%%%% \subsection{Magnetic Field Extrapolation}\label{field} %spell-checker: disable \begin{pycode}[manager_methods] from sunpy.instr.aia import aiaprep from sunpy.physics.differential_rotation import diffrot_map #################################################### # Data Prep # #################################################### aia_map = Map(manager_methods.data_file('aia_171_observed.fits')) hmi_map = Map(manager_methods.data_file('hmi_magnetogram.fits')) # AIA aia_map = diffrot_map(aiaprep(aia_map), time=hmi_map.date, rot_type='snodgrass') aia_map = aia_map.submap( SkyCoord(-440, -375, unit=u.arcsec, frame=aia_map.coordinate_frame), SkyCoord(-140, -75, unit=u.arcsec, frame=aia_map.coordinate_frame), ) # HMI hmi_map = hmi_map.rotate(order=3) hmi_map = aiaprep(hmi_map).submap( aia_map.bottom_left_coord, aia_map.top_right_coord) #################################################### # Plot # #################################################### fig = plt.figure(figsize=texfigure.figsize( pytex, scale=1 if is_onecolumn() else 2, height_ratio=0.5, figure_width_context='columnwidth' )) plt.subplots_adjust(wspace=0.03) ### HMI ### ax = fig.add_subplot(121, projection=hmi_map) hmi_map.plot( title=False,annotate=False, norm=matplotlib.colors.SymLogNorm(50, vmin=-7.5e2, vmax=7.5e2), cmap='better_RdBu_r', ) ax.grid(alpha=0) # HPC Axes lon,lat = ax.coords[0],ax.coords[1] lat.set_ticklabel(rotation='vertical') lon.set_axislabel(r'Helioprojective Longitude',) lat.set_axislabel(r'Helioprojective Latitude',) # HGS Axes hgs_lon,hgs_lat = aia_map.draw_grid(axes=ax,grid_spacing=10*u.deg,alpha=0.5,color='k') hgs_lat.set_axislabel_visibility_rule('labels') hgs_lon.set_axislabel_visibility_rule('labels') hgs_lat.set_ticklabel_visible(False) hgs_lon.set_ticklabel_visible(False) hgs_lat.set_ticks_visible(False) hgs_lon.set_ticks_visible(False) ### AIA ### ax = fig.add_subplot(122, projection=aia_map,) # Plot image aia_map.plot( title=False,annotate=False, norm=ImageNormalize(vmin=0,vmax=5e3,stretch=AsinhStretch(0.1))) # Plot fieldlines ar = synthesizAR.Field.restore(os.path.join(manager_methods.data_dir, 'base_noaa1158'), lazy=False) for l in ar.loops[::10]: c = l.coordinates.transform_to(aia_map.coordinate_frame) ax.plot_coord(c, '-', color='w', lw=0.5, 
alpha=0.25) ax.grid(alpha=0) # HMI Contours hmi_map.draw_contours( u.Quantity([-5,5], '%'), axes=ax, colors=[seaborn_deep[0], seaborn_deep[3]], linewidths=0.75 ) # HPC Axes lon,lat = ax.coords[0],ax.coords[1] lon.set_ticks(color='w') lat.set_ticks(color='w') lat.set_ticklabel_visible(False) lon.set_axislabel('') lat.set_axislabel_visibility_rule('labels') # HGS Axes hgs_lon,hgs_lat = aia_map.draw_grid(axes=ax,grid_spacing=10*u.deg,alpha=0.5,color='w') hgs_lat.set_axislabel_visibility_rule('labels') hgs_lon.set_axislabel_visibility_rule('labels') hgs_lat.set_ticklabel_visible(False) hgs_lon.set_ticklabel_visible(False) hgs_lat.set_ticks_visible(False) hgs_lon.set_ticks_visible(False) #################################################### # Save figure # #################################################### fig_aia_hmi_lines = manager_methods.save_figure('magnetogram',) fig_aia_hmi_lines.caption = r'Active region NOAA 1158 on 12 February 2011 15:32:42 UTC as observed by HMI (left) and the 171 \AA{} channel of AIA (right). The gridlines show the heliographic longitude and latitude. The left panel shows the LOS magnetogram and the colorbar range is $\pm750$ G on a symmetrical log scale. In the right panel, 500 out of the total 5000 field lines are overlaid in white and the red and blue contours show the HMI LOS magnetogram at the $+5\%$ (red) and $-5\%$ (blue) levels.' fig_aia_hmi_lines.figure_env_name = 'figure*' fig_aia_hmi_lines.figure_width = r'\columnwidth' if is_onecolumn() else r'2\columnwidth' fig_aia_hmi_lines.placement = '' fig_aia_hmi_lines.fig_str = fig_str \end{pycode} \py[manager_methods]|fig_aia_hmi_lines| %spell-checker:enable We choose \AR{} NOAA 1158, as observed by the Helioseismic Magnetic Imager \citep[HMI,][]{scherrer_helioseismic_2012} on 12 February 2011 15:32:42 UTC, from the list of active regions studied by \citet{warren_systematic_2012}. The line-of-sight (LOS) magnetogram is shown in the left panel of \autoref{fig:magnetogram}. We model the geometry of \AR{} NOAA 1158 by computing the three-dimensional magnetic field using the oblique potential field extrapolation method of \citet{schmidt_observable_1964} as outlined in \citet[Section 3]{sakurai_greens_1982}. The extrapolation technique of \citeauthor{schmidt_observable_1964} is well-suited for our purposes due to its simplicity and efficiency though we note it is only applicable on the scale of an \AR{}. We include the oblique correction to account for the fact that the \AR{} is off of disk-center. The HMI LOS magnetogram provides the lower boundary condition of the vector magnetic field (i.e. $B_z(x,y,z=0)$) for our field extrapolation. We crop the magnetogram to an area of 300\arcsec-by-300\arcsec centered on $(\py[manager_methods]|f'{ar.magnetogram.center.Tx.value:.2f}'|\arcsec,\py[manager_methods]|f'{ar.magnetogram.center.Ty.value:.2f}'|\arcsec)$ and resample the image to 100-by-100 pixels to reduce the computational cost of the field extrapolation. Additionally, we define our extrapolated field to have a dimension of 100 pixels and spatial extent of $0.3R_{\sun}$ in the $z-$direction such that each component of our extrapolated vector magnetic field $\vec{B}$ has dimensions $(100,100,100)$. 
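As a concrete illustration of this preprocessing step, the cropping and resampling of the boundary magnetogram can be expressed with standard SunPy map operations. The snippet below is only a sketch: the file name is a placeholder, the corner coordinates follow the 300\arcsec-by-300\arcsec{} field of view used in the figure code above, and it is not the actual pipeline code.

\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord
import sunpy.map

# LOS magnetogram used as the lower boundary condition (file name illustrative)
hmi = sunpy.map.Map('hmi_magnetogram.fits')

# 300"x300" field of view centered on the active region
bottom_left = SkyCoord(-440 * u.arcsec, -375 * u.arcsec, frame=hmi.coordinate_frame)
top_right = SkyCoord(-140 * u.arcsec, -75 * u.arcsec, frame=hmi.coordinate_frame)
cropped = hmi.submap(bottom_left, top_right)

# Downsample to 100x100 pixels to reduce the cost of the extrapolation
boundary = cropped.resample(u.Quantity([100, 100], u.pix))   # B_z(x, y, z=0)
\end{verbatim}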
%spell-checker: disable \begin{pycode}[manager_methods] fig = plt.figure(figsize=texfigure.figsize( pytex, scale=0.5 if is_onecolumn() else 1, height_ratio=1.0, figure_width_context='columnwidth' )) ax = fig.gca() vals,bins,_ = ax.hist( [l.full_length.to(u.Mm).value for l in ar.loops[:]], bins='scott', color='k', histtype='step', lw=plt.rcParams['lines.linewidth']) ax.set_xlabel(r'$L$ [Mm]'); ax.set_ylabel(r'Number of Loops'); ax.set_ylim(-100,1300) ax.set_xlim(-1,260) # Spines ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['left'].set_bounds(ax.get_yticks()[1], ax.get_yticks()[-2]) ax.spines['bottom'].set_bounds(ax.get_xticks()[1], ax.get_xticks()[-2]) fig_loop_dist = manager_methods.save_figure('loops',) fig_loop_dist.caption = r'Distribution of footpoint-to-footpoint lengths (in Mm) of the 5000 field lines traced from the field extrapolation computed from the magnetogram of NOAA 1158.' fig_loop_dist.figure_width = r'0.5\columnwidth' if is_onecolumn() else r'\columnwidth' fig_loop_dist.placement = '' fig_loop_dist.fig_str = fig_str \end{pycode} \py[manager_methods]|fig_loop_dist| %spell-checker:enable After computing the three-dimensional vector field from the observed magnetogram, we trace $5\times10^3$ field lines through the extrapolated volume using the streamline tracing functionality in the yt software package \citep{turk_yt_2011}. \added{We choose $5\times10^3$ lines in order to balance computational cost with the need to make the resulting emission approximately volume filling. We place the seed points for the field line tracing at the lower boundary ($z=0$) of the extrapolated vector field in areas of strong, positive polarity in $B_z$.} Furthermore, we keep only closed field lines in the range $20<L<300$ Mm, where $L$ is the full length of the field line. The right panel of \autoref{fig:magnetogram} shows a subset of the traced field lines overlaid on the observed AIA 171 \AA{} image of NOAA 1158. Contours from the observed HMI LOS magnetogram are shown in red (positive polarity) and blue (negative polarity). A qualitative comparison between the extrapolated field lines and the loops visible in the AIA 171 \AA{} image reveals that the field extrapolation and line tracing adequately capture the three-dimensional geometry of the \AR{}. \autoref{fig:loops} shows the distribution of footpoint-to-footpoint lengths for all of the traced field lines. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Loop Hydrodynamics %%%%%%%%%%%%%%%%%%%%%% \subsection{Hydrodynamic Modeling}\label{loops} Due to the low-$\beta$ nature of the corona, we can treat each field line traced from the field extrapolation as a thermally-isolated strand. We use the Enthalpy-based Thermal Evolution of Loops model \citep[EBTEL,][]{klimchuk_highly_2008,cargill_enthalpy-based_2012,cargill_enthalpy-based_2012-1}, specifically the two-fluid version of EBTEL \citep{barnes_inference_2016}, to model the thermodynamic response of each strand. The two-fluid EBTEL code solves the time-dependent, two-fluid hydrodynamic equations spatially-integrated over the corona for the electron pressure and temperature, ion pressure and temperature, and density. The two-fluid EBTEL model accounts for radiative losses in both the transition region and corona, thermal conduction (including flux limiting), and binary Coulomb collisions between electrons and ions. The time-dependent heating input is configurable and can be deposited in the electrons and/or ions. 
A detailed description of the model and a complete derivation of the two-fluid EBTEL equations can be found in Appendix B of \citet{barnes_inference_2016}. For each of the $5\times10^3$ strands, we run a separate instance of the two-fluid EBTEL code for $3\times10^4$ s of simulation time to model the time-dependent, spatially-averaged coronal temperature and density. For each simulation, the loop length is determined from the field extrapolation. We include flux limiting in the heat flux calculation and use a flux limiter constant of 1 \citep[see Equations 21 and 22 of][]{klimchuk_highly_2008}. Additionally, we choose to deposit all of the energy into the electrons \added{though we note that preferentially energizing one species over another will not significantly impact the cooling behavior of the loop as the two species will have had sufficient time to equilibrate \citep{barnes_inference_2016,barnes_inference_2016-1}}. To map the results back to the extrapolated field lines, we assign a single temperature and density to every point along the strand at each time step. Though EBTEL only computes spatially-averaged quantities in the corona, its efficiency allows us to calculate time-dependent solutions for many thousands of strands in a few minutes. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Heating %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Heating Model}\label{heating} We parameterize the heating input in terms of discrete heating pulses on a single strand with triangular profiles of duration $\tau_{\textup{event}}=200$ s. For each event $i$, there are two parameters: the peak heating rate $q_i$ and the waiting time prior to the event $\twait[,i]$. We define the waiting time such that $\twait[,i]$ is the amount of time between when event $i-1$ ends and event $i$ begins. Following the approach of \citet{cargill_active_2014}, we relate the waiting time and the event energy such that $\twait[,i]\propto q_i$. The physical motivation for this scaling is as follows. In the nanoflare model of \citet{parker_nanoflares_1988}, random convective motions continually stress the magnetic field rooted in the photosphere, leading to the buildup and eventual release of energy. If the field is stressed for a long amount of time without relaxation, large discontinuities will have time to develop in the field, leading to a dramatic release of energy. Conversely, if the field relaxes quickly, there is not enough time for the field to become sufficiently stressed and the resulting energy release will be relatively small. In this work we explore three different heating scenarios: low-, intermediate-, and high-frequency nanoflares. We define the heating frequency in terms of the ratio between the fundamental cooling timescale due to thermal conduction and radiation, $\tau_{\textup{cool}}$, and the average waiting time of all events on a given strand, $\langle \twait\rangle$, \begin{equation}\label{eq:heating_types} \varepsilon = \frac{\langle \twait\rangle}{\tau_{\textup{cool}}} \begin{cases} < 1, & \text{high frequency},\\ \sim1, & \text{intermediate frequency}, \\ > 1, & \text{low frequency}. \end{cases} \end{equation} We choose to parameterize the heating in terms of the cooling time rather than an absolute waiting time as $\tau_{\textup{cool}}\sim L$ \citep[see appendix of][]{cargill_active_2014}. While a waiting time of 2000 s might correspond to low-frequency heating for a 20 Mm strand, it would correspond to high-frequency heating in the case of a 150 Mm strand. 
By parameterizing the heating in this way, we ensure that all strands in the \AR{} are heated at the same frequency relative to their cooling time. \autoref{fig:hydro-profiles} shows the heating rate, electron temperature, and density as a function of time, for a single strand, for the three heating scenarios listed above. % spell-checker: disable % \begin{pycode}[manager_methods] fig,axes = plt.subplots( 3, 1, sharex=True, figsize=texfigure.figsize( pytex, scale=0.5 if is_onecolumn() else 1, height_ratio=1.25, figure_width_context='columnwidth' ) ) plt.subplots_adjust(hspace=0.) colors = heating_palette() i_loop=680 heating = ['high_frequency', 'intermediate_frequency','low_frequency'] loop = ar.loops[i_loop] for i,h in enumerate(heating): loop.parameters_savefile = os.path.join(manager_methods.data_dir, f'{h}', 'loop_parameters.h5') with h5py.File(loop.parameters_savefile, 'r') as hf: q = np.array(hf[f'loop{i_loop:06d}']['heating_rate']) axes[0].plot(loop.time, 1e3*q, color=colors[i], label=h.split('_')[0].capitalize(),) axes[1].plot(loop.time, loop.electron_temperature[:,0].to(u.MK), color=colors[i],) axes[2].plot(loop.time, loop.density[:,0]/1e9, color=colors[i],) # Legend axes[0].legend(ncol=3,loc="lower center", bbox_to_anchor=(0.5,1.02),frameon=False,) # Labels and limits axes[0].set_xlim(0,3e4) axes[0].set_yticks([5,15,25]) axes[1].set_ylim(0.1,8) axes[1].set_yticks([2,4,6,8]) axes[2].set_ylim(0,2) #axes[2].set_yticks([0.5,1,1.5]) axes[0].set_ylabel(r'$Q$ [10$^{-3}$ erg$/$cm$^{3}$$/$s]') axes[1].set_ylabel(r'$T$ [MK]') axes[2].set_ylabel(r'$n$ [10$^9$ cm$^{-3}$]') axes[2].set_xlabel(r'$t$ [s]') # Spines axes[0].spines['bottom'].set_visible(False) axes[0].spines['top'].set_visible(False) axes[0].spines['right'].set_visible(False) axes[0].tick_params(axis='x',which='both',bottom=False) axes[1].spines['top'].set_visible(False) axes[1].spines['bottom'].set_visible(False) axes[1].spines['right'].set_visible(False) axes[1].tick_params(axis='x',which='both',bottom=False) axes[2].spines['top'].set_visible(False) axes[2].spines['right'].set_visible(False) fig_hydro_profiles = manager_methods.save_figure('hydro-profiles') fig_hydro_profiles.caption = r'Heating rate (top), electron temperature (middle), and density (bottom) as a function of time for the three heating scenarios for a single strand. The colors denote the heating frequency as defined in the legend. The strand has a half length of $L/2\approx40$ Mm and a mean field strength of $\bar{B}\approx30$ G.' fig_hydro_profiles.figure_width = r'0.5\columnwidth' if is_onecolumn() else r'\columnwidth' fig_hydro_profiles.fig_str = fig_str \end{pycode} \py[manager_methods]|fig_hydro_profiles| % spell-checker: enable % For a single impulsive event $i$ with a triangular temporal profile of duration $\tau_{\textup{event}}$, the energy density is $E_i=\tau_{\textup{event}}q_i/2$. Summing over all events on all strands that comprise the \AR{} gives the total energy flux injected into the \AR{}, \begin{equation} F_{AR} = \frac{\tau_{\textup{event}}}{2}\frac{\sum_l^{N_{\textup{strands}}}\sum_i^{N_l} q_iL_l}{t_\textup{total}} \end{equation} where $t_\textup{total}$ is the total simulation time, $N_\textup{strands}$ is the total number of strands comprising the \AR{}, and $N_l=(t_\textup{total} + \langle\twait\rangle)/(\tau + \langle\twait\rangle)$ is the total number of events occurring on each strand over the whole simulation. Note that the number of events per strand is a function of both $\varepsilon$ and $\tau_{\textup{cool}}$. 
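To make the bookkeeping in these expressions concrete, the short sketch below (plain Python/NumPy, not our simulation code; the strand properties passed in are placeholders) computes the number of events per strand and the resulting energy flux into the \AR{}, interpreting the bare $\tau$ in the expression for $N_l$ as the event duration $\tau_{\textup{event}}$.

\begin{verbatim}
import numpy as np

tau_event = 200.0   # event duration [s]
t_total = 3.0e4     # total simulation time [s]

def n_events(t_wait_avg):
    # N_l = (t_total + <t_wait>) / (tau_event + <t_wait>)
    return (t_total + t_wait_avg) / (tau_event + t_wait_avg)

def flux_into_ar(peak_rates, lengths):
    # F_AR = (tau_event / 2) * sum_l sum_i q_i * L_l / t_total
    # peak_rates: one array per strand of peak heating rates q_i [erg cm^-3 s^-1]
    # lengths:    full strand lengths L_l [cm]
    total = sum(q.sum() * L for q, L in zip(peak_rates, lengths))
    return 0.5 * tau_event * total / t_total   # [erg cm^-2 s^-1]
\end{verbatim}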
For each heating frequency, we constrain the total flux into the \AR{} to be $F_{\ast}=10^7$ erg cm$^{-2}$ s$^{-1}$ \citep{withbroe_mass_1977} such that $F_{AR}$ must satisfy the condition, \begin{equation}\label{eq:energy_constraint} \frac{| F_{AR}/N_\textup{strands} - F_{\ast} |}{F_{\ast}} < \delta, \end{equation} where $\delta\ll1$. For each strand, we choose $N_l$ events each with energy $E_i$ from a power-law distribution with slope $-2.5$ and fix the upper bound of the distribution to be $\bar{B}_l^2/8\pi$, where $\bar{B}_l$ is the spatially-averaged field strength along the strand $l$ as derived from the field extrapolation. This is the maximum amount of energy made available by the field to heat the strand. We then iteratively adjust the lower bound on the power-law distribution for $E_i$ until we have satisfied \autoref{eq:energy_constraint} within some numerical tolerance. We note that the set of $E_i$ we choose for each strand may not uniquely satisfy \autoref{eq:energy_constraint}. We use the field strength derived from the potential field extrapolation to constrain the energy input to our hydrodynamic model for each strand. While the derived potential field is already in its lowest energy state and thus has no energy to give up, our goal here is only to understand how the distribution of field strength may be related to the properties of the heating. In this way, we use the potential field as a proxy for the non-potential component of the coronal field, with the understanding that we cannot make any quantitative conclusions regarding the amount of available energy or the stability of the field itself. \begin{deluxetable}{lcc} \tablecaption{All three heating models plus the two single-event control models. In the single-event models, the energy flux is not constrained by \autoref{eq:energy_constraint}.\label{tab:heating}} \tablehead{\colhead{Name} & \colhead{$\varepsilon$ (see Eq.\ref{eq:heating_types})} & \colhead{Energy Constrained?}} \startdata high & 0.1 & yes \\ intermediate & 1 & yes \\ low & 5 & yes \\ cooling & 1 event per strand & no \\ random & 1 event per strand & no \enddata \end{deluxetable} In addition to these three multi-event heating models, we also run two single-event control models. In both control models every strand in the \AR{} is heated exactly once by an event with energy $\bar{B}_l^2/8\pi$. In our first control model, the start time of every event is $t=0$ s such that all strands are allowed to cool uninterrupted for $t_\textup{total}=10^4$ s. In the second control model, the start time of the event on each strand is chosen from a uniform distribution over the interval $[0, 3\times10^4]$ s, such that the heating is likely to be out of phase across all strands. In these two models, the energy has not been constrained according to \autoref{eq:energy_constraint} and the total flux into the \AR{} is $(\sum_{l}\bar{B}_l^2L_l)/8\pi t_\textup{total}$. From here on, we will refer to these two models as the ``cooling'' and ``random'' models, respectively. All five heating scenarios are summarized in \autoref{tab:heating}. 
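Returning to the three multi-event scenarios, the event-energy selection described above can be sketched as follows. This is an illustrative implementation only: the inverse-CDF sampler is standard, but the bisection-style update of the lower bound, the bracket, and the tolerance are placeholders, since the text does not specify the exact iteration scheme.

\begin{verbatim}
import numpy as np

def sample_power_law(rng, n, x0, x1, alpha=-2.5):
    # Inverse-CDF sampling of p(E) proportional to E^alpha on [x0, x1];
    # rng is a numpy Generator, e.g. np.random.default_rng()
    u = rng.uniform(size=n)
    a = alpha + 1.0
    return (u * (x1**a - x0**a) + x0**a) ** (1.0 / a)

def choose_event_energies(rng, n_events, L, B_mean,
                          F_star=1e7, t_total=3.0e4, delta=1e-2):
    # Upper bound: mean magnetic energy density available on the strand
    x1 = B_mean**2 / (8.0 * np.pi)
    lo, hi = 1e-8 * x1, x1            # bracket for the power-law lower bound
    for _ in range(200):
        x0 = np.sqrt(lo * hi)
        E = sample_power_law(rng, int(n_events), x0, x1)
        flux = E.sum() * L / t_total  # time-averaged flux from this strand
        if abs(flux - F_star) / F_star < delta:
            return E
        if flux < F_star:
            lo = x0                   # events too weak: raise the lower bound
        else:
            hi = x0                   # events too strong: lower it
    return E
\end{verbatim}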
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Forward Modeling %%%%%%%%%%%%%%%%%%%%%% \subsection{Forward Modeling}\label{forward} \subsubsection{Atomic Physics}\label{atomic} For an optically-thin, high-temperature, low-density plasma, the radiated power per unit volume, or \textit{emissivity}, of a transition $\lambda_{ij}$ of an electron in ion $k$ of element $X$ is given by, \begin{equation} \label{eq:ppuv} P(\lambda_{ij}) = \frac{n_H}{n_e}\mathrm{Ab}(X)N_j(X,k)f_{X,k}A_{ji}\Delta E_{ji}n_e, \end{equation} where $N_j$ is the fractional energy level population of excited state $j$, $f_{X,k}$ is the fractional population of ion $k$, $\mathrm{Ab}(X)$ is the abundance of element $X$ relative to hydrogen, $n_H/n_e\approx0.83$ is the ratio of hydrogen and electron number densities, $A_{ji}$ is the Einstein coefficient, and $\Delta E_{ji}=hc/\lambda_{ij}$ is the energy of the emitted photon \citep[see][]{mason_spectroscopic_1994,del_zanna_solar_2018}. To compute \autoref{eq:ppuv}, we use version 8.0.6 of the CHIANTI atomic database \citep{dere_chianti_1997,young_chianti_2016}. We use the abundances of \citet{feldman_potential_1992} as provided by CHIANTI. For each atomic transition, $A_{ji}$ and $\lambda_{ji}$ can be looked up in the database. To find $N_j$, we solve the level-balance equations for ion $k$, including the relevant excitation and de-excitation processes as provided by CHIANTI \citep[see Section 3.3 of][]{del_zanna_solar_2018}. The ion population fractions, $f_{X,k}$, provided by CHIANTI assume ionization equilibrium (i.e. the ionization and recombination rates are always in balance). However, in the rarefied solar corona, where the plasma is likely heated impulsively, it is not guaranteed that the ionization timescale is less than the heating timescale such that the ionization state may not be representative of the electron temperature \citep{bradshaw_explosive_2006,reale_nonequilibrium_2008,bradshaw_numerical_2009}. To properly account for this effect, we compute $f_{X,k}$ by solving the time-dependent ion population equations for each element using the ionization and recombination rates provided by CHIANTI. The details of this calculation are provided in \autoref{nei}. \subsubsection{Instrument Effects}\label{instrument} % spell-checker: disable % \begin{pycode}[manager_methods] em = EmissionModel.restore(os.path.join(manager_methods.data_dir, 'base_emission_model.json')) data = {'Element': [], 'Number of Ions': [], 'Number of Transitions': [],} for i in em: if not hasattr(i.transitions, 'wavelength'): continue data['Element'].append(i.atomic_symbol) data['Number of Ions'].append(1) data['Number of Transitions'].append(i.transitions.wavelength.shape[0]) df = pd.DataFrame(data=data).groupby('Element').sum().reset_index() z = df['Element'].map(plasmapy.atomic.atomic_number) df = df.assign(z = z).sort_values(by='z', axis=0).drop(columns='z') caption = r"Elements included in the calculation of \autoref{eq:intensity}. 
For each element, we include all ions for which CHIANTI provides sufficient data for computing the emissivity.\label{tab:elements}"
with io.StringIO() as f:
    ascii.write(Table.from_pandas(df), format='aastex', caption=caption, output=f)
    table = f.getvalue()
\end{pycode}
\py[manager_methods]|table|
% spell-checker: enable %

We combine \autoref{eq:ppuv} with the wavelength response function of the instrument to model the intensity as it would be observed by AIA,
\begin{equation}\label{eq:intensity}
I_c = \frac{1}{4\pi}\sum_{\{ij\}}\int_{\text{LOS}}\mathrm{d}hP(\lambda_{ij})R_c(\lambda_{ij})
\end{equation}
where $I_c$ is the intensity for a given pixel in channel $c$, $P(\lambda_{ij})$ is the emissivity as given by \autoref{eq:ppuv}, $R_c$ is the wavelength response function of the instrument for channel $c$ \citep[see][]{boerner_initial_2012}, $\{ij\}$ is the set of all atomic transitions listed in \autoref{tab:elements}, and the integration is along the LOS. Note that when computing the intensity in each channel of AIA, we do not rely on the temperature response functions computed by SolarSoft \citep[SSW,][]{freeland_data_1998} and instead use the wavelength response functions directly. This is because the response functions returned by \texttt{aia\_get\_response.pro} assume both ionization equilibrium and constant pressure. \autoref{effective_response_functions} provides further details on our motivation for recomputing the temperature response functions.

We compute the emissivity according to \autoref{eq:ppuv} for all of the transitions in \autoref{tab:elements} using the temperatures and densities from our hydrodynamic models for all $5\times10^3$ strands. We then compute the LOS integral in \autoref{eq:intensity} by first converting the coordinates of each strand to a helioprojective (HPC) coordinate frame \citep[see][]{thompson_coordinate_2006} using the coordinate transformation functionality in Astropy \citep{the_astropy_collaboration_astropy_2018} combined with the solar coordinate frames provided by SunPy \citep{sunpy_community_sunpypython_2015}. This enables us to easily project our simulated \AR{} along any arbitrary LOS simply by changing the location of the observer that defines the HPC frame. Here, our HPC frame is defined by an observer at the position of the SDO spacecraft on 12 February 2011 15:32:42 UTC (i.e. the time of the HMI observation of NOAA 1158 shown in \autoref{fig:magnetogram}). Next, we use these transformed coordinates to compute a weighted two-dimensional histogram, using the integrand of \autoref{eq:intensity} at each coordinate as the weights. We construct the histogram such that the bin widths are consistent with the spatial resolution of the instrument. For AIA, a single bin, representing a single pixel, has a width of 0.6\arcsec-per-pixel. Finally, we apply a Gaussian filter to the resulting histogram to emulate the point spread function of the instrument. We do this for each time step, using a cadence of 10 s, and for each channel. For every heating scenario, this produces approximately $6(3\times10^4)/10\approx2\times10^4$ separate images.
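The projection step can be summarized with a short sketch (NumPy/SciPy; the field of view is an input, and the width of the Gaussian kernel is a placeholder rather than the calibrated AIA point spread function):

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def project_to_detector(Tx, Ty, weights, fov, scale=0.6, psf_sigma=1.0):
    # Tx, Ty:    helioprojective coordinates [arcsec] of every strand point
    # weights:   integrand of the intensity integral evaluated at each point
    # fov:       (Tx_min, Tx_max, Ty_min, Ty_max) [arcsec]
    # scale:     plate scale [arcsec per pixel], 0.6 for AIA
    nx = int(round((fov[1] - fov[0]) / scale))
    ny = int(round((fov[3] - fov[2]) / scale))
    counts, _, _ = np.histogram2d(
        Ty, Tx, bins=(ny, nx),
        range=((fov[2], fov[3]), (fov[0], fov[1])),
        weights=weights)
    # Smooth with a Gaussian kernel to emulate the point spread function
    return gaussian_filter(counts, sigma=psf_sigma)
\end{verbatim}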
\documentclass[10pt]{article} \usepackage{amsmath, amsfonts, amsthm, amssymb} \usepackage{mathtools} \usepackage{enumerate} \usepackage[a4paper, total={6in, 9in}]{geometry} \usepackage[x11names, rgb]{xcolor} \usepackage{tikz} \usepackage{graphicx} % \usepackage{times} % use other fonts if needed \graphicspath{ {./imgs/} } \setlength{\parindent}{0pt} \setlength{\parskip}{4pt plus 1pt} \pagestyle{empty} \def\indented#1{\list{}{}\item[]} \let\indented=\endlist % ----- Identifying Information ----------------------------------------------- \newcommand{\myclass}{15-213 Intro to Computer Systems} \newcommand{\myhwname}{Malloc Lab} \newcommand{\myname}{Andrew Carnegie} \newcommand{\myandrew}{[email protected]} % ----------------------------------------------------------------------------- \begin{document} \begin{center} {\Large \myclass{}} \\ {\large{\myhwname}} \\ \myname \\ \myandrew \\ \today \end{center} % ----- Main content begins ----------------------------------------------- \section{Background} \section{First section} \begin{enumerate} \setcounter{enumi}{0} \item (5 points) How many hosts configure their IP addresses using DHCP? What are the IP address(es) of the DHCP server(s)? \textbf{Answer: } \begin{center} \includegraphics[scale=0.3]{cmu.png}\\ \small{Fig 1.1 First report image} \end{center} The IP address of the DHCP server is \verb|192.168.0.1|. \end{enumerate} \subsection{Itemized section} \begin{itemize} \item { \verb|192.168.0.10-2.0.0.1|: the MAC addresses are from \verb|94:c6:91:a0:75:cf| to \verb|00:50:b6:e2:0f:bb|. } \item { \verb|3.0.0.2-2.0.0.1 #1|: the MAC addresses are from \verb|b8:27:eb:e2:66:9f| to \verb|00:50:b6:bc:87:08|. } \item { \verb|3.0.0.2-2.0.0.1 #2|: the MAC addresses are from \verb|b8:27:eb:c4:73:d4| to \verb|00:50:b6:e2:10:0c|. } \item { \verb|3.0.0.2-2.0.0.1 #3|: the MAC addresses are from \verb|b8:27:eb:c7:01:e6| to \verb|00:50:b6:bc:80:27|. } \item { \verb|3.0.0.2-2.0.0.1 #4|: the MAC addresses are from \verb|b8:27:eb:79:49:98| to \verb|b8:27:eb:40:51:78|. } \end{itemize} \subsection{Sub 2 \& subsubs} \subsubsection{Subsub 1} \subsubsection{Subsub 2} \begin{center} \includegraphics[scale=0.2]{cmu} \end{center} \subsubsection{Sub 3} - \textbf{foo}: bar\\ \section{Conclusion} % ----- Main content ends ----------------------------------------------- \end{document}
\declareIM{a}{1}{2021-04-23}{ComCam Image Capture and Archive}{CC Capture/Archive} \completeIM{\thisIM}{2021-05-01} Executive Summary: Run ComCam from notebooks, with generation and certification of ComCam calibrations. \textbf{Completion of exercise maps to \JIRA{SUMMIT}{2983}{COMP: ComCam re-Verification Complete}} \subsection{Goals of IM} \begin{itemize} \item Taking ComCam images in Chile using nublado \textbf{\JIRA{SUMMIT}{2979}{(and \JIRA{SUMMIT}{2980}{}) - ComCam CCS - OCS Command ICD Bench Testing}} \item Taking calibration and other images using the \gls{scriptQueue} \item Automatic ingestion into a gen3 butler in Chile \textbf{\JIRA{SUMMIT}{2982}{(and \JIRA{SUMMIT}{2869}{}) ComCam DAQ - DMS ICD Bench Testing}} \item Transfer over \gls{DBB Buffer manager} and the \gls{LHN} to a gen3 repo on the \gls{RSP} \item Human generation and availability of master calibrations in Chile \end{itemize} \subsection{Prerequisites} \begin{itemize} \item{ComCam on summit} \begin{itemize} \item cold and functional \item incoherent light source available to take flats \end{itemize} \item{gen3 butler ingestion for ComCam} \item{Nublado running in Chile} \end{itemize} \subsection{Procedure} The following procedure is to be executed by a general commissioning team member. The script creation and scriptQueue requires a minor amount of training and may require assistance. \begin{enumerate} \item Following a procedure, instantiate the OCS bridge \item Using Nublado, bring to enabled state using Notebook \item Using Nublado and the \href{https://ts-observatory-control.lsst.io/py-api/lsst.ts.observatory.control.maintel.ComCam.html}{ComCam class}: \begin{enumerate} \item Take a single OBJECT, BIAS, FLAT, DARK image \item For each image, monitor event for CCCamera completion, monitor \gls{OODS} event saying image is ready, use butler to grab image, display image locally using Firefly/DS9 or camera display tool \end{enumerate} \item In a notebook, create cells to take bias, dark, flat, and PTC calibration data \item Convert Nublado cells to \gls{scriptQueue}, creating a ``standard Calibration'' script and execute them \item From the Commissioning Cluster at the base: \begin{enumerate} \item Display one of each image type locally using Firefly/DS9 or camera display tool \item Run gen3 \gls{cp_pipe} by hand from a Nublado terminal \begin{enumerate} \item Create master biases, flats, darks using ``auto-certify'' mode which assumes that the derived products are good \footnote{This results in the calibrations being available for use} \item Copy images to summit and include in summit Butler repo. \end{enumerate} \end{enumerate} \item Take further exposures with structured illumination, preferably different from what was used to generate the flat. \item From the summit, run \gls{ISR} processing, display images and confirm new calibs are being applied \item Once data is synced to NCSA via the \gls{LHN}, Repeat the generation of calibration images using the \gls{RSP} \item Re-verify that \gls{RSP} generated calibration images can be used on the summit. 
\end{enumerate} \subsection{Status} \begin{description} \item[2021-02-08] Tested using AuxTel \begin{itemize} \item Images taken using nublado and \gls{scriptQueue} (no demonstration of calibrations with \gls{scriptQueue}) \item Automatic ingestion into gen3 butler in Chile (no demonstration of functional rerun/calibrations) \item Transfer to NCSA and ingestion in gen3 butler visible from \gls{RSP} \item Calibs (bias, dark) generated in Chile and used with gen2 butler (exposing a per-day lookup bug in gen2 butler) \end{itemize} \end{description}
% honours thesis, main .tex file % initial comments are from Rose Ahlefeldt \documentclass[onecolumn,12pt,a4paper,openany,oneside]{book} \usepackage{anuthesis} % Style file for thesis formatting \usepackage{fullpage} \usepackage{layouts} % Global formatting packages \usepackage[T1]{fontenc} % For fonts. Fixes some pdf bugs \usepackage{lmodern} % fixes some font scaling problem \usepackage[nottoc]{tocbibind} %Add bibliography/index/contents to Table of Contents \usepackage{appendix} % To label appendix chapters as "Appendix A, B, C..." instead of "Chapters 7, 8 ,9" \usepackage{fancyhdr} % To allow more control of headers/footers \usepackage{hyperref} % hyperlinks for \ref and \cite commands, super useful. % Maths packages \usepackage{amsmath,amssymb, amsfonts} % General maths % Some optional packages \usepackage{mathtools} % Builds on amsmath, more formatting options \usepackage{siunitx} %Offers consistent formatting of in-text numbers and units \usepackage{physics} % bunch of handy physics things, like bra-ket notation % Tables \usepackage{booktabs} % Nice table formatting \usepackage[table,xcdraw]{xcolor} % Figures \usepackage{graphicx} % Required for figures \graphicspath{{./figs/}} % Bibliography \usepackage[numbers]{natbib} %Offers more sophisticated bibliographic options % \bibliographystyle{unsrtnat} % bibliography in order of appearance, comment out to use own style (.bst) file % Markup %\usepackage{showkeys} %Turn this on to list citation keys, reference keys in text \usepackage[mathlines,pagewise]{lineno} % line numbers % \usepackage[usenames,dvipsnames]{xcolor} % \newcommand{\jam}{\textcolor{magenta}} \newcommand{\jam}[1]{} % \newcommand{\jam}[1]{\ignorespaces} \newcommand{\code}[1]{\texttt{#1}} \usepackage{comment} % If you never plan to print and bind your thesis as a book, you can make the margins even (setlength commands below) you will also need to add the option "openany" to your documentclass %\setlength\oddsidemargin{\dimexpr(\paperwidth-\textwidth)/2 - 1in\relax} %\setlength\evensidemargin{\oddsidemargin} \setlength{\parindent}{0pt} % Here are the parameters for the page layout. If you modify the template, do not decrease: the font size, the line spacing (1.5), the margin size.\\ % \printinunitsof{mm}{\pagevalues} % \verb|\marginparwidth|: \printinunitsof{mm}\prntlen{\marginparwidth} % \pagediagram \interfootnotelinepenalty=10000 \usepackage[bottom]{footmisc} % to make footnotes appear on same page, but will mess with floats \usepackage{etoolbox} \makeatletter \patchcmd\end@float{\@cons\@currlist\@currbox} {\@cons\@currlist\@currbox \global\holdinginserts\@ne} {}{} \apptocmd\@specialoutput{\global\holdinginserts\z@} \makeatother %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \begin{titlepage} \hypersetup{pageanchor=false} \title{\textbf{Improving future gravitational-wave detectors using nondegenerate internal~squeezing}\\[2cm]} % ``Advanced quantum-mechanical techniques for future gravitational-wave detectors'' is too similar to: https://link.springer.com/article/10.1007/s41114-019-0018-y ? \author{\textbf{James W. 
Gardner}\\[6cm] \textbf{A thesis submitted for partial fulfilment of the degree of}\\ \textbf{Bachelor of Philosophy (Honours) with Honours in Physics at} \\ \textbf{The Australian National University}\\[1cm]} \date{\textbf{October 2021}} \maketitle \end{titlepage} \pagenumbering{roman} \hypersetup{pageanchor=true} \newpage \sloppy \chapter*{Declaration} \addcontentsline{toc}{chapter}{Declaration} This thesis is an account of research undertaken between February 2021 and October 2021 at the Centre for Gravitational Astrophysics, Research School of Physics and Research School of Astronomy and Astrophysics, The Australian National University, Canberra, Australia. Except where acknowledged in the customary manner, the material presented in this thesis is, to the best of my knowledge, original and has not been submitted in whole or part for a degree at any other university. The research presented in this thesis was completed on the lands of the Ngunnawal and Ngambri people, the traditional owners of the land of The Australian National University's Canberra campus. I acknowledge that this land was stolen and sovereignty was never ceded, and I pay my respects to the elders past, present, and emerging. % surely present=emerging? \vspace{20mm} \hspace{80mm}\rule{40mm}{.15mm}\par \hspace{80mm} James W. Gardner\par \hspace{80mm} October 2021 % include statements effectively insert the contents of the named file. they always start a new page. \include{acknowledgements} \include{abstract} % \renewcommand{\baselinestretch}{1.}\normalsize \tableofcontents % \renewcommand{\baselinestretch}{1.5}\normalsize %\cleardoublepage % Make sure next section starts on a right hand page if using uneven margins %\linenumbers % This will add linenumbers, which are useful for people proofreading. % how to remove the "chapter 1" headings on the first page of a chapter: In your chapter itself, instead of "\chapter{First Chapter} use: \chapter*{First Chapter} \chaptermark{First Chapter} % then here, uncomment the below line (which must be before your include statement) %\addcontentsline{toc}{chapter}{First chapter} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % (background chapters) \include{introduction} % motivation: kHz gravitational wave detection could lead to new physics % the problem: cannot increase power but want to increase HF sensitivity without sacrificing LF sensitivity % aims % outline \include{background_theory} % Mizuno limit % cannot increase power % Squeezing to improve quantum noise--limited sensitivity % mechanism of squeezing % optical loss % external squeezing % remaining problem of how to further increase kHz sensitivity % should this background chapter be broken up? GWD's separately then theory of quantum noise? \include{existing_proposals} % literature review %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % (research chapters) \include{nIS_analytics} \include{science_case} \include{idler_readout} \include{conclusions} % return to aims % limitations % future work %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % add appendices only if they add to the story \begin{appendices} \include{appendixA} \end{appendices} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % \nocite{*} \bibliographystyle{myunsrt} \fontsize{12pt}{10pt} \selectfont \bibliography{thesis} \end{document}
{ "alphanum_fraction": 0.7516168771, "avg_line_length": 35.8784530387, "ext": "tex", "hexsha": "c6e25748045d66ef72e398ebbb787d2e51e3db15", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "570598b59ac8c70dee6387088698ed768a2d1247", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "daccordeon/nondegDog", "max_forks_repo_path": "thesis/thesis_main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "570598b59ac8c70dee6387088698ed768a2d1247", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "daccordeon/nondegDog", "max_issues_repo_path": "thesis/thesis_main.tex", "max_line_length": 358, "max_stars_count": 2, "max_stars_repo_head_hexsha": "570598b59ac8c70dee6387088698ed768a2d1247", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "daccordeon/nondegDog", "max_stars_repo_path": "thesis/thesis_main.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-24T23:42:29.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-22T05:25:14.000Z", "num_tokens": 1713, "size": 6494 }
\chapter{Abstract} This should be a 1-page (maximum) summary of your work in English.
% % Annual Cognitive Science Conference % Sample LaTeX Paper -- Proceedings Format % % Original : Ashwin Ram ([email protected]) 04/01/1994 % Modified : Johanna Moore ([email protected]) 03/17/1995 % Modified : David Noelle ([email protected]) 03/15/1996 % Modified : Pat Langley ([email protected]) 01/26/1997 % Latex2e corrections by Ramin Charles Nakisa 01/28/1997 % Modified : Tina Eliassi-Rad ([email protected]) 01/31/1998 % Modified : Trisha Yannuzzi ([email protected]) 12/28/1999 (in process) % Modified : Mary Ellen Foster ([email protected]) 12/11/2000 % Modified : Ken Forbus 01/23/2004 % Modified : Eli M. Silk ([email protected]) 05/24/2005 % Modified : Niels Taatgen ([email protected]) 10/24/2006 % Modified : David Noelle ([email protected]) 11/19/2014 %% Change ''letterpaper'' in the following line to ''a4paper'' if you must. \documentclass[10pt,letterpaper]{article} \usepackage{cogsci} \usepackage{pslatex} \usepackage{apacite} \usepackage{amsmath,amssymb} \usepackage{graphicx} \usepackage{color} \usepackage{url} \usepackage{todonotes} \usepackage{mathtools} \usepackage{stmaryrd} \usepackage{booktabs} \usepackage{array} \newcommand{\den}[2][]{ \( \left\llbracket\;\text{#2}\;\right\rrbracket^{#1} \) } %\newcommand{\url}[1]{$#1$} \definecolor{Blue}{RGB}{0,0,255} \newcommand{\jd}[1]{\textcolor{Blue}{[jd: #1]}} \definecolor{Red}{RGB}{255,0,0} \newcommand{\red}[1]{\textcolor{Red}{#1}} \definecolor{Green}{RGB}{10,200,100} \newcommand{\ndg}[1]{\textcolor{Green}{[ndg: #1]}} \definecolor{Red}{RGB}{255,0,0} \newcommand{\caroline}[1]{\textcolor{Red}{#1}} \newcommand{\denote}[1]{\mbox{ $[\![ #1 ]\!]$}} \newcommand{\subsubsubsection}[1]{{\em #1}} \newcommand{\eref}[1]{(\ref{#1})} \newcommand{\tableref}[1]{Table \ref{#1}} \newcommand{\figref}[1]{Fig.~\ref{#1}} \newcommand{\appref}[1]{Appendix \ref{#1}} \newcommand{\sectionref}[1]{Section \ref{#1}} \title{Animal, dog, or dalmatian? Level of abstraction in nominal referring expressions} %Animal, dog, or dalmatian? Contextual informativeness, utterance length and utterance frequency affect choice of referring expressions. \author{{\large \bf Caroline Graf, Judith Degen, Robert X.D. Hawkins, Noah D.~Goodman} \\ [email protected], \{jdegen,rxdh,ngoodman\}@stanford.edu\\ Department of Psychology, 450 Serra Mall \\ Stanford, CA 94305 USA} \begin{document} \maketitle \begin{abstract} Nominal reference is very flexible---the same object may be called \emph{a dalmatian}, \emph{a dog}, or \emph{an animal} when all are literally true. What accounts for the choices that speakers make in how they refer to objects? The addition of modifiers (e.g.~\emph{big dog}) has been extensively explored in the literature, but fewer studies have explored the choice of noun, including its level of abstraction. We collected freely produced referring expressions in a multi-player reference game experiment, where we manipulated the object's context. We find that utterance choice is affected by the contextual informativeness of a description, its length and frequency, and the typicality of the object for that description. Finally, we show how these factors naturally enter into a formal model of production within the Rational Speech-Acts framework, and that the resulting model predicts our quantitative production data. %\red{The choice of referring expressions is highly context dependent. 
Whether speakers choose to refer to an object as a ``dalmatian'', a ``dog'', or an ``animal'' is determined partly by the features of the other objects present in the scene, as well as features of the utterance alternatives. In the first step, an exploratory analysis within a reference game setting demonstrates that speakers' choice of referring expression is dependent on a rich interplay of expressions' degree of contextual informativeness, their relative length combined with their relative frequency, as well as on the degree to which the referent is typical of a specific expression. If a competitor object of the same category (e.g., dog) as the target object is present in the display, speakers choose a more specific subcategory (e.g., ``dalmatian'') to refer to the target object. In contrast, when the competitor objects do not share the same (super-) category as the target, speakers are less constrained in choosing an appropriate referring expression. In this event, shorter expressions are preferred over longer ones, even more so when frequency of the expression is high. Moreover, the better the referent fits to the meaning of a referential expression due to being more typical of it, the more this expression is preferred. In a second step, quantitative effects are analyzed by a production model couched in the Rational Speech Act framework, which predicts the pattern of results by modelling a speaker as balancing an utterance's contextual informativeness with its cost relative to its alternatives while taking into account soft semantics of the meanings of expressions due to typicality effects.} \textbf{Keywords:} referential expressions, levels of reference, basic level, experimental pragmatics, computational pragmatics \end{abstract} %\section{\bf Introduction} %Reference is ubiquitous in human communication. Unsurprisingly, a wealth of literature has been devoted to how speakers choose referring expressions from the many options available to them with the goal of distinguishing one object from its surroundings \red{cite cite cite deutsch pechmann sedivy gatt}. Referring to objects is a core function of human language, and a wealth of research has explored how speakers choose referring expressions \cite{herrmann1976, Pechmann1989, VanDeemter2012}. However, most of this literature has focused on the addition of modifiers \cite<as in the choice between ``the dog'', ``the brown dog'', and ``the big brown dog'', e.g.,>{sedivy2003a, Koolen2011}. Here we investigate how speakers choose a simple nominal referring expression---what governs the choice of calling a particular object ``the dalmatian'', ``the dog'', or ``the animal'' when all are literally true? That is, what governs the choice of the taxonomic level at which an object is referred to? Noun choice can be seen as the most basic decision in forming a referring expression. Like modification, these choices differ in their specificity; unlike modification, the number of words used does not differ---in English, \emph{some} noun must be chosen. In this paper we provide experimental evidence from a coordination game regarding the flexible choice of nominal referring expressions and explain this data with a probabilistic model of pragmatic production. %they differs in that %Given the taxonomic relations that hold between dalmatians, dogs, and animals, this choice shares with the choice between modified expressions that the three expressions differ in how much information about the intended referent is provided. 
Yet it differs in that the choice is not in adding additional adjectives, but in choosing between different nouns in the first place. The question is what governs this choice. Previous evidence about the generation of referring expressions suggests that choice of reference level will depend on the interplay of several factors. Grice's Maxim of Quantity \cite{grice1975} implies a pressure for speakers to be sufficiently \emph{informative}. For instance, a speaker who is trying to distinguish a dalmatian from a German Shepherd would be expected to avoid the insufficiently specific term ``dog'' \cite{brennan1996}. On the other hand, recent work in experimental pragmatics has shown that the choice of referring expression depends on the \emph{cost} of utterance alternatives \cite{rohde2012, degenfrankejaeger2013}; sometimes, speakers are willing to produce a cheap ambiguous utterance rather than a costly (e.g.~long or difficult-to-retrieve) unambiguous one. %That is, there is some evidence that speakers trade off an utterance's contextual informativeness and cost in systematic ways. Finally, classic work on concepts suggests that \emph{typicality} of a referent within its category affects the choice of reference \cite{RoschEtAl76_BasicLevel}. In particular, speakers will generally choose to refer at the \emph{basic level} (e.g. ``dog''), but may become more specific for objects that are atypical for the basic level term. %\todo[inline]{rdh: If we're going to touch on it at the end, I think we should say something here about how we're going to try to account for basic-level biases through these other factors, rather than building it in -- like ``how can we push around preferences for the basic level by manipulating context and cost?''} %\caroline{I agree! I moved some things around and added a sentence on how we aimed to account for the bl bias by these factors } %Finally, classic work on concepts has shown that objects within a category vary in \emph{typicality} \caroline{(cite Rosch's prototype theory from 1973?)}. %Speakers do not always use basic level terms to refer. % %In certain cases, the basic level does not suffice for unique reference, e.g., when the speaker is trying to distinguish a dalmatian from a German Shepherd. In this case, the pressure to be sufficiently informative \cite<as in the Maxim of Quantity,>{grice1975} overrides the preference for the basic level and speakers typically choose a subcategory level term \cite{brennan1996} \red{cite others before them}. %%In other cases, as a subcategory convention forms, speakers continue to refer at that level, despite not being necessary for unique reference \cite{brennan1996}. % %Informativeness and a preference for the basic level do not fully explain the variability in speakers' reference choices. Recent work in experimental pragmatics has shown that the choice of referring expression depends on its cost relative to alternative utterances \cite{rohde2012, degenfrankejaeger2013}; sometimes, speakers are willing to produce a cheap ambiguous utterance rather than a costly (e.g., long or difficult-to-retrieve) unambiguous one. That is, there is some evidence that speakers trade off an utterance's contextual informativeness and cost in systematic ways. %\ndg{come back to the question of basic level in discussion, rather than intro?} %%There is a well-documented preference for use of basic level terms (e.g., ``dog'' \red{cite}). 
%In a series of classic experiments on the structure of concepts, Rosch and colleagues established that there is a maximally informative level of abstraction in category taxonomies. Dogs have a large number of features in common---four legs, wagging tail, loyal companion----which differentiate them from other animals, but there are fewer features that distinguish dalmatians from German Shepherds. She called this level of abstraction (e.g.~``dog'') the \emph{basic level} \cite{RoschEtAl76_BasicLevel}. Importantly for us, participants in a free-response naming task were much more likely to use the basic level than the superordinate or subordinate level, even though they knew all three terms. % %Finally, the picture is further complicated by \red{typicality effects that have also gone under the label of salience but nobody knows whether those are speaker-internal effects or audience design effects. refs in brennan \& clark, mitchell et al 2013, westerbeek et al 2015.} That is, speakers may sometimes add modifiers to a referring expression simply because the property denoted by the modifier is atypical of the referent, and hence, surprising. Conversely, speakers might sometimes use subcategory instead of basic level terms to refer to an object, if the object is not a very typical instance of the basic level. For example, a panda bear might be a less typical bear than a grizzly bear, so speakers might prefer to refer to the panda as ``panda'' but to the grizzly as ``bear''. %\jd{i don't know how much to foreshadow here about the utility of `overinformative' sub level use, because i don't know how the typicality model does; but this is the place to foreshadow that stuff, eg by reusing some of caroline and robert's prose commented out right after this.} %Including more attributes than are necessary to identify a target object may at first seem to contradict the Maxim of Quantity by Grice (1975), namely to make one's contribution as informative than is required and not more, but what does it actually mean for an utterance to be more informative than *required*? Does it mean that this utterance mentions attributes redundantly irrespective of their utility? What if the so called ``redundant'' attributes (e.g. Arts, 2004; Arts, Maes, Jansen, \& Noordman, 2011) actually have an utility after all by influencing the processing of an utterance? Pechmann (1985) has argued that the excessive use of color adjectives in referential expressions does in fact contribute to the informativeness of the utterance, namely by excluding some (but not all) distractors. As a matter of fact, other studies have demonstrated that overspecified utterances give rise to shorter identification times compared to minimally specified expressions (Deutsch, 1976; Sonnenschein, 1982). Thus, ``overinformative'' behavior must not necessarily be irrational behavior. %A similar argument can be made for the the use of a level of reference less abstract than necessary for disambiguation and the dominance for the the basic level in the production of referential expressions. The basic level of reference, which Rosch et. al (1976) characterize as the level of abstraction at which the most basic category cuts are made, seems to have a special status in human categorization behavior: Basic level categoriy labels are not only the first labels used for categorization during perception of the environment, but also the labels which lead to fastest category verification. 
Basic levels also carry the most information and possess the highest category cue validity (compared with the sublevel ``dalmatian'' or superlevel ``animal''). Therefore it can be argued that being more specific than necessary has a positive effect on the processing of the utterance for the listener and thus ``overinformativeness'' contributes to the informativeness of the utterance. %It is clear that the choice of level of reference depends on a rich interplay between at least the factors discussed in the previous paragraphs: the contextual informativeness of referring at that level, the utterance's cost (in terms of length and frequency), and the typicality of the referent's properties. What is unknown is how these different factors trade off, and indeed, how a speaker who tries to maximize the communicative efficiency of their utterances, \emph{should} trade off these different factors. To evaluate the impact of these factors on nominal reference we constructed a two-player online game (\figref{fig:procedure}). Participants saw a shared context of objects, one of which was indicated as the referent only to the speaker. The speaker was asked to communicate this object to the listener, who then chose among the objects. Critically, the speaker and listener communicated by free use of a chat window, allowing us to gather relatively natural referring expressions. We manipulated the category of distractor objects and used items that varied in utterance complexity and object typicality. This allowed us to evaluate whether each factor influences the referring expressions generated by participants. We expect that speakers will (1) tend to avoid longer or less frequent terms, and (2) will pragmatically prefer more specific referring expressions when the target and distractor(s) belong to the same higher-level taxonomic category or when distractors are more typical members of that category level. \begin{figure}[tb] \centering \includegraphics[width=.5\textwidth]{graphs/procedure} \caption{Screenshots from speakers' and listeners' points of view, showing role names and short task descriptions, the chatbox used for communication and a display of three pictures of objects. The referent was identified to the speaker by a green box.} \label{fig:procedure} \end{figure} %the choice of level of reference depends on a rich interplay between at least the factors discussed in the previous paragraphs: the contextual informativeness of referring at that level, the utterance's cost (in terms of length and frequency), and the typicality of the referent's properties. What is unknown is how these different factors trade off, and indeed, how a speaker who tries to maximize the communicative efficiency of their utterances, \emph{should} trade off these different factors. A promising modeling approach for capturing the quantitative details of human language use is the Rational Speech-Acts (RSA) framework \cite{frank2012, goodmanstuhlmueller2013}. The RSA framework has been applied to many language interpretation tasks \cite<e.g.>{goodmanstuhlmueller2013,kao2014}, but relatively rarely to production data \cite<but see>{franke2014, Orita2015}. We describe an RSA model of nominal reference that includes informativeness, cost, and typicality effects. A speaker in RSA is treated as an approximately optimal decision maker who chooses which utterance to use to communicate to a listener. 
The speaker has a utility which includes terms for the cost of producing an utterance (in terms of length or frequency) and the informativeness of the utterance for a listener. The listener is treated as a literal Bayesian interpreter who updates her beliefs given the truth of the utterance. These truth values are usually treated as deterministic (an object either is a ``dog'' or it is not); here we relax this formulation in order to incorporate typicality effects. That is, we elicit typicality ratings in a separate experiment, and model the listener as updating her beliefs by weighting the possible referents according to how typical each is for the description used. We evaluate the quantitative model predictions against our production data. The model also allows us to evaluate the need for each extra component---typicality, length, frequency---and determine whether the empirical bias toward reference at the basic level \cite{RoschEtAl76_BasicLevel} can be accounted for without building it in as a separate factor. % %\ndg{put somewhere} %Moreover, previous research examining the choice of level of reference in nominal expressions found there exists a privileged level of abstraction called the \emph{basic-level} towards which speakers tend to gravitate in free-response naming tasks \cite{RoschEtAl76_BasicLevel}. We aimed to find out to what extent this bias can be accounted for by the three above mentioned factors---contextual informativeness, utterance cost, and typicality. \section{Experiment: nominal reference game} %This experiment investigates the effect of informativeness, cost, and typicality on the choice of nominal referring expression by varying the visual context of a target object. %In our reference game, the participants' visual context consists of a set of two distractor objects. We vary the category level of the distractor objects (same basic level as the target, same superordinate level, different superordinate level) as well as the cost of the subordinate level term compared to the basic level term. %In order to analyze the effect of cost we collect corpus frequencies and length data from participants' utterances; to incorporate typicality we norm the items with a separate group of participants. % We take both an utterance's length and its frequency to contribute to its overall cost. We also include the typicality norms collected in the previous experiment as predictors of utterance choice. \subsection{Methods} %\ndg{i think we want procedure and design figure(s). the procedure fig should show a screen shot from speaker and listener points of view. the design fig should illustrate the dist12, etc notation used later with a concrete domain.} \paragraph{Participants and materials} We recruited 56 self-reported native speakers of English over Mechanical Turk. Participants completed the experiment in pairs of two, yielding 28 speaker-listener pairs. %\paragraph{\bf Materials} Stimuli were selected from nine distinct domains, each corresponding to distinct basic level categories such as ``dog.'' For each domain, we selected four subcategories to form our target set (e.g. ``dalmatian'', ``pug'', ``German Shepherd'' and ``husky''). Each domain also contained an additional item which belonged to the same basic level category as the target (e.g. ``greyhound'') and items which belonged to the same supercategory but not the same basic level (e.g. ``elephant'' or ``squirrel''). The latter items were used as distractors. 
%\paragraph{\bf Design} Each trial consisted of a display of three images, one of which was designated as the target object. Every pair of participants saw every target exactly once, for a total of 36 trials per pair. These target items were randomly assigned distractor items which were selected from four different context conditions, corresponding to different communicative pressures (see Fig. \ref{fig:design}). We refer to these conditions with pairs of numerals specifying which levels of the taxonomy are present in the distractors: (a) \textbf{item12}: one distractor of the same basic level and one distractor of the same superlevel (e.g. target: ``dalmatian'', distractor 1: ``greyhound'', distractor 2: ``squirrel''), (b) \textbf{item22}: two distractors of the same superlevel, (c) \textbf{item23}: one distractor of the same superlevel and one unrelated item and (d) \textbf{item33}: two unrelated items. % \todo[inline]{rdh: would be nice to use the TeX \emph{description} environment to format this if we have room} Each pair saw nine trials in each condition. %\todo[inline]{rdh: maybe it should already be clear from this, but how did we assign contexts to the 36 items? Totally randomly (with this 9 trials/condition constraint)? Or did the four targets within a domain get a one-to-one mapping with the four conditions?} (caroline: Yes it was totally random, except for 9 trials/condition constraint and every one of the 36 targets used once.) Furthermore, the experiment contained 36 filler items, in which participants were asked to produce referential expressions for objects which differed only in size and color. Images from filler trials were not reused on target trials. Trial order was randomized. \begin{figure}[bt!] \centering \includegraphics[width=.5\textwidth]{graphs/design} \caption{The four context conditions, exemplified by the \textit{dog} domain. The target is outlined in green; the types of distractors differ with condition (see text). %: in the \textbf{item12} condition, a distractor of the same basic level as the target (i.e. a distractor class 1) and a distractor of the same super level as the target (i.e. a distractor class 2) is presented. In \textbf{item22}, both distractors are class 2 distractors. In \textbf{item23}, one distractor is a class 2 distractor and the other is an artifact which neither shares the basic level nor super level with the target (i.e. a distractor class 3). Finally, there are two class 3 distractors in \textbf{item33}. %\ndg{we call these item12 etc elsewhere. should adjust.} } \label{fig:design} \end{figure} \paragraph{Procedure} Pairs of participants were connected through a real-time multi-player interface \cite{Hawkins15_RealTimeWebExperiments}, with one member of each pair assigned the speaker role and the other to the listener role. Participants kept their allotted roles for the entire experiment. The setup for both the speaker and the listener is shown in \figref{fig:procedure}. Each saw the same set of three images, but positions were randomized to rule out trivial position-based references like ``the middle one.'' The target object was identified by a green square surrounding it for the speaker (but not listener). Players used a chatbox to send text messages to each other. The task was for the speaker to get the listener to select the target object. %, and for the listener to select the right object based on the information provided by the speaker. 
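For concreteness, the assignment of targets to context conditions described under Design above can be sketched as follows. This is an editorial reconstruction of the stated constraints (every one of the 36 targets shown once per pair, nine trials per condition, random otherwise), not the original experiment code; the function and variable names are ours, and distractor sampling within each condition is omitted.
\begin{verbatim}
# Illustrative reconstruction of the trial-list constraint
# (not the original experiment code).
import random

CONDITIONS = ["item12", "item22", "item23", "item33"]

def make_trial_list(targets, seed=0):
    # targets: the 36 (domain, sub-level label) pairs
    rng = random.Random(seed)
    conditions = CONDITIONS * 9        # nine trials per condition
    rng.shuffle(conditions)            # random pairing with targets
    trials = [{"target": t, "condition": c}
              for t, c in zip(targets, conditions)]
    rng.shuffle(trials)                # randomize trial order
    return trials
\end{verbatim}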
\paragraph{Annotation} To determine the level of reference for each trial, we followed the following procedure. First, trials on which the listener selected the wrong referent were excluded, leading to the elimination of 1.2\% of trials. Then, speakers' and listeners' messages were parsed automatically; the referential expression used by the speaker was extracted for each trial and checked for whether it contained the current target's correct sub, basic or super level term using a simple grep search. In this way, 66.2\% of trials were labelled as mentioning a pre-coded level of reference. In the next step, remaining utterances were checked manually to determine whether they contained a correct level of reference term which was not detected by the parsing algorithm due to typos or grammatical modification of the expression. In this way, meaning-equivalent alternatives such as ``doggie'' for ``dog'', or contractions such as ``gummi'',``gummies'' and ``bears'' for ``gummy bears'' were counted as containing a level of reference term. This caught another 13.8\% of trials. A total of 20.0\% of correct trials were excluded because the utterance consisted only of an \emph{attribute} of the superclass (``the living thing'' for ``animal''), of the basic level (``can fly'' for ``bird''), of the subcategory (``barks'' for ``dog'') or of the particular instance (``the thing facing left'') rather than a category noun. These kinds of attributes were also sometimes mentioned in addition to the noun in the trials which were included in the analysis---4.0\% of sub level terms, 12.6\% of basic level terms, and 46.2\% of super level terms contained an additional modifier. %By making use of attributes, speakers could generate unambiguous referential expressions despite using a level of reference which would by itself be insufficient for disambiguation, e.g. by referring to a dalmatian as a ``spotted dog'' (in the context of another dog being present, for instance a pug which is not spotted). On 0.5\% of trials two different levels of reference were mentioned; in this case the more specific level of reference was counted as being mentioned in this trial. %Additional processing also established the number of correct trials where a determiner (``the'' or ``a''/``an'') or an indefinite referent such as ``one'', ``thing'' or ``object'' was used, how many correct trials consisted of complete sentences and the number of correct trials where level of reference terms were contracted (namely 2.7\%, 0.9\%, 1.0\% and 5.3\% respectively). \jd{do we need the last bit of info, since we don't actually go on to do anything with it?} %\todo[inline]{rdh: i don't think these numbers add up to 100... 1.2 were incorrect and excluded, (66.2 + 13.8) were correct and included, 10\% of the total correct ones were excluded $= 91.2$ at most. Or maybe I'm counting wrong, in which case we should maybe find a less confusing way of expressing these probabilities?} \caroline{Sorry! very embarassing typo. Of all trials, 1.2\% were incorrect and thus excluded. Of the remaining *correct* trials, 66.2\% were labelled in the first automatic parse, then another 13.8\% were added manually, so 80.0\% of correct trials were labelled as mentioning a level of reference. The remaining 20.0\% of correct trials mentioned an attribute and were thus excluded (not 10.0\%!!) (resolved)} \paragraph{Typicality norms} To examine the influence of typicality on speaker behavior, we obtained typicality estimates in a separate norming study. 
240 participants were recruited through Mechanical Turk. On each trial, we presented participants with an image from the main experiment and asked them ``How typical is this for X?'', where X was a category label at the sub-, basic-, or super- level. They then adjusted a slider bar ranging from \emph{not at all typical} to \emph{very typical}. Due to the large number of possible combinations of objects, we only collected norms for certain combinations of objects and descriptions: for each target (e.g., dalmatian), we collected typicality at all three levels (``dalmatian,'' ``dog,'' and ``animal''). For each distractor of the same superclass as the target (\emph{distsamesuper}, e.g., a kitten), we collected typicality at all three levels of the \emph{target}. For each distractor of a different superclass (\emph{distdiffsuper}, e.g., a basketball) we only collected typicality at the super- level of the target (``animal'') and assumed lowest typicality at the other levels. This resulted in the following distribution of 745 norms: \emph{target-sub} (36), \emph{target-basic} (36), \emph{target-super} (36), \emph{distdiffsuper-super} (168), \emph{distsamesuper-sub} (331), \emph{distsamesuper-basic} (93), and \emph{distsamesuper-super} (45). Each participant provided typicality ratings for 7 \emph{target}, 10 \emph{distdiffsuper}, and 28 \emph{distsamesuper} cases (randomly sampled from the total set of items). Each case received between 6 and 27 ratings. Raw slider values ranged from 0 (not typical) to 1 (very typical); average slider values were used as the typicality values throughout our results. \subsection{\bf Results} Proportions of sub, basic, and super level utterance choices in the different context conditions are shown in the top row of \figref{fig:qualitativemodel}. The sub level term was preferred where it was necessary for unambiguous referent identification, i.e., when a distractor of the same basic level category as the target was present in the scene (item12, e.g. target: dalmatian, distractor: greyhound). Where it was not necessary (i.e., when there was no other object of the same basic level category present, as in conditions item22, item23 and item33), there was a clear preference for the basic level term. The super level term was strongly dispreferred overall, though it was used on some trials, especially where informativeness constraints on utterance choice were weakest (item33). % %\begin{figure}[ht!] %\centering %\includegraphics[width=.5\textwidth]{graphs/results-collapsed} %\caption{Proportion of utterance choice by condition. Error bars indicate bootstrapped 95\% confidence intervals.} %\label{fig:results1} %\end{figure} To test for the independent effects of informativeness, length, frequency, and typicality on sub-level mention, we conducted a mixed effects logistic regression. Frequency was coded as the difference between the sub and the basic level's log frequency, as extracted from the Google Books Ngram English corpus ranging from 1960 to 2008. Length was coded as the ratio of the sub to the basic level's length.\footnote{We used the mean empirical lengths in characters of the utterances participants produced. For example, the minivan, when referred to at the subcategory level, was sometimes called ``minivan'' and sometimes ``van'' leading to a mean empirical length of 5.64. 
This is the value that was used, rather than 7, the length of ``minivan''.} That is, a higher frequency difference indicates a \emph{lower} cost for the sub level term compared to the basic level, while a higher length ratio reflects a \emph{higher} cost for the sub level term compared to the basic level.\footnote{We replicate the well-documented negative correlation between length and log frequency ($r = -.53$ in our dataset).} Typicality was coded as the ratio of the target's sub to basic level label typicality. That is, the higher the ratio, the more typical the object was for the sub level label compared to the basic level. %; or in other words, a higher ratio indicated that the object was relatively atypical for the basic label compared to the sub label. For instance, the panda was relatively atypical for its basic level ``bear'' (mean rating 0.75) compared to the sub level term ``panda bear'' (mean rating 0.98), which resulted in a relatively \emph{high} typicality ratio. \begin{figure}[bt] \centering %\includegraphics[width=.5\textwidth]{graphs/collapsed-pattern} \includegraphics[width=.5\textwidth]{graphs/qualitativepattern} \caption{Empirical utterance probabilities (top row) and model posterior predictive MAP estimates (bottom row) by condition, collapsed across targets and domains. Error bars indicate bootstrapped 95\% confidence intervals.} \label{fig:qualitativemodel} \end{figure} Condition was coded as a three-level factor: \emph{sub necessary}, \emph{basic sufficient}, and \emph{super sufficient}, where item22 and item23 were collapsed into \emph{basic sufficient}. Condition was Helmert-coded: two contrasts over the three condition levels were included in the model, comparing each level against the mean of the remaining levels (in order: \emph{sub necessary}, \emph{basic sufficient}, \emph{super sufficient}). This allowed us to determine whether the probability of type mention for neighboring conditions were significantly different from each other, as suggested by \figref{fig:qualitativemodel}.\footnote{Adding terms that code the ratio of the sub vs super level frequency and length did not lead to an improvement of model fit.} The model included random by-speaker and by-domain intercepts. A summary of results is shown in \tableref{tab:modelresults}. The log odds of mentioning the sub level term was greater in the \emph{sub necessary} condition than in either of the other two conditions, and greater in the \emph{basic sufficient} condition than in the \emph{super sufficient} condition, suggesting that the contextual informativeness of the sub level mention has a gradient effect on utterance choice.\footnote{Importantly, model comparison between the reported model and one that subsumes basic and super under the same factor level revealed that the three-level condition variable is justified ($\chi ^2 (1) = 5.7$, $p < .05$), suggesting that participants don't simply revert to the basic level unless contextually forced not to.} There was also a main effect of typicality, such that the sub level term was preferred for objects that were more typical for the sub level compared to the basic level description (\figref{fig:lengthtypicality}). In addition, there was a main effect of length, such that as the length of the sub level term increased compared to the basic level term (``chihuahua''/``dog'' vs.~``pug''/``dog''), the sub level term was dispreferred (``chihuahua'' is dispreferred compared to ``pug'', \figref{fig:lengthtypicality}). 
Finally, while there was no main effect of frequency, we observed a significant length by frequency interaction, such that there was a frequency effect for the relatively shorter but not the relatively longer sub level cases: for shorter sub level terms, relatively high-frequency sub level terms were more likely to be used than relatively low-frequency sub level terms. \begin{table}[tbp] \caption{Mixed effects model summary.} \begin{center} \begin{tabular}{lrrl} \toprule \multicolumn{1}{l}{}&\multicolumn{1}{c}{Coef $\beta$}&\multicolumn{1}{c}{SE($\beta$)}&\multicolumn{1}{c}{$p$}\tabularnewline \midrule Intercept&$-0.30$&$0.35$&\textgreater0.4\tabularnewline Condition sub.vs.rest&$ 2.46$&$0.24$&\textbf{\textless.0001}\tabularnewline Condition basic.vs.super&$ 0.52$&$0.23$&\textbf{\textless.05}\tabularnewline Length&$-0.52$&$0.14$&\textbf{\textless.001}\tabularnewline Frequency&$-0.02$&$0.08$&\textgreater0.78\tabularnewline Typicality&$ 4.17$&$0.84$&\textbf{\textless.0001}\tabularnewline Length:Frequency&$-0.30$&$0.11$&\textbf{\textless.01}\tabularnewline \bottomrule \end{tabular}\end{center} \label{tab:modelresults} \end{table} %\begin{figure}[ht!] %\centering %%\includegraphics[width=.5\textwidth]{graphs/lengthRatio} %\includegraphics[width=.5\textwidth]{graphs/length-effect} %\caption{Probability of using sub, basic and super level terms when the sub length is relatively short (.67,2] or long [2,4.67) compared to the basic level term length.} % \label{fig:lengtheffect} %\end{figure} % %\begin{figure}[ht!] %\centering %\includegraphics[width=.5\textwidth]{graphs/freq-length-interaction} %\caption{Proportion of sub level mentions as a function of the sub level term's relative length and frequency compared to the basic level. Length bins reflect the sub/basic length ratio intervals (0, 1] (short), (1, 2] (mid), (2, 4.67] (long). Frequency bins reflect the sub/basic log frequency difference intervals (-11.2,-5.19] (low), (-5.19,-0.65] (high).} %\label{fig:lengthfreqinteraction} %\end{figure} Unsurprisingly, there was also significant by-participant and by-domain variation in the log odds of sub level term mention. %\figref{fig:bigscatterplot} shows the by-domain variation in utterance choice. For instance, mentioning the subclass over the basic level term was preferred more in some domains (e.g. in the ``candy'' domain) than in others. Likewise, some domains had a greater preference for basic level terms (e.g. the ``shirt'' domain). Using the superclass term also ranged from hardly being observable (e.g. the ``flower'' domain) to being used more frequently (e.g. in the ``bird'' domain). Nevertheless, mentioning the sub level term was always the most frequent choice where a distractor of the same basic level was displayed. Furthermore, it was the case in all domains that the sub level term was mentioned most frequently and the basic level least frequently in just this condition, compared to the other three conditions. %These results suggest that the choice of level of reference depends in a gradient manner on both the informativeness of the reference level as well as on the fit of the object to a specific label due to typicality and the cost of the corresponding utterance (in terms of length combined with frequency) compared to the alternative utterances. 
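To make the cost and typicality predictors used in this analysis explicit, the following sketch computes them for one item. The lookup tables (corpus log frequencies, mean empirical lengths in characters, mean typicality ratings) are assumed to have been compiled as described above; the names are ours, not those of the original analysis scripts.
\begin{verbatim}
# Sketch of the item-level predictors for the regression in Table 1.
def predictors(sub, basic, target, log_freq, mean_len, typicality):
    return {
        # higher = sub level term relatively more frequent (cheaper)
        "freq_diff": log_freq[sub] - log_freq[basic],
        # higher = sub level term relatively longer (more costly)
        "length_ratio": mean_len[sub] / mean_len[basic],
        # higher = target relatively more typical of the sub level label
        "typ_ratio": typicality[(target, sub)]
                     / typicality[(target, basic)],
    }
\end{verbatim}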
[caroline: I moved this to the conclusion section] \begin{figure}[bt] \centering %\includegraphics[width=.5\textwidth]{graphs/lengthRatio} %\includegraphics[width=.5\textwidth]{graphs/typicality-effect} \includegraphics[width=.5\textwidth]{graphs/length-typicality} \caption{Probability of using sub, basic and super level terms. Left: when the sub length is relatively short (.67,2] or long [2,4.67) compared to the basic level term length. Right: when the target object was relatively more [1.06,1.91) or less (.88,1.06] typical for the sub compared to the basic level term.} \label{fig:lengthtypicality} \end{figure} \section{\bf Modeling level of reference} %To show that we can account for these context effects purely through communicative pressures, we formulated a simple probabilistic model of basic level reference. We formulated a probabilistic model of reference level selection that integrates contextual informativeness, utterance cost, and typicality. As in earlier Rational Speech-Acts (RSA) models \cite{frank2012, goodmanstuhlmueller2013}, the speaker seeks to be informative with respect to an internal model of a literal listener. This listener updates her beliefs to rule out possible worlds that are inconsistent with the meaning of the speaker's utterance. Rather than assuming that words have deterministic truth conditions, as has usually been done in the past, we account for typicality by allowing each label a graded meaning. For instance, the word ``dog'' describes a dalmatian better than a grizzly bear, but it also describes a grizzly bear better than a tennis ball. The speaker also seeks to be parsimonious: the speaker utility includes both informativeness and word cost; cost includes both length and frequency. Formally, we start by specifying a literal listener $L_0$ who hears a word $l$ at a particular level of reference in the context of some set of objects $\mathcal{O}$ and forms a distribution over the referenced object, $o \in \mathcal{O}$ : $$P_{L_0}(o | l) \propto \denote{l}(o).$$ Here $\denote{l}(o)$ is the lexical meaning of the word $l$ when applied to object $o$. We take this to be a real number indicating the degree of acceptability of object $o$ for category $l$. We relate this to our empirically elicited typicality norms via an exponential relationship: $\denote{l}(o)=\exp(\text{typicality}(o,l))$.\footnote{Cases where typicality was not elicited were assumed to have typicality $0$.} This relationship is motivated by considering the effect of a small difference in typicality on choice probability: in our elicitation experiment a small difference in rating should mean the same thing at the top and bottom of the scale (it is visually equivalent on the slider that participants used). In order for a small difference in typicality rating to have a constant effect on relative choice probability (which is a ratio), the relationship must be exponential. %The parameter $\gamma$ controls the dynamic range of typicality---how good (bad) is it to choose an object which is very (a)typical of the label heard? Next, we specify a speaker $S_1$ who intends to refer to a particular object $o \in \mathcal{O}$ and chooses among possible nouns $l \in {\mathcal L}(o)$. We take ${\mathcal L}(o)$ to be the three labels for $o$ at sub, basic, and super level. 
The speaker chooses among these nouns in a way that is influenced by informativeness of the noun for the literal listener ($\ln P_{L_0}(o | l)$), the frequency ($\hat{c}_f$) and the length ($\hat{c}_l$), each weighted by a free parameter: $$P_{S_1}(l | o) \propto \exp(\lambda \ln P_{L_0}(o | l) + \beta_f \hat{c}_f + \beta_l \hat{c}_l)$$ Length cost $\hat{c}_l$ was defined as the empirical mean number of characters used to refer at that level and frequency cost $\hat{c}_f$ was the log frequency in the Google Books corpus from 1960 to the present. We performed Bayesian data analysis to generate model predictions, conditioning on the observed production data (coded into sub, basic, and super labels as described above) and integrating over the three free parameters. We assumed uniform priors for each parameter: $\lambda \sim Unif(0,20)$, $\beta_f \sim Unif(0,5)$, $\beta_l \sim Unif(0,5)$. We implemented both the cognitive and data-analysis models in the probabilistic programming language WebPPL \cite{GoodmanStuhlmuller14_DIPPL}. Inference for the cognitive model was exact, while we used Markov Chain Monte Carlo (MCMC) to infer posteriors for the three free parameters. % using Markov Chain Monte Carlo (MCMC), conditioning on the production data from the experiment reported above. More specifically, for every iteration of the chain, we generated the probability of a speaker using each level of reference (i.e .``sub'', ``basic'', or ``super'') for every \emph{context} that participants could encounter in the task, and then computed the likelihood of the actual expressions that participants used. \begin{figure}[t!] \centering \includegraphics[width=.5\textwidth]{graphs/scatterplot} \caption{Mean empirical production data for each level of reference against the MAP of the model posterior predictive at the by-target level.} \label{fig:scatterplot} \end{figure} Point-wise maximum a posteriori (MAP) estimates of the model's posterior predictives at the target level (collapsing across distractors for each target, within each condition) are compared to empirical data in Fig. \ref{fig:scatterplot}. On the by-target level the model achieves a correlation of $r = .79$. Looking at results on the by-domain level (collapsing across targets) and on the by-condition level (further collapsing across domains, as in \figref{fig:qualitativemodel}) yields correlations of .88 and .96, respectively. The model does a good job of capturing the quantitative patterns in the data, especially considering the sparsity of our data at the by-target level. One clear flaw is that the model predicts greater use of the super level label than people exhibit. Further systematic deviation appears likely for specific items. On examination, candy items like ``gummy bears'' or ``jelly beans'' were particularly problematic, being referred to primarily by their sub level term in all contexts. \begin{figure} \includegraphics[width=.49\textwidth]{graphs/parameterposteriors.pdf} \caption{Posterior distribution over model parameters. Maximum a posteriori (MAP) $\lambda = 10.8$, 95\% highest density interval (HDI) $= [9.7, 12.8]$; MAP $\beta_l = 2.5$, HDI $= [1.9, 3.1]$; MAP $\beta_f = 1.3$, HDI $= [0.8, 1.8]$.} \label{fig:paramposteriors} \end{figure} Parameter posteriors are presented in Fig. \ref{fig:paramposteriors}. Informativeness is weighted relatively strongly, while length is weighted somewhat more strongly than frequency. 
Note that the 95\% highest density intervals (HDIs) for all three weight parameters exclude zero, indicating that some contribution of each is useful in explaining the data. In order to ascertain whether typicality was indeed contributing to the explanatory power of the model, we ran an additional Bayesian data analysis with an added typicality weight parameter $\beta_t \in [0,1]$. This parameter interpolated between empirical typicality values (when $\beta_t {=} 1$) and deterministic (i.e. $0$ or $1$) \emph{a priori} values based on the true taxonomy (when $\beta_t {=} 0$). %truth-conditional function returning one if the object was a member of the label category and zero otherwise. %The posterior distribution of this parameter allows for Bayesian model selection: if $\beta_t = 0$ is excluded from the HDI, and the distribution skews high, then some influence of typicality is necessary to account for the data. We found a MAP estimate for $\beta_t$ of $.94$, HDI $= [0.88,1]$, strongly indicating that it is useful to incorporate empirical typicality values. Finally, we ran a model including a parameter weighting the \emph{product} of frequency and cost, corresponding to the interaction term in our regression analysis. Its posterior distribution was strongly peaked at 0, indicating that any contribution of the interaction is already captured by other aspects of the model. %: one that included only an informativeness term (\tableref{tab:bestparams}, first row and \figref{fig:qualitativemodel}, second row) and one that included informativeness and the cost terms but no typicality (\tableref{tab:bestparams}, second row and \figref{fig:qualitativemodel}, third row). %The model without typicality performed significantly worse, as \red{can be seen in the lower correlations of the simpler models . % Interestingly, as model complexity increases, informativeness is given more weight (as can be seen in the increasing best $\lambda$ value), the extreme effects of which are balanced out by cost and typicality.} \section{\bf Discussion and conclusion} %\ndg{things to say: we got naturalistic data of nominal reference. this was affected by cost (length), context, and typicality. these factors fit naturally into an RSA model. this predicts basic level bias without building it in, and interactions between these factors. future work will need to explore: the item effects where perhaps visual salience plays a role (or something else?); the interaction of nominal and modifier choice; the role of typicality in RSA models. connect to rosch, the dutch guys, naomi's student.} The choice speakers make of how to refer to an object is influenced by a rich variety of factors. In this paper, we specifically investigated the choice of level of reference in nominal referring expressions. In an interactive reference game task in which speakers freely produced referring expressions, utterance choice was affected by utterance cost (in terms of length and frequency), contextual informativeness (as manipulated via distractor objects), and object typicality. % The interplay of these factors is naturally modeled within the RSA framework, where speakers are treated as choosing utterances by soft-maximizing utterance utility, which includes terms for informativeness and cost. In previous formulations of RSA models, informativeness was determined by a deterministic semantics; here we ``softened'' the semantics by allowing nouns to apply to objects to the extent that those objects were rated as typical for the nouns. 
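To make the production model concrete, the following sketch restates the literal listener and the speaker as plain functions. It is an editorial illustration, not the WebPPL implementation used for the reported analysis: the weights are fixed at the MAP values quoted above rather than integrated over, the typicality, length and frequency tables are placeholders for the elicited and corpus values, and the sign convention treats length as a penalty and log frequency as a bonus, which is one natural reading of the cost terms in the speaker equation.
\begin{verbatim}
import math

def literal_listener(label, objects, typ):
    # P_L0(o | label) with soft semantics [[l]](o) = exp(typicality(o, l));
    # unelicited typicalities default to 0, as in the model above.
    scores = {o: math.exp(typ.get((o, label), 0.0)) for o in objects}
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

def speaker(target, objects, labels, typ, length, logfreq,
            lam=10.8, beta_l=2.5, beta_f=1.3):   # MAP values from the text
    # P_S1(label | target): soft-max of informativeness and cost,
    # over the sub, basic and super labels for the target.
    util = {}
    for l in labels:
        info = math.log(literal_listener(l, objects, typ)[target])
        util[l] = lam * info - beta_l * length[l] + beta_f * logfreq[l]
    m = max(util.values())                       # numerical stability
    expu = {l: math.exp(u - m) for l, u in util.items()}
    z = sum(expu.values())
    return {l: e / z for l, e in expu.items()}
\end{verbatim}
With suitable placeholder values this sketch reproduces the qualitative pattern discussed above: a same-basic-level distractor reduces the informativeness of the basic level label and pulls probability toward the sub level term, while a long sub level label pushes it back toward the basic level.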
%\jd{mention connection to prototype theory? [caroline:] Maybe like this: } %We also incorporated a parsimony goal by including the effect of utterance cost. %showed that more complex cost functions can account for naturalistic data. In addition, The resulting model provided a good fit to speakers' empirical utterance choices, both qualitatively and quantitatively. %By means of gathering naturalistic data of nominal reference production we have shown that choice of level of reference depends in a gradient manner on both the informativeness of the reference level as well as on the fit of the object to a specific label due to typicality and the cost of the corresponding utterance (in terms of length) compared to the alternative utterances. These factors of cost, context and typicality fit naturally into the framework of an RSA model, which as we demonstrated can predict the interactions between these factors as well as the classical basic level bias, without actually building it in. %The preference for the use of basic level terms (e.g. ``car'') is a well documented psychological phenomenon. In a series of classical experiments on the structure of concepts, Rosch and colleagues (1974) established that there is a maximally informative level of abstraction in category taxonomies. Cars have a large number of features in common---drive on roads, have 4 wheels, are engine powered, move independently---which differentiate them from other vehicles, but there are fewer features that distinguish SUVs from minivans. At the same time, cars share only very few attributes with other vehicles, i.e. cars, bicyles and trains all function as a mode of transportation but there are not many other commonalities. Thus, a multitude of properties of an object can be predicted by naming this level of abstraction (e.g. ``car'', opposed to ``vehicle''), while simultaneously minimizing the amount of information which is likely to be unnecessary for disambiguation (i.e. in many contexts a sub level term such as ``SUV'' will be an irrelevant differentiation), whereby ensuring cognitive economy. Rosch called this level of abstraction the \emph{basic level} \cite{RoschEtAl76_BasicLevel}. %Our data also supports the existence of a basic level: If the context allowed it, participants were far more likely to use basic level terms than either sub or super level expressions. Interestingly however, our results as to what level constitutes a basic level are slightly different from Rosch et al.'s. That is, while categories like ``table'' and ``shirt'' were considered both by Rosch and by us as basic levels (with corresponding super levels ``furniture'' and ``clothing'' and sub levels such as ``coffee table'' and ``dress shirt''), we had a quite different conception of animal domains. In our experiment, ``fish'' was a basic level with ``animal'' as a super level and ``catfish'' as a sub level, whereas for Rosch ``fish'' was a super level with ``bass'' being the basic level and ``striped bass'' a sub level term. Perhaps the reason for this discrepancy is that basic levels are not rigid levels of categorization, but rather flexible as they also incorporate world knowledge. Many researchers agree that world knowledge plays an important part in human categorization behavior (e.g. Jolicoeur et al., 1984; Murphy and Medin, 1985) and a recent study by Orita et al. (2015) furthermore demonstrates that discourse affects the choice of referring expression. 
More evidence for the impact of world knowledge and discourse is provided by Tanaka and Taylor (1991), who showed that domain-specific ``expert'' knowledge can diminish the preference effect of the basic level to an extent that subordinate level terms are chosen as often as basic level terms to refer to objects. Considering that Rosch's experiments were conducted 40 years ago, it seems interesting to investigate whether changes in society or in the zeitgeist may have had an effect on the relevance of certain aspects of world knowledge, which causes a shift in what is regarded and used as a basic level of reference. The model predicts a well-documented preference for speakers to refer to objects at the basic level when not constrained by contextual considerations \cite{RoschEtAl76_BasicLevel}. In our model, this preference emerges naturally from cost considerations: basic-level labels tend to be shorter and more frequent than sub and super level terms. However, speakers did not always use the basic level term, even when unconstrained by context. In certain cases where object typicality was relatively high for the sub level term compared to the basic level term, that term was preferred (as was the case for ``panda bear''), suggesting an interesting interplay between typicality and level of description. While our results show that a model can capture several basic-level phenomena through frequency, length, and typicality features, it leaves open the origin and causal role of these linguistic regularities. Future research will be needed to determine how linguistic regularities are related to conceptual regularities and why. %Of course it is impossible to assert whether the notion of basic levels emerged from there being a level of abstraction having particularly beneficial cost and typicality features or whether the conceptual basic-level is primary, and efficient language structures evolved to mirror this preference. %Together, these observations suggest that language users need not learn or encode the basic level preference as an additional rule. Of course, it leaves open how the relevant cost and typicality features emerged in the first place: it is plausible that the conceptual basic-level is primary, and efficient language structures evolved to mirror this preference. %\todo[inline]{rdh: this paragraph bothers me a bit: a theory evoking underlying conceptual structure is more parsimonious, and our results about cost in language seem kind of circular at that level of explanation... it needs more than a single paragraph in the discussion to do justice to the issues there... Tried to put in this last sentence to feel less conflicted about it, but maybe that just makes it worse?} %\caroline{I think as long as we acknowledge the circularity, this is a nice point to make. But you're right that the phrasing should be more careful. Maybe like this? (see above)} % %\ndg{put somewhere? This captures aspects of prototype theory, which proposes a graded form of categorization whereby category membership is determined by how prototypical an object is for a category.} % %\todo[inline]{rdh: need a better transition here} %A possible reason for this may be the salience of the sub level construal of these objects. 
Salience has been found to be a factor contributing to utterance choice in other areas of referring expression production; for example, atypical features of objects are mentioned more often than typical features, which has been argued to be due to the greater salience of atypical features \cite{westerbeek2015}. This is in line with our results that objects which are \emph{atypical} for the basic level are more likely to be referred to with the sub level term. %The argument might also explain the high sub level use in the candy domain, as all sub level items of candy feature high-contrast colors with a high visual saliency. In the case of ``eagle'', a form of ``cultural saliency'' may also be imaginable, since the eagle is a very prominent and salient part of world knowledge in American culture. %Thus, many aspects of what constitutes basic levels should be investigated more. Especially the exploration of the saliency of features which may or may not be shared across category members is an interesting topic to be take up by future research. An interesting analogy can be drawn from choosing a noun to choosing a set of adjectives; that is, between selection of a level of reference in simple nominal referring expressions and selection of a set of features to include in modified referring expressions. For the latter, a much discussed phenomenon is that of \emph{overinformative} modifier use \cite{Gatt2014}---for example, saying ``big blue'' when all objects in the context are blue. The preference for the basic level in the \emph{super sufficient} condition and the still substantial use of sub level terms in the \emph{basic sufficient} condition can also be considered overinformative. However, we showed that a Rational Speech-Acts model using non-deterministic semantics, derived from typicality estimates, predicts that speakers \emph{should} use these more specific descriptions. The extent to which similar considerations may apply to modified referring expressions should be explored. Future research should also examine the interaction of these choices: circumstances under which speakers choose a modifier and how nominal and modifier choice interact. %There was, however, some item-wise variation that the model left unexplained. This may reflect improper calibration of our typicality or frequency measures, or it may reflect additional factors such as visual salience, which could for instance arise for objects characterized by high-contrast features \cite{Westerbeek2015}. %Concluding, this work delivers insights into the tradeoff between contextual constraints and features of utterance alternatives in speakers' choice of reference level in nominal referring expressions produced in natural dialog. A basic level preference naturally emerges from the formalization of the choice speakers face. %\ndg{fancy conclusion?} \section{\bf Acknowledgments} \small This work was supported by ONR grant N00014-13-1-0788 and a James S. McDonnell Foundation Scholar Award to NDG and an SNF Early Postdoc.~Mobility Award to JD. RXDH was supported by the Stanford Graduate Fellowship and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747. \bibliographystyle{apacite} \setlength{\bibleftmargin}{.125in} \setlength{\bibindent}{-\bibleftmargin} \bibliography{bibs} \end{document}
{ "alphanum_fraction": 0.7893289611, "avg_line_length": 131.8716216216, "ext": "tex", "hexsha": "a505c533c55923686fc9a00cfcc2754821cf02d5", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2017-03-17T21:51:18.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-25T09:53:20.000Z", "max_forks_repo_head_hexsha": "d20b66148c13af473b57cc4d1736191a49660349", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "thegricean/overinformativeness", "max_forks_repo_path": "writing/2016/cogsci/refexp.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "d20b66148c13af473b57cc4d1736191a49660349", "max_issues_repo_issues_event_max_datetime": "2020-04-21T01:26:05.000Z", "max_issues_repo_issues_event_min_datetime": "2015-11-30T21:44:31.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "thegricean/overinformativeness", "max_issues_repo_path": "writing/2016/cogsci/refexp.tex", "max_line_length": 1897, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d20b66148c13af473b57cc4d1736191a49660349", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "thegricean/overinformativeness", "max_stars_repo_path": "writing/2016/cogsci/refexp.tex", "max_stars_repo_stars_event_max_datetime": "2016-10-27T18:41:57.000Z", "max_stars_repo_stars_event_min_datetime": "2016-10-27T18:41:57.000Z", "num_tokens": 13393, "size": 58551 }
\chapter{Cryptographic schemes} Cryptographic schemes conceived for \zclaim are defined in the following pages. On the other hand, schemes taken from Sapling to which reference is made in this work were introduced in~\cref{ch:prelims} and will not be reproduced here. \section{Commitment schemes} We define a commitment scheme as in~\cite[Section 4.1.7]{hopwood2016zcash}. \subsection{Nonce commitment scheme} \label{app:nncm} The nonce commitment scheme \nncm may be instantiated as a Windowed Pedersen commitment scheme as defined in~\cite[Section 5.4.7.2]{hopwood2016zcash} in a similar fashion to Sapling's note commitment scheme \ncm, since the homomorphic properties required when hiding the note value are not necessary. It is defined as follows: \begin{flalign*} &\nncm_{\rcn}(\nlock) := \wpc_{\rcn}(\nlock)& \end{flalign*} \section{Signature schemes} We use the definition of a signature scheme in~\cite[Section 4.1.6]{hopwood2016zcash}. \subsection{Minting signature} \label{app:mas} \mas may be instantiated as \redjj as defined in~\cite[Section 5.4.6]{hopwood2016zcash} without key re-randomisation and with generator \begin{flalign*} &\mathcal{P}_{\G} = \fgh(\text{``\zclaimm''}, \text{``''})& \end{flalign*} \subsection{Vault signature} \label{app:vaultsig} \vaultsig may be instantiated as \redjj as defined in~\cite[Section 5.4.6]{hopwood2016zcash} without key re-randomisation and with generator \begin{flalign*} &\mathcal{P}_{\G} = \divhash(\dvf)& \end{flalign*} where \dvf is the diversifier associated with the vault in the vault registry. \section{SIGHASH transaction hashing} \label{app:sighash} We use the \sighash transaction hash as defined in~\cite{ZIP243}, not associated with an input and using the \sighash type \sighashall, to which we add two new fields: \begin{itemize} \item $\hashmints : \B^{[256]}$ is 0 if the transaction does not contain a Mint transfer, otherwise it is the \blakezclaim hash of the serialization of the Mint transfer (in its canonical transaction serialization format) with the personalisation field set to ``$\mathtt{ZclaimMintHash}$''. \item $\hashburns : \B^{[256]}$ is 0 if the transaction does not contain a Burn transfer, otherwise it is the \blakezclaim hash of the serialization of the Burn transfer (in its canonical transaction serialization format) with the personalisation field set to ``$\mathtt{ZclaimBurnHash}$''. \end{itemize}
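For illustration, the personalised hashing of these fields can be sketched as follows. This is an editorial sketch only: the canonical serialization of the transfer and the exact parametrisation of \blakezclaim are as defined above, and \texttt{mint\_bytes} is a stand-in for the serialized Mint transfer.
\begin{verbatim}
# Illustrative sketch of a personalised BLAKE2b-256 hash in the style
# of hashMints; the serialization format and the exact hash parameters
# are assumed, not specified here.
import hashlib

def hash_mints(mint_bytes: bytes) -> bytes:
    if not mint_bytes:             # no Mint transfer in the transaction
        return bytes(32)           # the field is defined to be 0
    return hashlib.blake2b(mint_bytes, digest_size=32,
                           person=b"ZclaimMintHash").digest()
\end{verbatim}
The Burn field is analogous, with the personalisation string \texttt{ZclaimBurnHash}.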
{ "alphanum_fraction": 0.7626491156, "avg_line_length": 51.7234042553, "ext": "tex", "hexsha": "e18f51188c9cedb22b7847cbbdd3c8ce2d8aab23", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "727b74ded4373c76e7e649b884d4c5ce650838e7", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "alxs/zclaim", "max_forks_repo_path": "sections/appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "727b74ded4373c76e7e649b884d4c5ce650838e7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "alxs/zclaim", "max_issues_repo_path": "sections/appendix.tex", "max_line_length": 300, "max_stars_count": 3, "max_stars_repo_head_hexsha": "727b74ded4373c76e7e649b884d4c5ce650838e7", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "alxs/zclaim", "max_stars_repo_path": "sections/appendix.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-30T07:42:21.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-18T16:33:27.000Z", "num_tokens": 696, "size": 2431 }
\documentclass[../main.tex]{subfiles} \begin{document} \section{Conclusion} \label{sec:conclusion} \end{document}
{ "alphanum_fraction": 0.7631578947, "avg_line_length": 19, "ext": "tex", "hexsha": "0d72fb188970517d2fddcd087626b4d9f942e780", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7b9fb5548be745e23af842eed40cacebdefd8098", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "JiahuaWU/fundus-imaging", "max_forks_repo_path": "latex_report/sections/conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7b9fb5548be745e23af842eed40cacebdefd8098", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "JiahuaWU/fundus-imaging", "max_issues_repo_path": "latex_report/sections/conclusion.tex", "max_line_length": 37, "max_stars_count": 4, "max_stars_repo_head_hexsha": "7b9fb5548be745e23af842eed40cacebdefd8098", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "JiahuaWU/fundus-imaging", "max_stars_repo_path": "latex_report/sections/conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-07T07:36:50.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-02T03:00:02.000Z", "num_tokens": 34, "size": 114 }
\chapter{S.S. Liki} \restartlist{enumerate} \liteversiondetermination{Exclude}{% \begin{enumerate} \item \cs[2:00], walk up to \yuna, \sd, walk back to \wakka, \sd, walk back up to \yuna, \cs + 4 \skippablefmv[4:20], \sd\ from `Sin!' \end{enumerate} } \begin{battle}[2000]{Sin Fin} \begin{itemize} \tidusf Defend \switch{\yuna}{\lulu} \luluf Thunder the Sin Fin \kimahrif Lancet the Sin Fin \enemyf Moves \tidusf Defend \kimahrif Lancet the Sin Fin \luluf Thunder the Sin Fin \switch{\tidus}{\yuna} \summon{\valefor} \valeforf Energy Blast \od\ on Sin Fin \end{itemize} \end{battle} \liteversiondetermination{Exclude}{% \begin{enumerate}[resume] \item \fmv+\cs[1:40] \end{enumerate} } \begin{battle}[2000]{Sinspawn Echuilles} \begin{itemize} \tidusf Spiral Cut as soon as it is available, then spam attacks for the rest of the fight \wakkaf Dark Attack \wakkaf If anybody is below 200HP potion them, otherwise Attack \enemyf Blender \wakkaf Dark Attack \wakkaf If anybody is below 200HP potion them, otherwise Attack \enemyf Blender \wakkaf Attacks \end{itemize} \end{battle} \liteversiondetermination{Exclude}{% \begin{enumerate}[resume] \item \skippablefmv+\cs[1:30], \sd\ during \tidus\ monologue. \end{enumerate} }
{ "alphanum_fraction": 0.7197149644, "avg_line_length": 28.7045454545, "ext": "tex", "hexsha": "5780dfc4a1b2be60518036ae7c35f94c3a9b68a3", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-07-28T03:02:16.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-28T03:02:16.000Z", "max_forks_repo_head_hexsha": "8045824bbe960721865ddb9c216fe4e2377a2aae", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "HannibalSnekter/Final-Fantasy-Speedruns", "max_forks_repo_path": "Final Fantasy X/Chapters_NSG/004_ssliki.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8045824bbe960721865ddb9c216fe4e2377a2aae", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "HannibalSnekter/Final-Fantasy-Speedruns", "max_issues_repo_path": "Final Fantasy X/Chapters_NSG/004_ssliki.tex", "max_line_length": 135, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8045824bbe960721865ddb9c216fe4e2377a2aae", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "HannibalSnekter/Final-Fantasy-Speedruns", "max_stars_repo_path": "Final Fantasy X/Chapters_NSG/004_ssliki.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-04T01:45:47.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-04T01:45:47.000Z", "num_tokens": 467, "size": 1263 }
\documentclass{abstract_hutech} \begin{document} \thispagestyle{firstpage} \twocolumn[ \begin{@twocolumnfalse} \vspace*{20pt} \begin{flushleft} \fontsize{20}{0}\selectfont{\textbf{Title}} \vspace{32pt}\par \fontsize{10}{12}\selectfont{\textbf{(Abstract) This is a template of the Extended Abstract for the HumanTech Paper Award. The recommended volume is 2 pages in 2-column format. Titles should not exceed two lines and abstracts should not exceed 15 lines.\\ Papers should be written in Times New Roman font with a font size of 20pt in bold for the title, 10pt in bold for abstracts, 11pt in bold for the titles within the text, 10pt for the text, 9pt in bold for the titles of figures and tables, and 9pt for the references. For the fairness of the review, the name, major, school/university name, school/university logo, and teacher's/professor's name of the author should not be included in the abstract or paper. }} \end{flushleft} \vspace{20pt} \end{@twocolumnfalse} ] \section{INTRODUCTION} The HumanTech Paper Award was established in 1994 with the purpose of encouraging Korean students to do research in science and technology. High school students and university students (undergraduate \& graduate) of Korean nationality and foreign students attending universities in Korea are eligible to submit papers to the HumanTech Paper Award. Papers should not have been published in any journal, including online, prior to full paper submission. The HumanTech Paper Award has three stages of evaluation to select the awardees. The first stage will be done with an extended abstract, the second with a full paper, and the third with an oral presentation. The submitted abstracts will be screened, and the writers of the selected abstracts will be required to submit a full paper. Reviewers will be experts in each field. The objectives, scope, results, importance, and originality of the study should be described in the submitted abstracts. \section{WRITING STYLE} Abstracts and papers should be written on A4 sized paper (21cm$\times$29.7cm) with margins of 3cm on the top, 2.5cm on the bottom, 1.5cm on the left and right, 2cm on the header, and 1cm on the footer. ``22nd HumanTech Paper Award'' should be written in the header. The header of the first page is 12pt (bold); the others are 9pt. The headers of even-numbered pages should be left-justified, and those of odd-numbered pages should be right-justified. Page numbers should be written in the footer. The footer of even-numbered pages should be left-justified, and that of odd-numbered pages should be right-justified. The recommended volume is 2 pages in 2-column format. Titles should not exceed two lines and abstracts should not exceed 15 lines. Papers should be written in English or Korean. Papers should be written in Times New Roman font for English and '바탕체' (Batang) for Korean, with a font size of 20pt in bold for the title, 10pt in bold for abstracts, 11pt in bold for the titles within the text, 10pt for the text, 9pt in bold for the titles of figures and tables, and 9pt for the references. For the fairness of the review, the name, major, school/university name, school/university logo, and teacher's/professor's name of the author should not be included in the abstract or paper. The main text page should be divided into two columns vertically with a margin of 0.5cm between the two columns. If you insert titles, tables, graphs, or formulas in the main text, please insert one blank line before and after them. The numbering of contents in the main text should use the following format: chapters are 1., 2., 3., and paragraphs are 1.1., 2.1.,
etc. List and number all bibliographical references at the end of the document. When referring to them in the text, type the corresponding reference numbers in square brackets \cite{True00}. The titles of tables, graphs, and pictures should start with a capital letter. Tables' titles should be written at the top of the tables, and pictures' titles should be written at the bottom of the pictures. \begin{figure}[t] \begin{center} \includegraphics[width=6.23cm]{example-image-a} \end{center} \caption{\bf HumanTech Logo}\label{Fig01} \end{figure} Only SI units should be used, and abbreviations should be spelled out when they first appear in the text. If a non-standard abbreviation is used first, it should be clearly defined. References should be written in the following format: \noindent Authors should be listed surname first, followed by a comma and the initials of given names. Titles of articles cited in reference lists should be in upright, not italic text; the first word of the title is capitalized, and the title is written exactly as it appears in the work cited, ending with a full stop. Book titles are italic with all main words capitalized. Journal titles are italic and abbreviated according to common usage. Volume numbers are bold. The publisher and city of publication are required for books cited. References to web-only journals should give authors, article title and journal name as above, followed by the URL in full or the DOI if known. References to websites should give authors if known, the title of the cited page, the URL in full, and the year of posting in parentheses. The year of publication (posting) should be written in parentheses. \section{CONCLUSION} Abstracts and papers should be written according to the following order: ①Title ②Abstract ③Main text ④References. \begin{thebibliography}{99} \bibitem{True00} True, H. L. and Lindquist, S. L. A yeast prion provides a mechanism for genetic variation and phenotypic diversity. {\it Nature} {\bf 407}, 477--483 (2000) \bibitem{Schluter00} Schluter, D. {\it The Ecology of Adaptive Radiation} (Oxford Univ. Press, 2000) \bibitem{Plazzo11} Plazzo, A. P. et al. Bioinformation and mutational analysis of channelrhodopsin-2 cation conducting pathway. {\it J. Biol. Chem.} http://dx.doi.org/10.1074/jbc.M111.326207 (2011) \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.7886054422, "avg_line_length": 84, "ext": "tex", "hexsha": "a395bdba88b88a7864ddee5c26d726f02a658bcd", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2021-11-18T10:56:31.000Z", "max_forks_repo_forks_event_min_datetime": "2018-06-27T23:57:45.000Z", "max_forks_repo_head_hexsha": "2ee93c0dfd5f991d076e558f248931dcc8786506", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "gshslatexintro/gshslatexintro-mirror", "max_forks_repo_path": "Humantech_Paper_Award/abstract_hutech_korean.tex", "max_issues_count": 18, "max_issues_repo_head_hexsha": "2ee93c0dfd5f991d076e558f248931dcc8786506", "max_issues_repo_issues_event_max_datetime": "2017-01-26T07:27:25.000Z", "max_issues_repo_issues_event_min_datetime": "2015-08-26T01:43:29.000Z", "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "gshslatexintro/gshslatexintro-mirror", "max_issues_repo_path": "Humantech_Paper_Award/abstract_hutech_korean.tex", "max_line_length": 918, "max_stars_count": 27, "max_stars_repo_head_hexsha": "2ee93c0dfd5f991d076e558f248931dcc8786506", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "gshslatexintro/gshslatexintro-mirror", "max_stars_repo_path": "Humantech_Paper_Award/abstract_hutech_korean.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-14T07:45:01.000Z", "max_stars_repo_stars_event_min_datetime": "2018-08-13T00:07:43.000Z", "num_tokens": 1432, "size": 5880 }
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2020} \usepackage{times} \usepackage{tabularx} \usepackage{latexsym} \usepackage{graphicx} \usepackage{subfig} \usepackage{commath} \renewcommand{\UrlFont}{\ttfamily\small} \newcommand{\spacemanidol}[1]{\textcolor{orange}{\bf \small [#1 --dc]}} \newcommand{\hayley}[1]{\textcolor{pink}{\bf \small [#1 --h]}} \setlength\titlebox{5cm} \newcommand\BibTeX{B\textsc{ib}\TeX} \usepackage{microtype} \aclfinalcopy \title{LING 573: Document Summarization Project Report} \author{Daniel Campos, Sicong Huang, Shunjie Wang, Simola Nayak, \and Hayley Luke \\ University of Washington \\ {\tt\{dacamp, huangs33, shunjiew, simnayak, jhluke\}@uw.edu}} \begin{document} \maketitle \begin{abstract} We design and implement Mockingbird, a topic-focused multi-document extractive summarization system. Building on the LexRank graph algorithm, our system uses sentence similarity and topic-sentence bias to produce candidates. Next, the ranked sentences are selected to limit redundancy and stay under 100 words. Our system outperforms the LEAD and MEAD baselines by a fair margin. Future work will explore how different forms of text representation and processing, along with more complex selection and ordering, can improve system performance. \end{abstract} \section{Introduction} Topic-oriented document clusters like AQUAINT \cite{Graff2002TheAC} and AQUAINT-2 have been used as a starting point to explore various methods for document summarization. More specifically, these corpora have been used to study extractive multi-document topic summarization. These corpora have been the focus of study in the TAC (Text Analysis Conference) \cite{Dang2008OverviewOT} summarization shared task. In the formalization of this task, given a topic and a set of newswire documents, a competitive system should create a high-quality summary of the topic using sentences from the documents. Systems are expected to produce summaries of up to 100 words, and summaries should be coherent and not contain duplication. Once summaries are generated for all the topics being studied, methods are evaluated and compared using the standard ROUGE metric \cite{Lin2004ROUGEAP}. Our exploratory system, called Mockingbird (MB), is based on the Biased LexRank graph approach \cite{Otterbacher2009BiasedLP}. In this approach, we produce a ranking of all candidate sentences by combining a matrix that represents inter-sentence similarity with a topic-sentence similarity bias. After a ranking is produced, sentences are selected to maximize their LexRank score but minimize duplication of content. Our method relies on word vectors \cite{Mikolov2013DistributedRO} from the spaCy library \footnote{https://spacy.io/} to represent sentences, and we experiment with the effects of evaluating complete sentences, sentences without stop words, and only the nouns in the sentences. After we have a set of candidate sentences, we sort them in reverse chronological order and then realize the content to match the TAC output format. To understand our system performance, we compare our system's ROUGE-1 and ROUGE-2 scores to the LEAD and MEAD baselines. Our system performs favorably across the board as we beat the two baselines (MEAD and LEAD). However, our system lags behind the top system from TAC 2010 by a significant margin.
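Because ROUGE recall is the yardstick used throughout this report, we include a minimal Python sketch of ROUGE-N recall (clipped n-gram overlap between a candidate summary and a single reference) purely as a reference point. The official ROUGE toolkit additionally handles stemming, multiple references, and bootstrap resampling, all of which this sketch omits; the example strings are made up.
\begin{verbatim}
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n])
                   for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=2):
    # Clipped n-gram overlap divided by the number of reference n-grams.
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

print(rouge_n_recall("the storm hit the coast",
                     "a storm hit the coast on monday", n=2))
\end{verbatim}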
\section{System Overview} Mockingbird has been designed as a simple end-to-end pipeline with the goal of producing a structure we can continue to tweak and modify to understand the effect various changes have on downstream performance. The pipeline, represented in Figure \ref{fig:overview}, broadly has four steps: document input and processing, content selection, information ordering, and content realization. \begin{figure}[h] \includegraphics[width=\linewidth]{doc/overview.png} \caption{Overview of Mockingbird's architecture} \label{fig:overview} \end{figure} \section{Approach} In this section we describe in detail each of the steps our system takes to summarize topics. \subsection{Data Input and Processing} Documents come from the AQUAINT and AQUAINT-2 corpora and are a mixture of HTML and XML. The document pipeline takes as input an XML configuration file which details a series of topics and associates them with a group of document IDs. Using this configuration file, we compile a list of topics and use the document IDs to determine the path to the relevant corpus file on disk. We then search the file for the information relevant to our specific document ID and extract the text, the date, and the title. We clean the data, converting the date to a usable format, stripping excess white space and symbols in the text, and normalizing quotation marks. We then break the text into sentences using spaCy's sentence tokenization and remove sentences that appear to be questions or bylines. \subsection{Content Selection} Next, there is the content selection pipeline. In this step we consume the structured information from the processing pipeline and produce a candidate set of sentences. Our system selects content using a modified version of the Biased LexRank graph approach. This method computes an inter-sentence similarity and a topic-sentence similarity, which are used to create a similarity matrix and a bias vector. In this method, each node of the graph represents a sentence, and edges between nodes are weighted by their inter-sentence similarity. This representation allows the ranking of sentences to be based on both their similarity to the topic and their centrality in the topic's document cluster. Our method differs from the original implementation of LexRank \cite{otterbacher-etal-2005-using} in that we leverage word embeddings instead of their tf-idf implementation. For each topic we first assemble all sentences in the scope of the topic that have at least four words. Then, for each sentence, we create a sentence representation. We explore using averaged word embeddings, IDF-weighted word embeddings, and transformer-based sentence embeddings \cite{reimers-2019-sentence-bert}. For our sentence representations we use spaCy to create a vector representation for each word in the sentence. Then we average all the embeddings in the sentence to produce a de facto sentence embedding. We explore variants of this method where we drop stop words, keep only nouns, or use the whole sentence. This method uses GloVe word embeddings \cite{Pennington2014GloveGV}. The IDF-weighted variant uses the same GloVe embeddings; however, instead of averaging across all words, we take the weighted average using each word's IDF value as the weight. For our transformer-based sentence embedding we use Sentence-BERT, which relies on a Siamese fine-tuned sentence representation model.
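As a concrete illustration of this content selection step, the sketch below builds pooled sentence vectors, the inter-sentence similarity matrix and topic bias vector, the biased ranking iteration formalised in the equations that follow, and the redundancy-filtered greedy selection. It uses plain numpy arrays in place of GloVe/spaCy or Sentence-BERT vectors; the function names are ours, while the parameter values shown (bias of 0.8, similarity threshold of 0.3, redundancy cutoff of 0.6, 100-word budget) mirror those quoted in the text.
\begin{verbatim}
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def sentence_vector(word_vectors, idf=None):
    # Mean pooling of word vectors; IDF-weighted mean when idf is given.
    W = np.asarray(word_vectors, dtype=float)
    if idf is None:
        return W.mean(axis=0)
    w = np.asarray(idf, dtype=float)
    return (W * w[:, None]).sum(axis=0) / w.sum()

def biased_lexrank(sent_vecs, topic_vec, d=0.8, threshold=0.3, eps=1e-4):
    # Power iteration for the biased LexRank scores described below.
    n = len(sent_vecs)
    S = np.array([[cosine(a, b) for b in sent_vecs] for a in sent_vecs])
    S[S < threshold] = 0.0               # drop weak inter-sentence edges
    S /= S.sum(axis=1, keepdims=True)    # row-normalise the similarity walk
    bias = np.array([max(cosine(v, topic_vec), 0.0) for v in sent_vecs])
    bias /= bias.sum() + 1e-12
    p = np.full(n, 1.0 / n)              # start from a uniform distribution
    while True:
        p_next = d * bias + (1 - d) * (S.T @ p)
        if np.abs(p_next - p).sum() < eps:
            return p_next
        p = p_next

def select_sentences(sentences, scores, vecs, max_words=100, redundancy=0.6):
    # Greedy selection by score, skipping near-duplicates of earlier picks.
    chosen, used = [], 0
    for i in np.argsort(scores)[::-1]:
        if any(cosine(vecs[i], vecs[j]) > redundancy for j in chosen):
            continue
        n_words = len(sentences[i].split())
        if used + n_words <= max_words:
            chosen.append(i)
            used += n_words
    return [sentences[i] for i in chosen]
\end{verbatim}
The remainder of this subsection formalises these steps.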
Using these representations, we follow Equation \ref{equation:1} to generate a two-dimensional matrix representing the cosine similarity between all the sentences related to the topic. Additionally, we create a 1d bias vector which represents the cosine similarity between the topic and each sentence. For inter-sentential cosine similarity, we use a minimum threshold of 0.3, as we find word-embedding similarity tends to be higher than tf-idf similarity. We now use sentence embeddings based on \cite{reimers-2019-sentence-bert}. \begin{equation} \label{equation:1} similarity(u,v) = cos(u,v) = \frac{u \cdot v}{\norm{u}\norm{v}} \end{equation} Using the bias vector and the inter-sentence similarity matrix we compute a LexRank score for each sentence using Equation \ref{equation:2}, where $s$ is a sentence, $q$ is the topic string, $d$ is the topic bias weight (set to 0.8), and $S$ represents all sentences in the topic. \begin{equation} bias(s|q) = \frac{cos(s,q)}{\sum_{a \in S} cos(a,q) } \end{equation} \begin{equation} sentsim(s,S) = \sum_{s_1 \in S} \frac{cos(s,s_1)}{\sum_{s_2 \in S} cos(s_1,s_2) } \end{equation} \begin{equation} \label{equation:2} lr(s|q) = d * bias(s|q) + (1-d) * sentsim(s,S) \end{equation} In our system, we turn our 2d inter-sentential similarity matrix ($IS$) and bias vector ($BV$) into a matrix $M$ using Equation \ref{equation:3}. Then, we use the power method, as described in Equation \ref{equation:4}, to produce converged LexRank values for each sentence. As LexRank represents sentence probabilities, we initially have a uniform distribution (all sentences are equally likely to show up) and we use the power method until it converges to within a tolerance $\epsilon$ of 0.3. \begin{equation} \label{equation:3} M = [d * IS + (1-d) * BV]^T \end{equation} \begin{equation} \label{equation:4} P_t = M * P_{t-1} \end{equation} Once we have assembled sampling probabilities for each sentence, we select sentences until we have either used all candidate sentences or reached the target length of 100 words. Sentences are initially ranked by LexRank score, and we select candidates that have less than a 0.6 cosine similarity to all previously selected sentences. This filter is applied to minimize sentence redundancy. \subsection{Information Ordering} Following content selection we run our information ordering system. The method used is based on the entity grid approach proposed in \citet{barzilay-lapata-2008-modeling}. In order to build an entity grid for a document, we first use spaCy pre-trained models to perform POS tagging and dependency parsing on the sentences in order to identify all the nouns and label them as $S$, $O$, or $X$ according to their dependency tags. Each column in the grid represents one noun, and each row represents one sentence. The cells contain the labels above, indicating the dependency tag of the noun in the sentence. We then vectorize each entity grid into a feature vector. The features are bigram transitions in $\{S,O,X,-\}^2$, so there are 16 features in total. A transition is defined as how symbols in each column change from one to another in adjacent sentence rows. The value of each feature is the probability of the particular bigram transition occurring among all transitions. We treat sentence ordering as a ranking problem, and our goal is to learn a ranking function such that, given a set of candidate sentence orderings, the function ranks the candidates and the best is kept as the optimal sentence ordering.
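The entity-grid featurisation described above is compact enough to sketch directly: the code below converts a toy grid (rows are sentences, columns are entities, cells drawn from $\{S,O,X,-\}$) into the 16 bigram-transition probabilities used as the feature vector. The grid encoding as Python lists and the toy example are assumptions for illustration; these feature vectors are what the ranking model described next consumes.
\begin{verbatim}
from collections import Counter
from itertools import product

ROLES = ["S", "O", "X", "-"]
TRANSITIONS = list(product(ROLES, repeat=2))  # the 16 bigram transition types

def transition_features(grid):
    # grid[row][col] is the role of entity `col` in sentence `row`.
    counts = Counter()
    for col in range(len(grid[0])):
        for row in range(len(grid) - 1):
            counts[(grid[row][col], grid[row + 1][col])] += 1
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in TRANSITIONS]

# Toy grid: 3 sentences, 2 entities.
example = [["S", "-"],
           ["O", "-"],
           ["-", "S"]]
print(transition_features(example))
\end{verbatim}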
We train an SVM model for this purpose using $SVM^{rank}$ \citep{joachims2006training}. The training data is sampled from the TAC 2009 human-created model summaries. We randomly choose 80 documents in the set and build entity grids and feature vectors for them. Then, for each of the 80 documents, we generate up to 20 random sentence permutations of the document and their corresponding feature vectors. We pair each random permutation with the gold ordering, which gives up to 1600 pairs in total, and we label the gold ordering with a higher rank than the random permutation, so the model should favor the gold ordering, which is more coherent. Once the ranking model is trained, we can use it to predict the rank of new instances. Given selected sentences from the content selection component, we generate a number of sentence permutations as candidate orderings. One of the candidates is the chronological ordering of the sentences. We then build entity grids and feature vectors for all candidates, predict the rankings of the candidates, and keep the best according to our SVM ranking model's prediction. \subsection{Content Realization} Following information ordering we run our content realization system. This system cleans the text with regular expressions, largely as a redundancy for the cleanup done during preprocessing. For this deliverable, two naive techniques to improve readability were attempted, but they added computational overhead and worsened ROUGE-2: splitting large sentences in half and trimming gratuitous nodes from the resulting sentences. As a result, we had to revert to the previous iteration of our content realization system. \subsubsection{Attempted Improvement} A three-stage pipeline was introduced: cleanup, followed by splitting sentences, followed by trimming gratuitous modifiers. Splitting the large sentences, meaning anything larger than 7 or 8 tokens, was a largely naive procedure applied strictly to sentences that contained multiple nodes labeled as ``ROOT'' in the parse. The procedure found the first node labeled as `cc' by the spaCy parse and split the sentence at that index, creating two new sentences. The impact of such an approach included contrast discourse relations (e.g., those from the conjunction ``but'') potentially being lost. An example of the result of such a split is as follows, with changed areas underlined:\\ \begin{tabular}{l p{55mm}} \textbf{Original} & Next week the Security Council is to review the sanctions imposed against Baghdad following the invasion of \underline{Kuwait, and western} diplomats are predicting that the four-year-old embargo will remain in force.\\ \textbf{Split} & Next week the Security Council is to review the sanctions imposed against Baghdad following the invasion of \underline{Kuwait. Western} diplomats are predicting that the four-year-old embargo will remain in force. \end{tabular}\\ Similarly, long sentences extracted from newspapers were also split at semicolons. Both approaches to splitting long sentences improved individual sentence readability (as determined by humans); however, the small sentences led to discourse simplified beyond the degree typical of such a summary.\\ After splitting sentences, the spaCy parser was reused to parse the split sentences for modifier removal. This step removed adjectives and adverbs alike. Because of the naive nature of such deletions, informative modifiers were removed in addition to gratuitous ones, reducing readability.
Such deletions meant sentences with degree modifiers ended up with less information, and this was particularly problematic for phrases like ``high risk'' and ``low probability.'' Due to such word deletions, ROUGE-1 and ROUGE-2 recall scores dropped significantly; conversely, the conciseness-motivated revisions meant ROUGE-1 precision increased. \subsection{Evaluation} We use both automatic (ROUGE) and human evaluation to assess our system performance. To run the ROUGE evaluation script, we generate a configuration file that associates our system's output summaries with the provided gold-standard human summaries. For easy comparison, we only consider the recall scores of ROUGE-1 and ROUGE-2. A human evaluation of readability is performed on a subset of the output of the final iteration of our system. Each annotator is asked to assign a score between 1 and 5 to a system summary. The criteria are as follows: \vspace{-0.2cm} \begin{enumerate} \itemsep -0.1cm \item poor: confusing, bizarre, or incoherent; unrecoverable contexts \item fair: several readability problems, but most are recoverable \item ok: mostly understandable; a few minor readability or flow issues \item good: fully coherent; minor stylistic problems \item excellent: idiomatically correct, readable, and cohesive \end{enumerate} \section{Results} To evaluate our system performance we ran our system on the 2010 TAC shared evaluation task. The 2010 TAC task has 43 systems including baselines. The first baseline (LEAD) created summaries using the first 100 words from the most recent document. The second baseline was the output of the MEAD summarization system \cite{Radev2003MEADRM}. We have also included system 43 and system 22 as benchmarks because they had the best performance in the shared task (on the eval and dev sets, respectively). We evaluate models according to their ROUGE-2 and ROUGE-1 recall scores on the devtest portion of the 2010 TAC task. Our system has variants that explore stop-word removal for word vectors (NS), IDF weighting (IDF), and a transformer model. Each implementation includes the two best tuned runs. Looking at Table 1, we can see that our system, Mockingbird, outperforms the LEAD baseline but is slightly worse than MEAD and lags behind system 22. We find there is some variation across sentence representation methods, but results are mostly similar. Looking at Table 2, we explore the effect our hyperparameters have on model performance and see that there is higher variability when using the same model architecture with different parameters than across different model architectures. Human readability scores as described in Section 3.5 were obtained from a sample of six different summaries generated by our system.
The scores are as follows: \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline \textbf{Summary ID} & \textbf{Score}\\ \hline 1105 & 3.52 \\ \hline 1106 & 3.82\\ \hline 1107 & 3.53\\ \hline 1108 & 3.23\\ \hline 1109 & 3.51\\ \hline 1110 & 3.50\\ \hline \textbf{Avg} & \textbf{3.52}\\ \hline \end{tabular} \caption{The human readability scores for six different summaries.} \label{table:readability} \end{table} \begin{table}[h] \begin{tabular}{|l|l|l|} \hline \textbf{System Name} & \textbf{R-1} & \textbf{R-2}\\ \hline LEAD & - & 0.05376 \\ \hline MEAD & - & 0.05927 \\ \hline \textbf{MB}(NS-0.7-0.0-0.1) & 0.213 & 0.046 \\ \hline \textbf{MB}(NS-0.8-0.3-0.3) & 0.219 & 0.049 \\ \hline \textbf{MB}(IDF-0.2-0.0-0.1) & 0.214 & 0.050 \\ \hline \textbf{MB}(IDF-0.7-0.0-0.1) & 0.216 & 0.054 \\ \hline \textbf{MB}(IDF-0.8-0.3-0.3) & 0.217 & 0.055 \\ \hline \textbf{MB}(Transformer-0.7-0.0-0.1) & 0.231 & 0.057 \\ \hline \textbf{MB}(Transformer-0.2-0.0-0.1) & 0.237 & 0.058 \\ \textit{(D3 configuration)} & & \\ \hline \textbf{MB}(Sentence Clipping) & 0.229 & 0.043\\ \hline \textbf{MB}(Final Devtest) & \textbf{0.231} & \textbf{0.058}\\ \hline \textbf{MB}(Final Evaltest) & \textbf{0.265} & \textbf{0.063}\\ \hline System 43 & - & 0.01154\\ \hline System 22 & - & \textbf{0.09574} \\ \hline \end{tabular} \caption{R-1 and R-2 represent ROUGE-1 and ROUGE-2. NS represents no stop words; IDF represents IDF-weighted word embeddings. The numbers that follow represent the hyperparameter values: bias value, sentence similarity threshold, and epsilon value.} \label{table:1} \end{table} \begin{table}[h] \centering \begin{tabular}{|l|l|l|l|l|} \hline \textbf{B} & \textbf{T} & \textbf{e} & \textbf{R-1} & \textbf{R-2}\\ \hline 0.0 & 0.0 & 0.1 & 0.20981 & 0.04281 \\ \hline 0.0 & 0.1 & 0.1 & 0.20154 & 0.04048 \\ \hline 0.0 & 0.2 & 0.1 & 0.20625 & 0.04214 \\ \hline 0.0 & 0.3 & 0.1 & 0.19241 & 0.04359 \\ \hline 0.0 & 0.4 & 0.1 & 0.19431 & 0.05530 \\ \hline 0.0 & 0.5 & 0.1 & 0.19399 & 0.05522 \\ \hline 0.0 & 0.6 & 0.1 & 0.19399 & 0.05522 \\ \hline 0.0 & 0.7 & 0.1 & 0.19399 & 0.05522\\ \hline 0.0 & 0.8 & 0.1 & 0.19399 & 0.05522 \\ \hline 0.0 & 0.9 & 0.1 & 0.19399 & 0.05522 \\ \hline 0.0 & 1.0 & 0.1 & 0.20981 & 0.04281 \\ \hline 0.1 & 0.0 & 0.1 & 0.22791 & 0.05073 \\ \hline 0.2 & 0.0 & 0.1 & \textbf{0.23691} &\textbf{0.05809} \\ \hline 0.3 & 0.0 & 0.1 & 0.23180 & 0.05667 \\ \hline 0.4 & 0.0 & 0.1 & 0.23151 & 0.05466 \\ \hline 0.5 & 0.0 & 0.1 & 0.22822 & 0.05448 \\ \hline 0.6 & 0.0 & 0.1 & 0.22689 & 0.05438 \\ \hline 0.7 & 0.0 & 0.1 & 0.23141 & 0.05736 \\ \hline 0.8 & 0.0 & 0.1 & 0.22735 & 0.05722 \\ \hline 0.9 & 0.0 & 0.1 & 0.22738 & 0.05781 \\ \hline 1.0 & 0.0 & 0.1 & 0.22607 & 0.05470 \\ \hline \end{tabular} \caption{Hyperparameter tuning for the transformer system. R-1 and R-2 represent ROUGE-1 and ROUGE-2. B represents the query-topic bias, T represents the sentence similarity threshold, and e represents the epsilon value.} \label{table:2} \end{table} \section{Discussion} We were surprised at how little impact introducing sentence representations based on a pre-trained language model had on our final model performance. Prior to implementation we expected a large discrepancy between text representation methodologies but saw little difference. We saw large variability in scores with different hyperparameters, so we will continue to tweak and measure them as we improve model performance. However, one aspect that likely improved human readability was preprocessing to get rid of stray tags in the corpus.
Analyzing typical errors made by our system, we found that document parsing continues to be an issue, as selected sentences often contain unwanted punctuation such as quotation marks. The biased LexRank also seemed to favor the first sentence of the document, which usually contains irrelevant information such as the city where the story happened and the name of the news agency. In general, the system is inclined to pick long sentences and sentences with quotes. Likewise, the information ordering step treated sentence ordering as a ranking problem based on a set of vectorized candidate orderings using $SVM^{rank}$ for all possible permutations. Orphaned references at the beginning of a summary, such as pronouns before antecedents, were controlled for due to the bigram transitions in the entity grid. The model of sentence coherence in the corpus likely ensured coreference compatibility. However, coupled with the long quotes and sentences from the content selection step, the model of cohesion dealt with information-dense sentences that functioned as a single discourse unit. Instead of breaking sentences down at the content realization stage, the selection step could have broken down the sentences into simpler units. That way, fusion would have been more appropriate for the final stage. Content realization could have taken on the additional task of implementing sentence fusion by common subtrees \cite{BarzilayMckeown05}. Sentences could have been interleaved with one another by substituting the root of a less-informative subtree with the root of a more-informative subtree with the root and at least one child in common; such a step could have taken advantage of the entity grid from the information ordering step. Additionally, coreference replacement could have taken place based on degrees of information ``freshness.'' \section{Conclusion and Future Work} Mockingbird (MB) is a topic-based multi-document extractive text summarization system that leverages word vectors to select salient sentences. MB is an end-to-end summarization system that has been implemented in a simple and expandable fashion with the goal of allowing quick tweaks for exploration of model performance. MB uses a modified version of the LexRank similarity graph method, which uses inter-sentence similarity and topic-sentence similarity to produce a sampling probability for each sentence in a topic. MB then selects greedily from the ranked candidate sentences to reach the 100-word summary target, producing relevant summaries that have little redundancy. Finally, MB uses an SVM-based ranking model to select an optimal sentence ordering for a coherent summary.\\ MB outperforms both the MEAD and LEAD baselines for the 2010 TAC summarization track on ROUGE-2 scores on the devtest set. In future iterations of MB we will tune our hyperparameters, implement more text normalization, and implement more complex and complete information ordering and content realization systems. With respect to content realization, we intend to replace simple sentence splitting and node removal with a transformer-based system to improve sentence fluency; or, under time constraints, take advantage of the entity grid created during the information ordering step to substitute appropriate coreferences based on recency of information. \bibliography{acl2020} \bibliographystyle{acl_natbib} \end{document}
{ "alphanum_fraction": 0.7770522388, "avg_line_length": 113.9323671498, "ext": "tex", "hexsha": "5502f299e297079ab9a4e2542f737a06761f20b3", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-12-26T01:32:30.000Z", "max_forks_repo_forks_event_min_datetime": "2020-12-26T01:32:30.000Z", "max_forks_repo_head_hexsha": "b7bc935af2482f1cb329c4f6d54d5d7773fa1978", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "spacemanidol/LING573SP20", "max_forks_repo_path": "doc/acl2020.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b7bc935af2482f1cb329c4f6d54d5d7773fa1978", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "spacemanidol/LING573SP20", "max_issues_repo_path": "doc/acl2020.tex", "max_line_length": 1502, "max_stars_count": null, "max_stars_repo_head_hexsha": "b7bc935af2482f1cb329c4f6d54d5d7773fa1978", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "spacemanidol/LING573SP20", "max_stars_repo_path": "doc/acl2020.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5948, "size": 23584 }
\documentclass{article} \usepackage{setspace} \begin{document} \centerline{\sc \large How can I help others understand my standards?} \vspace{.5pc} \centerline{\sc Sunday School course 17--18} \centerline{\sc 27 September 2015} \vspace{.5pc} \centerline{\scriptsize (see https://www.lds.org/youth/learn/ss/commandments/standards)} \vspace{6pc} \section*{References} \begin{itemize} \item 2 Nephi 8:7 \item Romans 1:16 \item 2 Timothy 1:7--8 \item 1 Timothy 4:12 \item 3 Nephi 11:29 \item D\&C 11:21; 84:85; 100:5--8 \item Answering Gospel Questions (https://www.lds.org/topics/answering-gospel-questions) \end{itemize} \section*{Story} Working in the naval program at Oak Ridge, Tennessee, Elder Scott completed the equivalent of a doctorate in nuclear engineering. Because the field was top secret, a degree could not be awarded. The naval officer who invited young Richard Scott to join the nuclear program was Hyman Rickover, a pioneer in the field. They worked together for 12 years -- until Elder Scott was called to serve as mission president in Argentina in 1965. Elder Scott explained how he received the call: ``I was in a meeting one night with those developing an essential part of the nuclear power plant. My secretary came in and said, `There's a man on the phone who says if I tell you his name you'll come to the phone.' ``I said, `What's his name?' ``She said, `Harold B. Lee.' ``I said, `He's right.' I took the phone call. Elder Lee, who later became President of the Church, asked if he could see me that very night. He was in New York City, and I was in Washington, D.C. I flew up to meet him, and we had an interview that led to my call to be a mission president.'' Elder Scott then felt he should immediately let Admiral Rickover, a hardworking and demanding individual, know of his call. ``As I explained the mission call to him and that it would mean I would have to quit my job, he became rather upset. He said some unrepeatable things, broke the paper tray on his desk, and in the comments that followed clearly established two points: `` `Scott, what you are doing in this defense program is so vital that it will take a year to replace you, so you can't go. Second, if you do go, you are a traitor to your country.' ``I said, `I can train my replacement in the two remaining months, and there won't be any risk to the country.' ``There was more conversation, and he finally said, `I never will talk to you again. I don't want to see you again. You are finished, not only here, but don't ever plan to work in the nuclear field again.' '' ``I responded, `Admiral, you can bar me from the office, but unless you prevent me, I am going to turn this assignment over to another individual.' '' True to his word, the admiral ceased to speak to Elder Scott. When critical decisions had to be made, he would send a messenger. He assigned an individual to take Elder Scott's position, whom Elder Scott trained. On his last day in the office, Elder Scott asked for an appointment with the admiral. His secretary was shocked. Elder Scott entered the office with a copy of the Book of Mormon. Elder Scott explained what happened next: ``He looked at me and said, `Sit down, Scott, what do you have? I have tried every way I can to force you to change. What is it you have?' There followed a very interesting, quiet conversation. There was more listening this time. ``He said he would read the Book of Mormon. Then something happened I never thought would occur. He added, `When you come back from the mission, I want you to call me. 
There will be a job for you.' '' Elder Scott shared the lesson he learned from this and the many other times he chose the right despite opposition: ``You will have challenges and hard decisions to make throughout your life. But determine now to always do what is right and let the consequence follow. The consequence will always be for your best good.'' \subsection*{Discussion} \begin{enumerate} \item How did Elder Scott prioritize his life? \item Describe the pressure under which Elder Scott found himself. How did he deal with a high-ranking commanding officer? Do you ever find yourself in similar situations? \item What can we learn from Elder Scott's understanding and application of our responsibility to share the gospel (see references)? \end{enumerate} \section*{Obedience} From \textit{True to the Faith} (2004), 108--9: \begin{quotation} Many people feel that the commandments are burdensome and that they limit freedom and personal growth. But the Savior taught that true freedom comes only from following Him: ``If ye continue in my word, then are ye my disciples indeed; and ye shall know the truth, and the truth shall make you free'' (John 8:31--32). God gives commandments for your benefit. They are loving instructions for your happiness and your physical and spiritual well-being. \end{quotation} \begin{enumerate} \item How would you respond to a friend who says the commandments are too restrictive? \item What scriptures, examples, or personal experiences could you share with a friend to help him or her understand the purposes of God's commandments? \end{enumerate} \section*{Role-play} Practice explaining your standards in various situations. \end{document}
{ "alphanum_fraction": 0.7659413434, "avg_line_length": 65.2469135802, "ext": "tex", "hexsha": "e4195ae3d095ff46cc86c42b403a2f989cb22d8e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3db8d1e20b9ad6d1bd88f7e5e447819324dcb665", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "mpherg/church", "max_forks_repo_path": "cfm-sep-standards.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3db8d1e20b9ad6d1bd88f7e5e447819324dcb665", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "mpherg/church", "max_issues_repo_path": "cfm-sep-standards.tex", "max_line_length": 482, "max_stars_count": null, "max_stars_repo_head_hexsha": "3db8d1e20b9ad6d1bd88f7e5e447819324dcb665", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "mpherg/church", "max_stars_repo_path": "cfm-sep-standards.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1270, "size": 5285 }
\section{Introduction} Since the discovery of long-lived oscillations in cross peaks of the two-dimensional electronic spectrum of the Fenna-Matthews-Olson (FMO) complex~\cite{FMO1}, there has been a great deal of interest because the observed oscillations could mean that photosynthetic chlorophylls were keeping a stable superposition of electronic excited states for much longer than a chaotic biological environment was expected to allow~\cite{FMO2,Panitchayangkoon2011,Lambert2012}. A long-lived electronic superposition (coherence) can have positive consequences for energy transfer rates and efficiencies~\cite{FMO1}, so biological protection against electronic decoherence would be a profoundly interesting result of natural selection--one that we might hope to emulate. There is a further suggestion that the FMO and other photosynthetic complexes may have an optimal dephasing rate for energy transfer efficiency, between perfect coherence and strong dephasing, both of which are sub-optimal~\cite{energyTransfer,Panitchayangkoon2011,Fidler2013,Collini2010}. There is also great interest in being able to mimic these evolution-tuned efficiencies in artificial designs~\cite{Creatore2013}. The assignment of the off-diagonal oscillations to electronic effects is, however, disputed, as vibrations can cause similar signatures. While much useful work has been done to pinpoint the unique signatures of electronic coherence and the unique signatures of vibrational coherence in a two-dimensional electronic spectroscopy experiment~\cite{FMO2,mech2,mech3,mech1,mech4,Halpin2014,Chenu2013}, such signatures remain disputed, and a simpler, unambiguous tool to distinguish electronic from vibrational coherences is valuable. Yuen-Zhou et al.\ proposed such an experiment \cite{witness}, which aims to give a concrete yes or no answer (called a witness in the language of quantum information~\cite{Chuang2005}) to the question of whether a given coherent oscillation has electronic character. That work proposed using only a one-dimensional pump-probe experiment in the impulsive (i.e., ultrashort pulse) limit. They showed that for impulsive pulses, there are no oscillations in the frequency-integrated pump-probe signal when a system has only vibrational coherences. Oscillations in the pump-probe signal are a witness for electronic coherence. Thus, for the rest of this paper, we will refer to the proposed procedure as ``The Witness''. The effects of pulse widths on pump-probe spectroscopy had been studied previously~\cite{pulseWidth}, but the proposed witness uniquely suggested using pulse width as an independent variable in an experiment, which should be feasible given developments in two-dimensional spectroscopy using pulse-shaping devices~\cite{pulse1,pulse2,Stock1988,Stock1992}. Johnson et al.~\cite{allanWitness} showed that the witness procedure works for finite-duration pulses and proposed to use several such pulse durations to extrapolate to the impulsive limit. Generally speaking, if the oscillations in a pump-probe experiment approach zero as the pulse duration approaches zero, then there is no electronic coherence. If, however, the oscillations increase as the pulse duration approaches zero, then there is an electronic coherence. An essential assumption of Refs.~\cite{witness,allanWitness} is the Condon approximation, that transition dipoles have no variation with respect to nuclear coordinate~\cite{Condon,FranckCondon}.
This approximation is widely used in molecular spectroscopy, and there are also well-known systems in which non-Condon effects are highly important \cite{photosyntheticKappa,MavrosNonCondon,hellerGraphene,Hockett2011}. In this work, we show how well the witness protocol can tolerate a violation of the Condon approximation. We show that for systems with small electron-vibration coupling, quantified by the Huang-Rhys factor $S$, the witness protocol can give false-positive results even with small non-Condon effects. For larger values of $S$, where the vibrational coherence is more visible in optical experiments, the witness protocol is robust to larger non-Condon effects, with a small modification to the interpretation of the data. We quantify the parameter range in which the protocol is expected to work. To test the effect of transition dipole variation on the witness experiment, we construct the simplest possible system with vibrational effects but no electronic coherence and then simulate pump probe experiments. Our system of choice has two electronic levels and a single harmonic vibrational level. This system is the same as that studied in Refs.~\cite{witness,allanWitness}, with the addition of non-Condon effects in the transition dipole. \section{Methods} We study a model system with two electronic levels, labelled $\ket{g}$ and $\ket{e}$, coupled to a single harmonic vibrational mode. The frequency of the vibration is $\omega_\gamma$, $\omega_\epsilon$ when the system is in the state $\ket{g}$, $\ket{e}$, respectively, giving a Hamiltonian \begin{align} H_0 &= \sum_n \hbar \omega_{\gamma} \left(n + \frac{1}{2} \right) \ket{n_{\gamma}}\ket{g}\bra{g} \bra{n_{\gamma}} \\ &+ \sum_m \left( \hbar \omega_{\epsilon} \left(m + \frac{1}{2} \right) + \omega_e \right) \ket{m_{\epsilon}} \ket{e}\bra{e} \bra{m_{\epsilon}}. \end{align} Greek indices correspond to the vibrational states and the roman indices correspond to the electronic states. The ground-state vibrational wavefunctions $\langle x \ket{n_\gamma}$ are centered at vibrational coordinate $x=0$ while the excited-state vibrational wavefunctions $\langle x \ket{m_\epsilon}$ are centered at $x=\delta x$, giving a Huang-Rhys factor of $S=\frac{\omega_\gamma \omega_\epsilon(\delta x)^2}{\hbar (\omega_\gamma+\omega_\epsilon)}$. We consider the transition dipole to be aligned with the optical pulse polarizations, for simplicity, with a transition dipole \begin{align} \hat{\mu}(x) &= \mu (x) \left( \ket{e}\bra{g} + \ket{g} \bra{e} \right). \end{align} In the Condon approximation, $\mu(x)=\mu_0$ is a constant. In the spirit of keeping things as simple as possible to demonstrate the effect of transition dipole variations, we consider a linear form of $\mu(x)$: \begin{align} \mu(x) &= \mu_0\left( 1 + \kappa x \right) \end{align} To make $\kappa$ comparable for different values of $\delta x$, we define also the dimensionless quantity $\lambda=\delta x \kappa$. The quantity $\lambda$ has the useful property that above $\lambda=1$, the variation of $\mu$ as the system moves from the ground- to excited-state equilibrium position becomes larger than the mean value $\mu_0$. As a reference, in C-phycocyanin, $\kappa$ has been calculated to be approximately 0.3 and $\lambda$ to be approximately 0.1 \cite{photosyntheticKappa}. We simulate Pump Probe spectroscopy using ideal Gaussian laser pulses, which differ only in their pulse durations and arrival times. 
This means that both of our pulses (labeled pu for pump and pr for probe) will have the form: \begin{align} E_{\text{pu}} &= \frac{E_0}{\sqrt{2 \pi \sigma^2}} e^{-\frac{t^2}{2 \sigma^2} } \left[ e^{-i \omega_c t} + e^{i \omega_c t} \right]\\ E_{\text{pr}} &= \frac{E_0}{\sqrt{2 \pi \sigma^2}} e^{-\frac{\left(t-T\right)^2}{2 \sigma^2} } \left[ e^{-i \omega_c \left(t-T\right)} + e^{i \omega_c \left(t-T\right)} \right] \end{align} We normalize the pulses as in Ref.~\cite{allanWitness} to keep the integral of the absolute value of the electric field constant across with varying pulse widths. The pulse's central frequency $\omega_c$ is always tuned to the strongest absorption transition in the simulated system and the pulse width $\sigma$ is our main experimental independent variable. In the rotating wave approximation, it is helpful to further split the laser pulses into positive frequency excitation pulses: \begin{align} E_{\text{pu}+}(t) &= \frac{E_0}{\sqrt{2 \pi \sigma^2}} e^{-\frac{t^2}{2 \sigma^2} } e^{-i \omega_c t} \\ E_{\text{pr}+}(t) &= \frac{E_0}{\sqrt{2 \pi \sigma^2}} e^{-\frac{\left(t-T\right)^2}{2 \sigma^2} } e^{-i \omega_c \left(t-T\right)} \end{align} and negative frequency relaxation pulses: \begin{align} E_{\text{pu}-}(t) &= \frac{E_0}{\sqrt{2 \pi \sigma^2}} e^{-\frac{t^2}{2 \sigma^2} } e^{i \omega_c t} \\ E_{\text{pr}-}(t) &= \frac{E_0}{\sqrt{2 \pi \sigma^2}} e^{-\frac{\left(t-T\right)^2}{2 \sigma^2} } e^{i \omega_c \left(t-T\right)} \end{align} Assuming that the system begins in state $\ket{\psi_0}$, the first order perturbative wavepacket after one interaction with pulse $Q$ is \begin{align} \ket{\psi_{Q} (t)} &= -\frac{i}{\hbar} \int_{-\infty}^{t} U(t, \tau) E_{Q}(\tau) \hat{\mu}(x) \ket{\psi_0 (\tau)} d \tau. \end{align} A subsequent interaction with pulse $P$, which could be the same pulse, gives the second-order perturbative wavepacket \begin{align} \ket{\psi_{Q, P} (t)} &= -\frac{i}{\hbar} \int_{-\infty}^{t} U(t, \tau) E_P (\tau) \hat{\mu}(x) \ket{\psi_Q (\tau)} d \tau. \end{align} Assuming an ensemble of identical systems distributed over a volume larger than the pulse wavelength, contributions to the pump-probe signal require phase matching. In the rotating wave approximation, these perturbative wavepackets allow construction of the pump probe signal $S_{PP}(T)$ as \begin{align*} E_{GSB1} (t, T) &= i \bra{\psi_{0} (t)} \hat{\mu} (x) \ket{\psi_{\text{pu+, pu-, pr+}} (t, T)}\\ E_{ESA} (t, T) &= i \bra{\psi_{\text{pu+}} (t)} \hat{\mu} (x) \ket{\psi_{\text{pu+, pr+}} (t, T)}\\ E_{GSB2} (t, T) &= i \bra{\psi_{\text{pu+, pr-}} (t, T)} \hat{\mu} (x) \ket{\psi_{\text{pu+}} (t)}\\ E_{SE} (t, T) &= i \bra{\psi_{\text{pu+, pu-}} (t)} \hat{\mu} (x) \ket{\psi_{\text{pr+}} (t, T)} \\ S_{i} (T) &= 2 \text{Re} \left[ \int E^*_{\text{pr+}} (t) E_i (t, T) dt \right] \\ S_{PP} (T) &= S_{GSB1} (T) + S_{GSB2} (T) + S_{ESA} (T) + S_{SE} (T) \\ \tilde{S}(\Omega) &= \int S_{PP} (T) e^{i \Omega T} d T,\\ \Gamma &= \int |S_{PP}(T)-\bar{S}_{PP}|^2 dT \end{align*} where $t$ is the laboratory time, $T$ is the time delay between the pump and probe pulses. The main object investigated in this work is $\tilde{S}(\Omega)$, the Fourier transform of $S_{PP}(T)$. When $\Omega$ is chosen equal to the vibrational frequency, we explore the changing amplitude of $\tilde{S}(\Omega)$ with pulse duration $\sigma$ for a range of systems with varying Huang-Rhys factors $S$ and non-Condonicities $\kappa$. 
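As an illustration of how these two observables are read out in practice, the short numpy sketch below evaluates $\tilde{S}(\Omega)$ at the vibrational frequency and $\Gamma$ on a sampled pump-probe trace. The synthetic damped-cosine trace and the choice of time window are only stand-ins for a simulated or measured $S_{PP}(T)$.
\begin{verbatim}
import numpy as np

def oscillation_amplitude(T, S_pp, omega):
    # |integral of S_pp(T) * exp(i*omega*T) dT| on a sampled trace.
    return np.abs(np.trapz(S_pp * np.exp(1j * omega * T), T))

def gamma(T, S_pp):
    # Frequency-integrated measure: integral of |S_pp - mean(S_pp)|^2 dT.
    return np.trapz(np.abs(S_pp - S_pp.mean()) ** 2, T)

# Synthetic stand-in trace: a weak damped oscillation at an illustrative
# 640 cm^-1 vibration (period of roughly 52 fs) on a constant background.
omega_vib = 2 * np.pi * 2.998e10 * 640 * 1e-15   # rad / fs
T = np.linspace(100.0, 2000.0, 4000)             # waiting times in fs
S_pp = 1.0 + 0.05 * np.exp(-T / 1500.0) * np.cos(omega_vib * T)
print(oscillation_amplitude(T, S_pp, omega_vib), gamma(T, S_pp))
\end{verbatim}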
We calculate these quantities using time-domain wavepacket propagation, as described previously~\cite{Mukamel1995,UFSwavepackets,Tannor2007,technique}. In this work, we consider only systems with a single electronic excited state, so they have no excited-state electronic coherence. All oscillatory effects in the pump-probe signal are vibrational, so we want to show when the modified witness successfully characterizes these oscillations as vibrational in origin. In all simulations, we choose the central frequency of the optical pulses to be resonant with the strongest transition in the calculated absorption spectrum. We begin with initial states in the lowest-energy vibrational state of the ground-state manifold and do not consider thermal or orientational averaging effects, which we do not believe are important for the present study. We do not consider dephasing or inhomogeneous broadening processes. If we consider the impulsive limit, then we can show that the pump-probe signals simplify to: \begin{align} S_{GSB2} &= E_0^4 \sum_{k} \cos \left[ \omega_{\gamma} \left( k - \eta \right) T \right] \left( \bra{\eta_{\gamma}} \mu^2 (x) \ket{k_{\gamma}} \right)^2 \\ S_{SE} &= E_0^4 \sum_{j, l} \cos \left[\omega_{\epsilon} \left( j - l \right) T \right] \\ &\times \bra{\eta_{\gamma}} \mu (x) \ket{j_{\epsilon}} \bra{j_{\epsilon}} \mu^2 (x) \ket{l_{\epsilon}} \bra{l_{\epsilon}} \mu (x) \ket{\eta_{\gamma}} \\ S_{ESA} &= 0 \\ S_{GSB1} &= E_0^4 \sum_{k} \cos \left[ \omega_{\gamma} \left( k - \eta \right) T \right] \left( \bra{\eta_{\gamma}} \mu^2 (x) \ket{k_{\gamma}} \right)^2 \end{align} Because the vibrational states are eigenfunctions of their respective vibrational Hamiltonians, the off-diagonal matrix elements that produce these oscillating terms vanish unless $\mu(x)$ is non-constant. So we do expect oscillations in the impulsive limit for non-Condon systems, but we address numerically the question of how the signal approaches the impulsive limit. \begin{figure} \includegraphics[width=1.0\columnwidth]{excitation_figure.png} \caption{In the case where there is no transition dipole variation, when the transition dipole operator is applied (with an impulsive pulse), the ground-state wavepacket's shape is perfectly replicated, as seen in a. If, however, we introduce a linear variation, the wavepacket's shape is distorted, as we see in b. This distortion is, in itself, a change in the vibrational coherence, which manifests as a demonstrable difference in the pump-probe signal, as seen in c. What is not clear is how the signal changes as the pulse width changes for different transition dipole slopes, so we set out to investigate whether this change keeps the protocol for electronic coherence detection proposed in Refs.~\cite{witness,allanWitness} from working properly. } \label{fig:physicalIllustration} \end{figure} \section{Results and Discussion} We can consider a variety of model systems, characterized by their vibrational frequencies $\omega_{\gamma}$, $\omega_{\epsilon}$, Huang-Rhys factor $S$, and non-Condonicity $\lambda$. For the commonly considered case with $\omega_\gamma\approx\omega_\epsilon$, we show that the witness protocol breaks down for the smallest values of $S$ but is robust for a larger range of $\lambda$ when $S$ is larger. We begin by considering the smallest values of $S$, where the witness is broken, and then consider larger values of $S$ and propose a modified protocol that remains robust even in the presence of some non-Condonicity. For strongly non-Condon transition dipoles, the witness does not work at any $S$.
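Before turning to specific parameter regimes, the impulsive-limit statement above can be checked directly with a few lines of numpy: in a truncated harmonic-oscillator basis, the off-diagonal elements $\bra{0}\mu^2\ket{k}$ that drive the $\cos[\omega_\gamma(k-\eta)T]$ ground-state-bleach oscillations vanish when $\mu$ is constant and appear as soon as a linear term is added. The dimensionless position operator and the value of the linear coefficient used here are illustrative choices, not the parameters of any particular simulation in this work.
\begin{verbatim}
import numpy as np

def position_operator(n_levels):
    # Dimensionless position q = (a + a_dagger) / sqrt(2) in a truncated
    # harmonic-oscillator number basis.
    a = np.diag(np.sqrt(np.arange(1, n_levels)), k=1)
    return (a + a.T) / np.sqrt(2)

def gsb_elements(lin_coeff, n_levels=12):
    # Row <0| mu^2 |k> for mu(q) = 1 + lin_coeff * q.
    q = position_operator(n_levels)
    mu = np.eye(n_levels) + lin_coeff * q
    return (mu @ mu)[0, :]

print(np.round(gsb_elements(0.0)[:4], 6))  # Condon: only k = 0 survives
print(np.round(gsb_elements(0.1)[:4], 6))  # non-Condon: k = 1, 2 appear
\end{verbatim}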
For specificity, we consider $\omega_{\gamma} = \omega_{\epsilon}= 640 \text{cm}^{-1}$, but all timescales can be easily scaled for any other vibrational frequency of interest. The worst-case scenario for the witness protocol occurs when $S=0$. In that limit, in the Condon approximation with $\omega_{\gamma} = \omega_{\epsilon}$, the only allowed optical transition is the 0-0 transition, from the lowest vibrational state of the ground state to the lowest vibrational state of the excited state. There is no coherence of any variety, so there is no oscillation in the pump-probe signal, and $\tilde{S}_{PP}(\omega_\epsilon)=0$ always. Even in the Condon approximation, this case is difficult for the witness protocol, as the signal is independent of pulse duration $\sigma$, rather than monotonically decreasing as $\sigma$ is decreased. This case does, however, illustrate the effects of a non-Condon transition dipole. In the non-Condon case, the 0-0 and 0-1 transitions are both allowed, producing a vibrational coherence in the singly-excited manifold and a nonzero $\tilde{S}_{PP}(\omega_\epsilon)$. This oscillatory pump-probe signal becomes stronger as $\sigma$ becomes smaller, which looks qualitatively similar to the case of a Condon electronic coherence, as shown for several values of $\kappa$ in Fig.\ \ref{fig:tunedZero}. That is, this witness technique cannot distinguish an electronic coherence from a non-Condon vibrational-only system. \begin{figure} \includegraphics[width=1.0\columnwidth]{S0_samefrequency_kappa_comparison.png} \caption{Pump-probe oscillations at the vibrational frequency, $\tilde{S}_{PP} ( \omega)$, for a system with $S=0$ and $\omega_{\gamma} = \omega_{\epsilon} = \omega = $ 640 cm$^{-1}$, for various values of $\kappa$. The signal is log scaled due to the signal's fourth-order dependence on $\mu(x)$; small differences in $\kappa$ make very large differences in signal. In all of these systems, an experimenter using the proposed protocol would conclude that an electronic coherence exists in the system, even though there is not even a vibrational coherence in the ground state absent a non-constant transition dipole. } \label{fig:tunedZero} \end{figure} We now consider $S=0.005$, which is close to photosynthetic Huang-Rhys factors~\cite{typicalHRFforPhotosynthesis}, with results in Fig.\ \ref{fig:s_0p005}. For this small $S$, the witness protocol essentially does not work for any $\lambda>10^{-3}$. \begin{figure} \includegraphics[width=1.0\columnwidth]{W_S_0p005_contour.png} \caption{$\Gamma$ as a function of $\lambda$ and the pulse width $\sigma$ in a system where $S=0.005$, $\omega_e = 0.9 \omega_g$, and $\omega_g = 640 \text{cm}^{-1}$. $T_w$ is the same as defined in Ref.~\cite{allanWitness}: the peak of the signal before it goes down again. Some traces in $\lambda$ also have a local minimum where the signal turns up again; we plot those points as $\sigma_{up}$. In the case where the signal only goes up as the pulse width goes down, $T_w$ is set to $\sigma=0$.
}
\label{fig:s_0p005}
\end{figure}
We now consider $S=0.2$, with results in Fig.\ \ref{fig:s_0p2}.
For this larger value of $S$, for $\lambda<0.1$, the witness signature is apparent in the data, with a declining pump-probe signal as $\sigma$ is reduced, until the pulse duration $\sigma_{up}$ is reached.
For $\sigma<\sigma_{up}$, the signal again increases, due to the non-Condonicity.
This upturn occurs at short pulse durations, after the usual peak structure of a negative witness is well established as $\sigma$ decreases from large values, through the peak at $T_w$, down to $\sigma_{up}$.
We propose a modified witness protocol that declares a negative witness (i.e., vibrational-only coherence) in experiments that observe a peak at $T_w$ and a short-duration upturn at $\sigma_{up}$.
We consider such a structure to be observed if the signal $\tilde{S}_{PP}(\omega)$ at $T_w$ is more than twice the value of the signal at $\sigma_{up}$; we denote the ratio of the signal at the upturn to the signal at $T_w$ by $w$, so that this criterion reads $w \leq 0.5$.
In such a case, the experiment has determined both the vibrational character of the relevant coherence as well as the presence of a non-Condon contribution to the transition dipole.
Further methods for detecting and measuring non-Condon transition dipoles can be found in Ref.~\cite{myDetectingNonCondonPaper}.
With this modified protocol in mind, we construct Fig.\ \ref{fig:working_regions_W}, where we show the ratio $w$ for 5 values of $S$.
With our criterion that $w \leq 0.5$, there are quite a few regions where the proposed protocol will still work, but even more where it will give false positives.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{working_regions_W.png}
\caption{The ratio $w$ of the oscillation amplitude at the minimum pulse width to the oscillation amplitude at the local maximum (or witness time $T_w$) for various values of $S$. The region shaded in green is where our modified protocol does not produce a false positive. }
\label{fig:working_regions_W}
\end{figure}
Reference \cite{allanWitness} proposed using the entire frequency-integrated pump-probe signal $\Gamma=\int |S_{PP}(T)-\bar{S}_{PP}|^2 dT$, where $\bar{S}_{PP}$ is the average value of the pump-probe signal and the integral is taken only for $T>3\sigma_{max}$, to avoid pulse-overlap effects, where $\sigma_{max}$ is the largest pulse duration studied in the experiment.
In this work we have considered the frequency-resolved pump-probe signal $\tilde{S}_{PP}(\omega)$, but the analysis for $\Gamma$ is similar and gives the same regions of $(S,\lambda)$ where the modified witness protocol functions.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{W_S_0p2_contour.png}
\caption{$\Gamma$ as a function of $\lambda$ and the pulse width $\sigma$ in a system where $S=0.2$, $\omega_e = 0.9\,\omega_g$, and $\omega_g = 640~\text{cm}^{-1}$. }
\label{fig:s_0p2}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{W_S_0p4_contour.png}
\caption{$\Gamma$ as a function of $\lambda$ and the pulse width $\sigma$ in a system where $S=0.4$, $\omega_e = 0.9\,\omega_g$, and $\omega_g = 640~\text{cm}^{-1}$.}
\label{fig:s_0p4}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{W_S_1p0_contour.png}
\caption{$\Gamma$ as a function of $\lambda$ and the pulse width $\sigma$ in a system where $S=1.0$, $\omega_e = 0.9\,\omega_g$, and $\omega_g = 640~\text{cm}^{-1}$. }
\label{fig:s_1p0}
\end{figure}
To investigate the dips in signal seen in Figs.\ \ref{fig:s_0p005} and \ref{fig:s_0p2}, we introduce Fig.\ \ref{fig:addedSignals}, where we look at $S=0.2$ and $\kappa=0.037$.
When $\kappa$ is set to zero, we get a decreasing oscillatory signal that approaches zero as the laser pulse width approaches zero, just as predicted in Ref.~\cite{allanWitness}.
If, however, we turn on the nonzero $\kappa$, we see that while there is a dip at a certain pulse width, the signal increases as the pulse width approaches zero.
Indeed, even if we set $S=0.0$ but keep the same value of $\kappa$, and thus have a system that should have almost no oscillations, there is still an increasing oscillation signal.
The shape of these three curves suggests an interference between the tendency of a vibrational-coherence signal to decrease as the pulse width approaches zero and the tendency of an oscillation induced by a linear transition dipole variation to increase in that same limit.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{comparison_omega_g.jpg}
\caption{Ground-state oscillations $\tilde{S}_{PP} ( \omega_{\gamma})$ for three systems: $\kappa=0.037$, $S=0$; $\kappa=0.037$, $S=0.2$; and $\kappa=0$, $S=0.2$. This shows roughly that the effects of a non-Condon monomer without vibrational displacement add together with the effects of a Condon monomer with displacement to give the result for a non-Condon monomer with a vibrational displacement.}
\label{fig:addedSignals}
\end{figure}
\section{Conclusion}
The proposed witness for electronic coherences \cite{witness,allanWitness} relied on the Condon approximation.
We have found that in vibrational systems with very small Huang-Rhys factors $S$, small non-Condon effects can give false positives, where a system with no electronic coherence is incorrectly determined to have one.
For larger $S$, the witness protocol can be successfully used for larger values of $\lambda$.
It is possible that modified experimental procedures could increase the domain of ($S$, $\kappa$) in which the witness protocol will function.
In order to apply the witness protocol, one must be able to establish, by estimation or measurement, that the non-Condon effects are sufficiently small.
Methods to measure non-Condonicity are proposed in Ref.~\cite{myDetectingNonCondonPaper}.
\section{Acknowledgments}
We acknowledge support from the Center for Excitonics, an Energy Frontier Research Center funded by the U.S. Department of Energy under award DE-SC0001088 (Solar energy conversion process) and the Natural Sciences and Engineering Research Council of Canada.
The authors also wish to thank Doran Bennet for his useful commentary on this manuscript.
\section{Introduction}
\subsection{The purpose of the report}
The purpose is to give the reader information about the development process of the application. The intent is for the reader to understand how the project was solved. This includes the whole project from start to finish, including the requirements and the thought process behind the decisions that were made during development.
\subsection{Scope}
This report contains the information about the development process for the application MetIma. The name MetIma is a combination of the words \textit{metadata} and \textit{image}. The application is primarily an imaging application which reads the metadata of images and saves it in an accessible database. This report covers all the different elements of the process, as well as how the process was structured.

As for the application itself and its position in the market, MetIma enters a very competitive space. With already well-established similar applications like Picasa and Photoscape, MetIma will have a hard time acquiring users that already use these applications. At the same time, one of MetIma's differentiating features is that it is very lightweight and tailored towards users who prefer a simple, easy-to-use interface. This could entice users to switch over to MetIma if it offers all the functionality they need in such an application. MetIma also has the advantage of being commissioned by a customer, so it has a guaranteed user base.
\newpage
\subsection{Definitions, Acronyms and Abbreviations}
In the report, many terms are used that might not have an obvious meaning by themselves. Therefore, a list of these terms and their explanations has been included:
\begin{itemize}
    \item IDE - \textbf{I}ntegrated \textbf{d}evelopment \textbf{e}nvironment.
    \item NTNU - Norwegian University of Science and Technology. (\textbf{N}orges \textbf{t}eknisk-\textbf{n}aturvitenskapelige \textbf{u}niversitet)
    \item WCAG - \textbf{W}eb \textbf{C}ontent \textbf{A}ccessibility \textbf{G}uidelines
    \item MVP - \textbf{M}inimum \textbf{v}iable \textbf{p}roduct
    \item GUI - \textbf{G}raphical \textbf{u}ser \textbf{i}nterface
    \item VoIP - \textbf{V}oice over \textbf{I}nternet \textbf{P}rotocol
    \item DevOps - Software development (Dev) and Information-technology operations (Ops)
    \item ORM - \textbf{O}bject-\textbf{r}elational \textbf{m}apping
\end{itemize}
\subsection{Overview}
This report explains what the project is and how it was solved. There is also additional information regarding the application and its functions. The report goes into detail about how the development team collaborated and solved problems during the project.
% design and analysis
%
% DESIGN AND ANALYSIS: Objective of the study, data source, statistical model/tools/methodology,
% validity of the assumptions if any, results of the study (graphs, tables will go here),
% results discussion, (interpretations/conclusions/inferences)
%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
% none

\section{Design and Analysis}
To best model the dichotomous response variable, Y\textunderscore HighGradeCancer, in the Prostate Cancer case study, I will employ a multiple logistic regression model, where 1 indicates high grade cancer and 0 indicates not high grade cancer.
\par
In statistics, if \(\pi = f(x)\) is a probability then \(\frac{\pi}{1-\pi}\) is the corresponding \textit{odds}, and the \textbf{logit} of the probability is the logarithm of the odds:
\begin{equation}
    logit(\pi) = log(\frac{\pi}{1-\pi})
\end{equation}
Now, simple logistic regression means assuming that \(\pi(x)\) is related to \(\beta_0 + \beta_1x\) (the \textit{logit response function}) by the logit function. By equating \(logit(\pi)\) to the logit response function, we understand that the logarithm of the odds is a linear function of the predictor. In particular, the slope parameter \(\beta_1\) is the change in the log odds associated with a one-unit increase in \textit{x}. This implies that the odds itself changes by the multiplicative factor \(e^{\beta_1}\) when \textit{x} increases by 1 unit.
\begin{equation}
    log(\frac{\pi}{1-\pi}) = \beta_0 + \beta_1x
\end{equation}
From here, straightforward algebra will then show the simple logistic regression model:
\begin{equation}
    E[Y] = \pi(x) = \frac{e^{\beta_0 + \beta_1x}}{1+e^{\beta_0 + \beta_1x}}
\end{equation}
Next, this simple logistic regression model is easily extended to more than one predictor variable by inclusion of the following two vectors, in matrix notation:
\[
\boldsymbol{\beta} =
\begin{bmatrix}
    \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1}
\end{bmatrix}
\quad
\textbf{X} =
\begin{bmatrix}
    1 \\ X_1 \\ X_2 \\ \vdots \\ X_{p-1}
\end{bmatrix}
\]
With this notation, the simple logistic response function (Eqn. 3) extends to the multiple logistic response function as follows:
\begin{equation}
    E[Y] = \pi(\textbf{X}) = \frac{exp(\textbf{X}'\boldsymbol{\beta})}{1+exp(\textbf{X}'\boldsymbol{\beta})}
\end{equation}
Fitting the logistic regression to the sample data requires that the parameters \(\beta_0\), \(\beta_1\),\(\cdots\), \(\beta_{p-1}\) be estimated. This will be done using the maximum likelihood technique provided within the statistical packages of both \textbf{R} and \textit{Python}.
\subsection{Data Transformations and Standardization}
Variable transformation is an important technique for building robust logistic regression models, and appropriate transformations of continuous variables are necessary to optimize the model's predictive power. Because the predictors are linear in the log of the odds, it is often helpful to transform the continuous variables to create a more linear relationship.
\par
The raw data collected contained several predictors with high skewness values. A few concerning features were determined to be PSA Level (skewness = 4.39), Cancer Volume (skewness = 2.18), and Weight (skewness = 7.46).
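The transformation strategy described in the next paragraph, a log transform of the heavily skewed predictors followed by standardization, can be sketched in \textit{Python} roughly as follows. The file name and column names below are illustrative assumptions rather than the exact ones used for this study.
\begin{verbatim}
import numpy as np
import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("prostate.csv")
skewed = ["PSALevel", "CancerVol", "Weight"]

print(df[skewed].skew())          # raw skewness (e.g. PSA Level ~ 4.4)

# Log-transform the heavily skewed predictors ...
df[skewed] = np.log(df[skewed])

# ... then standardize every predictor (zero mean, unit variance).
predictors = df.columns.drop("Y_HighGradeCancer")
df[predictors] = (df[predictors] - df[predictors].mean()) / df[predictors].std()

print(df[skewed].skew())          # skewness after the transformation
\end{verbatim}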
As a preprocessing step to reduce skewness, I elected to transform these continuous predictor variables using the log-transformation, and to standardize \textit{all} the data on top of that. The standardization step was used to normalize the data, and did not change the shape of the underlying distributions of the predictor variables.
\par
The finalized data skewness is summarized directly below in Figure 1.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.9]{final_skewness}
    \caption{Finalized Skewness Values of Transformed Predictor Variables.}
\end{figure}
Additionally, I've included the histogram of PSA Level vs. Cancer Volume in Figure 2 - a helpful visual for the two predictors which carried the most significance through much of my analysis, as we soon shall see. Notice how the distributions exhibit no notable skewness, are quite symmetrical, and are centered on zero.
\begin{figure}[H]
    \centering
    \includegraphics{psalevel_cancervol_skewness}
    \caption{Finalized PSALevel vs. CancerVol Histogram.}
\end{figure}
\subsection{Model Selection}
The data of 97 individual men in the Prostate Cancer sample was split at 80\% into train (model-building) and test (validation) sets. The training set is a random 76 observations and was used for fitting the model, and the remaining 21 cases were saved to serve as a validation data set. Figure 3 shows a portion of the finalized and processed training data, with the variables in columns 1-8.
\par
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.7]{train_data_python}
    \caption{Portion of Processed Model-Building Data Set - \textit{Python} Dataframe.}
\end{figure}
\pagebreak
\subsubsection{Best Subsets Procedure}
The procedure outlined here will help identify a group of subset models that give the best values of a specified criterion. Time-saving algorithms have been developed for this technique, which can find the most promising models without having to evaluate all \(2^{p-1}\) candidates. The use of the best subset procedure is based on the \textit{AIC\textsubscript{p}} criterion, where promising models will yield a relatively small value.
\par
The minimized \textit{AIC\textsubscript{p}} stepwise output given by \textbf{R} is provided in Figure 4 below.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.9]{best_subset}
    \caption{Full Linear Model \textit{AIC\textsubscript{p}} Best Subset Results - \textbf{R} Output.}
\end{figure}
In this procedure, I instructed \textbf{R} to iterate ``backwards'' through all 7 predictor variables, and it was determined that \textit{AIC\textsubscript{p}} was minimized for \(p=3\). In particular, the results reveal that the best two-predictor model for this criterion is based on \textit{PSA Level} and \textit{Cancer Volume}. The \textit{AIC\textsubscript{p}} was minimized to 50.63, with a Null Deviance equal to 72.61 and a Residual Deviance equal to 44.63.
\subsubsection{Model Fitting}
A first-order multiple logistic regression model with two predictor variables was considered to be reasonable by \S4.2.1:
\begin{equation}
    \pi(\textbf{X}) = \frac{exp(\textbf{X}'\boldsymbol{\beta})}{1+exp(\textbf{X}'\boldsymbol{\beta})} = [1+exp(-\textbf{X}'\boldsymbol{\beta})]^{-1}
\end{equation}
where:
\begin{equation}
    \textbf{X}'\boldsymbol{\beta} = \beta_0+\beta_1X_1+\beta_2X_2
\end{equation}
This model was then fit by the method of maximum likelihood to the data from the 76 random training cases. Results are summarized in the Figure 5 \textit{Python} output below.
Provided in the output are the estimated coefficients, their standard errors, \textit{z}-scores, \textit{p}-values, and the accompanying 95\% confidence intervals.
\par
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.9]{model_fit_python}
    \caption{Maximum Likelihood Estimates of Logistic Regression Function - \textit{Python} Output.}
\end{figure}
Thus, the estimated logistic response function is:
\begin{equation}
    \hat{\pi}=[ 1+ exp(-2.6867 + 1.0577X_1 + 1.5502X_2)]^{-1}
\end{equation}
\textbf{Note}: Although the PSALevel predictor is not significant at the 5\% level (\textit{p}-value = 0.0879), I did find it necessary to retain it in the model. When removed, the Residual Deviance score of 44.628 from Figure 4 rose to a value of 48.123. Therefore, I've deemed it a valuable and impactful variable for achieving high model accuracy, and I have not removed it from this subset of predictors.
\par
With the estimated logistic regression equation now developed, it remains to consider second-order options and make adjustments if required, analyze the residuals and influential observations, test the goodness of fit, apply a prediction rule for new observations, and finally apply the final model to the validation data and evaluate the results.
\subsubsection{Geometric Interpretation}
When fitting a standard multiple logistic regression model with two predictors, the estimated regression shape is an S-shaped surface in three-dimensional space. Figure 6 displays a three-dimensional plot of the estimated logistic response function that depicts the relationship between the diagnosis of high grade prostate cancer (\textit{Y}, the binary outcome) and two continuous predictors, PSA Level (\textit{X}\textsubscript{1}) and Cancer Volume (\textit{X}\textsubscript{2}).
\par
This surface increases in an approximately linear fashion with increasing values of PSA Level and Cancer Volume, but levels off and is nearly horizontal for very small and large values of these predictors.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.6]{3d_plot}
    \caption{Three-Dimensional Fitted Logistic Response Surface.}
\end{figure}
\subsubsection{Second-Order Predictors}
Occasionally, the first-order logistic model may not provide a sufficient fit to the data, and the inclusion of higher-order predictors may be considered. I'll conclude my model development stage by attempting to fit the Prostate Cancer data to a \textit{polynomial logistic} regression model of the second order, and analyze the results.
\par
For simplicity, a 2\textsuperscript{nd}-order polynomial model in \textit{two} predictors has the logit response function:
\begin{equation}
    logit(\pi) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \beta_{11}x_1^2 + \beta_{22}x_2^2 + \beta_{12}x_1x_2
\end{equation}
\noindent and can be extended to more predictors by the inclusion of additional variables, their coefficients, and the accompanying cross terms. Please recall that the Prostate Cancer data set considers 7 predictors.
\par
In many situations the true regression function has one or more peaks or valleys, and in such cases a polynomial function can provide a satisfactory approximation. However, a polynomial fit was not successful here, as indicated by non-significant \textit{p}-values across all predictors at 5\% significance (Figure 7).
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.9]{poly_output}
    \caption{Logistic Regression Fit for Second-Order Model - \textbf{R} Output.}
\end{figure}
Additionally, my preliminary scatter plot analysis did not indicate any reason to believe a polynomial fit would be suitable in this study. For example, PSALevel was a major focus of this study, and I've provided the scatter plot below in Figure 8. Additional scatter plots are provided in the Appendix, \S7.1.
\begin{figure}[H]
    \centering
    \includegraphics{psalevel_scatterplot}
    \caption{PSALevel vs Y\_HighGradeCancer Scatterplot - Train Data.}
\end{figure}
With no evidence suggesting that a model with second-order predictors could be fit successfully, I will move forward with my analysis of the previously developed multiple logistic regression model.
\subsection{Analysis of Residuals}
In this section I will discuss the analysis of residuals and the identification of any influential observations for logistic regression. Due to the nature of logistic regression, and the fact that non-constant variance is always present in this setting, I will focus only on the detection of model inadequacy.
\pagebreak
\subsubsection{Logistic Regression Residuals}
If the logistic regression model is correct, then \(E[Y_i]=\pi_i\) and it follows that:
\begin{equation}
    E[Y_i-\hat{\pi}_i]=E[e_i]=0
\end{equation}
This suggests that if the model is correct, a lowess smooth of the plot of residuals against the linear predictor \(\hat{\pi}^{'}_i\) should result in approximately a horizontal line with zero intercept. Any significant departure from this suggests that the model may be inadequate. Shown in Figure 9 are the Pearson residuals plotted against the linear predictor, with the lowess smooth superimposed.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.70]{residual_plot}
    \caption{Pearson Residual Plot with Lowess Smooth.}
\end{figure}
Looking at the plot, the lowess smooth adequately approximates a line having zero slope and zero intercept, and I conclude that no significant model inadequacy is apparent.
\subsubsection{Influential Observations}
To aid in the identification of influential observations, I will use the \textbf{Cook's Distance} statistic, \(D_i\), which measures the standardized change in the linear predictor \(\hat{\pi}_i\) when the \textit{i}th case is deleted. Cook's distances are listed in the \textbf{R} Appendix \S7.2 for a portion of the Prostate Cancer testing data.
\par
The plot of distances in Figure 10 identifies observation 90 as being the most outlying in the \textit{X} space, and therefore potentially influential - observations 37 and 91 also show relatively high values. Observation 90 was temporarily deleted and the logistic regression fit was obtained. The results were not particularly different from those obtained from the full test set, and the observation was retained. \textbf{Note}: I additionally and temporarily removed observations 37 and 91 and obtained a fit to the updated final model. The results were not particularly different, and those records were also retained. Thus, no changes to the model are yet necessary.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.55]{cooks_distance}
    \caption{Index Plot of Cook's Distances.}
\end{figure}
\subsection{Goodness Of Fit Evaluation}
The appropriateness of the fitted logistic regression model needs to be examined before it is accepted for use.
In particular, we need to examine whether the estimated response function for the data is monotonic and sigmoidal in shape, as logistic response functions are. Here I will employ the Hosmer-Lemeshow test, which is useful for unreplicated data sets such as the Prostate Cancer data. The test can detect major departures from a logistic response function, and the alternatives of interest are as follows:
\begin{align}
\begin{split}
    H_0: E[Y]= [1+exp(-\textbf{X}'\boldsymbol{\beta})]^{-1} \\
    H_1: E[Y] \neq [1+exp(-\textbf{X}'\boldsymbol{\beta})]^{-1}
\end{split}
\end{align}
\subsubsection{Hosmer-Lemeshow}
The Hosmer-Lemeshow goodness of fit procedure consists of grouping the data into classes with similar fitted values \(\hat{\pi}_i\), with approximately the same number of cases in each class. Once the groups are formed, the Hosmer-Lemeshow goodness of fit statistic is calculated by using the Pearson chi-square test statistic of observed and expected frequencies. The test statistic is known to be well approximated by the chi-square distribution with \(c-2\) degrees of freedom.
\begin{equation}
    \chi^2 = \sum_{j=1}^{c} \sum_{k=0}^{1} \frac{(O_{jk}-E_{jk})^2}{E_{jk}}
\end{equation}
The output from \textbf{R} using 5 groups is shown in Figure 11 below.
\begin{figure}[H]
    \centering
    \includegraphics{gof_detail}
    \includegraphics{gof_results}
    \caption{Hosmer-Lemeshow Goodness of Fit Test for Logistic Regression Function.}
\end{figure}
Large values of the test statistic X\textsuperscript{2} indicate that the logistic response function is not appropriate. The decision rule for testing the alternatives (Eqn. 10) when controlling the level of significance at \(\alpha\) therefore is:
\begin{align}
\begin{split}
    \textrm{If X\textsuperscript{2}} \leq \chi^2(1-\alpha; c-2)\textrm{, conclude } H_0 \\
    \textrm{If X\textsuperscript{2}} > \chi^2(1-\alpha; c-2)\textrm{, conclude } H_1
\end{split}
\end{align}
Thus, for \(\alpha=0.05\) and \(c-2=5-2=3\), we require \(\chi^2(0.95; 3)=7.81\). Since \(X^2=0.838\leq7.81\), we conclude \textit{H}\textsubscript{0}, that the logistic response function is appropriate. The \textit{p}-value of the test is 0.8403.
\subsection{Development of ROC Curve}
Multiple logistic regression is often employed for making predictions for new observations. The \textit{receiver operating characteristic} (ROC) \textit{curve} plots \(P(\hat{Y}=1 | Y=1)\) as a function of \(1-P(\hat{Y}=0 | Y=0)\) and is an effective way to graphically display prediction rule information and possible cutoff points.
\par
The ``True Positive'' \textit{y}-axis on an ROC curve is also known as \textit{sensitivity}, and the ``False Positive'' \textit{x}-axis is 1-\textit{specificity}. Figure 12 below exhibits the ROC curve for my model (Eqn. 7) for all possible cut points between 0 and 1.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.45]{roc}
    \caption{ROC Curve.}
\end{figure}
\subsubsection{Prediction Rule}
In the training data set (which represented a random 80\% of the 97 provided observations), there were 14 men who were observed to be high grade cancer patients; hence the estimated proportion of persons who had high grade cancer is \(14/76=0.184\). This proportion can be used as the starting point in the search for the best cutoff in the prediction rule.
\par
Thus, if \(\hat{\pi}_h\) represents a newly fitted observation, my first prediction rule investigated is:
\begin{equation}
    \textrm{Predict 1 if } \hat{\pi}_h \geq 0.184\textrm{; predict 0 if } \hat{\pi}_h < 0.184
\end{equation}
The Confusion Matrix of Table 1 below provides a summary of the number of correct and incorrect classifications based on the initial prediction rule (Eqn. 13). Of the 62 men without high grade cancer, 13 would be incorrectly predicted to have high grade cancer, an error rate of 21.0\%. Furthermore, of the 14 persons with high grade cancer, 1 would be incorrectly predicted to not have high grade cancer, or 7.1\%. Altogether, \(13+1=14\) of the 76 predictions would be incorrect, so that the prediction error rate for the rule is \(14/76=0.184\) or 18.4\%. Coincidentally, the model exactly matches our training set proportions with the current prediction rule.
\par
\begin{table}[H]
    \centering
    \begin{tabular}{ |c||c|c||c| }
        \hline
        \multicolumn{4}{|c|}{Prediction Rule Eqn. 13} \\
        \hline\hline
        True Classification&\(\hat{Y}=0\)&\(\hat{Y}=1\)&Total\\
        \hline
        \(Y=0\)&49&13&62\\
        \(Y=1\)&1&13&14\\
        \hline\hline
        Total&50&26&76\\
        \hline
    \end{tabular}
    \caption{Classification based on Logistic Response Function Eqn. 7 and Prediction Rule Eqn. 13.}
\end{table}
\pagebreak
With this baseline understood, it is straightforward to choose a stronger cutoff point by utilizing the ROC curve of Figure 12. As detailed above, the false-positive rate is not ideal at 21.0\% - there are too many cases where a man may opt for additional screening and treatment, even invasive actions, because he believes he has high grade prostate cancer. It will be wise to now reference the ROC curve to choose a better prediction cutoff, while not significantly worsening the false-negative rate.
\par
Looking at Figure 12, a step occurs at 0.20 and I use this value for my new cutoff candidate. Thus, my updated prediction rule is stated as follows:
\begin{equation}
    \textrm{Predict 1 if } \hat{\pi}_h \geq 0.20\textrm{; predict 0 if } \hat{\pi}_h < 0.20
\end{equation}
and the effects of this change can be summarized by the Confusion Matrix in Table 2 below.
\begin{table}[H]
    \centering
    \begin{tabular}{ |c||c|c||c| }
        \hline
        \multicolumn{4}{|c|}{Prediction Rule Eqn. 14} \\
        \hline\hline
        True Classification&\(\hat{Y}=0\)&\(\hat{Y}=1\)&Total\\
        \hline
        \(Y=0\)&52&10&62\\
        \(Y=1\)&2&12&14\\
        \hline\hline
        Total&54&22&76\\
        \hline
    \end{tabular}
    \caption{Classification based on Logistic Response Function Eqn. 7 and Prediction Rule Eqn. 14.}
\end{table}
Here, of the 62 men without high grade cancer, 10 are incorrectly predicted, an error rate of 16.1\%. Continuing, of the 14 men with high grade cancer, 2 would be incorrectly predicted, an error rate of 14.3\%. Altogether, the updated prediction rule (Eqn. 14) now provides a total error rate of \(12/76=0.158\) or 15.8\%. Thus, the model accuracy has increased and the false-positive rate is significantly better, which is intended to reduce unnecessary financial stress across the healthcare economy.
\subsection{Model: Strengths and Weaknesses}
\subsubsection{Strengths}
-The two predictors which build the final logistic model are PSA Level and Cancer Volume, and both are adequately correlated with the dependent (outcome) variable Y\_HighGradeCancer - their correlation values are 0.489 and 0.493, respectively. In fact, they are more correlated with the dependent variable than any other predictors in the data set.
An easy-to-read heat-map version of the correlation matrix is provided in Figure 13 below, with a color legend given in the upper-left corner.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.7]{corr_map}
    \caption{Correlation Heatmap - Train Data}
\end{figure}
-The final model has an excellent accuracy score of 84.2\% against the training data. If this model also performs well in the validation step, then deploying such a model could help men without high grade cancer be categorized as such, so that they do not pursue unnecessary invasive testing and do not inflate costs within the healthcare system. Also, by properly identifying the men who do have high grade cancer, a treatment plan can be devised sooner, and it can be devised using only PSA Level and Cancer Volume information rather than invasive testing. \\
\subsubsection{Weaknesses}
-An initial concern while building this model occurs at the level of the provided raw data, namely the existence of multicollinearity. Figure 14 below is the Correlation Matrix of the two final predictors which built the final logistic model: PSA Level and Cancer Volume.
\begin{figure}[H]
    \centering
    \includegraphics[scale=1.0]{corr_matrix}
    \caption{PSALevel vs. CancerVol Correlation Matrix - Train Data}
\end{figure}
\pagebreak
As shown, PSA Level and Cancer Volume have a moderate correlation value of 0.738 in the full data set. One primary danger in designing models with multicollinearity is that small changes to the input data can lead to large changes in the model, which can further lead to over-fitting. Therefore, this logistic model may be considered mildly ``noisy'', sensitive, and not particularly robust. \\
-The Goodness of Fit Evaluation of \S4.4 deserves some concern regarding the Pearson chi-square test. As described previously, the Hosmer-Lemeshow procedure was utilized to determine the goodness of fit, and the test statistic is known to be well approximated by the chi-square distribution with \(c-2\) degrees of freedom (Eqn. 11). However, in view of the \textbf{R} output (Figure 11) with 5 groupings, the expected values (\textit{y\textsubscript{1}}) returned were: 0, 0, 1, 4, 9. Because many values are less than 5, and two of the expected values equal 0, the conditions for a chi-square test may be violated, and it may not be an appropriate test procedure here. At the very least, the results of the Hosmer-Lemeshow test should be accepted carefully. \\
-The final model may produce high false-negative rates. In view of Figure 15-A we see that the first prediction cutoff of 0.184 produced 1 count of false-negatives (a \(7.1\%\) error rate) and an overall model accuracy of \(81.6\%\). After ROC analysis, the final prediction cutoff I've employed is 0.20. By Figure 15-B this rule produces 2 counts of false-negatives (a \(14.3\%\) error rate) and an overall model accuracy of 84.2\%. Thus the overall model accuracy has improved by 2.6 percentage points by correctly predicting more non high grade cancer cases, but the false-negative rate has concurrently doubled. Because correctly identifying high grade cancer patients may be considered most important, this arrangement may be a drawback of the final model.
\begin{figure}[H]
    \centering
    \begin{subfigure}{.5\textwidth}
        \centering
        \includegraphics[width=.8\linewidth]{confusion_matrix_PR1}
        \caption{Confusion Matrix - 0.184 Cutoff Rule.}
        \label{fig:sub1}
    \end{subfigure}%
    \begin{subfigure}{.5\textwidth}
        \centering
        \includegraphics[width=.8\linewidth]{confusion_matrix_PR2}
        \caption{Confusion Matrix - 0.20 Cutoff Rule.}
        \label{fig:sub2}
    \end{subfigure}
    \caption{Classification Based on Logistic Response Function (Eqn. 7) and Prediction Rules (Eqn. 13) and (Eqn. 14).}
    \label{fig:test}
\end{figure}
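For readers who wish to reproduce this cutoff comparison, a rough \textit{Python} sketch of the fitting and classification steps is given below. It is an illustrative outline only: the file name and column names are assumptions, and the exact estimates and counts will depend on the particular 80\% training split.
\begin{verbatim}
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import confusion_matrix, roc_curve

# Hypothetical file and column names, for illustration only.
train = pd.read_csv("prostate_train.csv")
X = sm.add_constant(train[["PSALevel", "CancerVol"]])
y = train["Y_HighGradeCancer"]

fit = sm.Logit(y, X).fit()        # maximum likelihood estimates
pi_hat = fit.predict(X)           # fitted probabilities on the training set

for cutoff in (0.184, 0.20):      # prediction rules (Eqn. 13) and (Eqn. 14)
    y_pred = (pi_hat >= cutoff).astype(int)
    print(cutoff)
    print(confusion_matrix(y, y_pred))

fpr, tpr, thresholds = roc_curve(y, pi_hat)   # points along the ROC curve
\end{verbatim}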
\subsection{Period 2: Drop and disappearance of the main haze layer around the Vernal Equinox (2008-2012) - $L_s=\ang{340}-\ang{30}$}

A precursor sign of the drop of the detached haze can be seen in March 2008 (Fig.~\ref{fig:dhl_2008_2012}a).
The main haze started an initial contraction around \ang{35}S, where the depleted zone is almost 75 km thick at its maximum.
In January 2009, the main haze continued to fall from 425 km down to 375 km, while the detached haze layer remained around 500 km (Fig.~\ref{fig:dhl_2008_2012}b).
After the drop of the main haze in early 2009, the detached haze started its own descent in June 2009, just before the equinox (Fig.~\ref{fig:dhl_2008_2012}c).
This delay in collapse increased the apparent thickness of the depletion zone between the two haze layers.

\begin{figure*}[!ht]
\plotone{Fig/Lat_beta-2008_2012}
\includegraphics[width=.5\textwidth]{Fig/Extinction_colorbar}
\caption{Same as Fig.~\ref{fig:dhl_2004_2008}, for 8 images taken between 2008 and 2012 ($L_s=\ang{340}-\ang{30}$), showing the drop and disappearance of the DHL. The color scheme is identical to that of Fig.~\ref{fig:dhl_2004_2008} to allow direct comparisons. In this case, the altitude range is extended down to 300 km (where the model is less reliable).}
\label{fig:dhl_2008_2012}
\end{figure*}

As for the main haze, the detached haze collapsed first in the southern hemisphere, from 500 km to 425 km, and then at the equator and in the northern hemisphere (Fig.~\ref{fig:dhl_2008_2012}c).
This is associated with the circulation turnover affecting first the ascending branch in the summer hemisphere.
With time, the detached haze gradually settled in altitude and finally disappeared below 300 km.
Later ISS observations made with the Blue and Green filters visually show that the main DHL continued its descent below 300 km during 2011.
Since our current model was only tested for UV observations, we were not able to follow its merging with the main haze.
The complete collapse of the detached haze, as it appeared in the UV3 filter, is displayed in Fig.~\ref{fig:dhl_2008_2012}c to Fig.~\ref{fig:dhl_2008_2012}h.
We note that the column extinction is smaller at the equator than at other latitudes, and this is the case during the entire period of the collapse.
During the fall, a second thin detached haze layer, at planetary scale, is evident above the collapsing detached haze layer.
In January 2010 (Fig.~\ref{fig:dhl_2008_2012}e), the detached haze layer was located between 375 and 400 km.
We can still see a double deck of haze, and this time the detached haze appears higher at the equator than in the two hemispheres, producing an arch.
The haze peak extinction has globally increased by a factor of two due to sedimentation into denser layers.
In August 2010 (Fig.~\ref{fig:dhl_2008_2012}f), one year after equinox, the detached haze layer continued its drop, down to 375 km around \ang{40}S and 400 km at the equator.
It gained in complexity, with multiple secondary layers up to 520 km.
The detached haze formed a remarkable arch, with a difference of about 50 km in altitude between the equator and the poles, as previously noticed by~\cite{West2011}.
This observation and the next one correspond to the same seasonal phase as the Voyager flybys ($L_s=\ang{8}$ and \ang{18}) and can be compared with them directly.
We now know that this season was a time of rapid change, and that the Voyager probes observed transient situations.
Voyager also observed the detached haze higher near the equator than elsewhere \citep{Rages1983, Rannou2000}.
Due to orbital constraints and mission planning, the next observation was made in September 2011 (Fig.~\ref{fig:dhl_2008_2012}g).
The detached haze layer was, at that time, well below the level of the polar hoods.
Again, secondary detached layers show up as high as 470 and 520 km.
The south polar hood was not present in August 2010 (Fig.~\ref{fig:dhl_2008_2012}f) and appeared in less than 13 months.
This indicates that the circulation started to reverse around the equinox and that the southward circulation sent haze to the southern polar region, producing a polar hood.
The change in haze distribution is a very good indication of the timing of the equinoctial circulation turnover, as discussed later.
We note that the strong haze depletion at 300 km and between \ang{30}S and \ang{20}N is real (and visible in the I/F profiles) but may be exaggerated at \ang{20}N due to the limits of the retrieval procedure.
At this altitude level, Titan's atmosphere is opaque to UV radiation (see Fig.~\ref{fig:model_uncertainties}) and does not allow us to follow the main depletion below this altitude.
The last UV3 image we have showing a detached haze layer was taken in February of 2012 (Fig.~\ref{fig:dhl_2008_2012}h).
At that time, the initial detached haze had completely disappeared and the secondary detached haze layer was still descending, having reached 400 km altitude.
The secondary detached haze layer is not well delineated by a layer strongly depleted in aerosols.
The south polar hood increased its latitudinal extent northward to \ang{50}S and became larger than the northern polar hood, which was retreating.
We discuss here an alternative proof method for proving normalization. We focus on a \emph{semantic} proof method using \emph{saturated sets} (see \cite{Luo:PHD90}). This proof method goes back to \cite{Girard1972}, building on earlier ideas by \cite{Tait67}.

The key question is how to prove that, given a lambda-term, its evaluation terminates, i.e.\ normalizes. We concentrate here on a typed operational semantics following \cite{Goguen:TLCA95} and define a reduction strategy that transforms $\lambda$-terms into $\beta$ normal form. This allows us to give a concise presentation of the important issues that arise. We see this benchmark as a good jumping-off point for investigating and mechanizing the meta-theory of dependently typed systems, where a typed operational semantics simplifies the study of the meta-theory. The approach of typed operational semantics is, however, not limited to dependently typed systems; it has been used extensively in studying subtyping, type-preserving compilation, and shape analysis. Hence, we believe it is an important approach to describing reductions.

\section{Simply Typed Lambda Calculus with Type-directed Reduction}
Recall the lambda-calculus together with its reduction rules.
\[
\begin{array}{llcl}
\mbox{Terms} & M,N & \bnfas & x \mid \lambda x{:A}. M \mid M\;N \\
\mbox{Types} & A, B & \bnfas & \base \mid A \arrow B
\end{array}
\]
The main rule for reduction (or evaluation) applies an abstraction to its argument; it is called \emph{$\beta$-reduction}.
(If $\eta$-expansion were added, one would only $\eta$-expand a term when doing so does not immediately create a redex, to avoid infinite alternations between $\eta$-expansion and $\beta$-reduction.)
We use the judgment $\Gamma \vdash M \red N : A$ to mean that both $M$ and $N$ have type $A$ in the context $\Gamma$ and the term $M$ reduces to the term $N$.
% \[
% \begin{array}{lcll}
% %\multicolumn{3}{l}{\mbox{$\beta$-reduction}} \\
% \Gamma \vdash (\lambda x{:}A.M)\;N : B & \red & \Gamma \vdash [N/x]M : B & \mbox{$\beta$-reduction} \\
% \Gamma \vdash M : A \arrow B & \red & \Gamma \vdash \lambda x{:}A.M~x : A \arrow B & \mbox{$\eta$-expansion}
% \end{array}
% \]
%
The $\beta$-reduction rule only applies once we have found a redex. However, we also need congruence rules to allow evaluation of arbitrary subterms.
\[
\begin{array}{c}
\infer[\beta]
  {\Gamma \vdash (\lambda x{:}A.M)~N \red [N/x]M : B }
  % {\Gamma \vdash \lambda x{:}A.M : A \arrow B & \Gamma \vdash N : A}
  {\Gamma, x{:}A \vdash M : B & \Gamma \vdash N : A}
\qquad
%\infer[\eta]{\Gamma \vdash M \red \lambda x{:}A.M~x : A \arrow B}{
% M \not= \lambda y{:}A.M'}
\\[1em]
\infer{\Gamma \vdash M\,N \red M'\,N : B}{\Gamma \vdash M \red M' : A \arrow B & \Gamma \vdash N : A}
\qquad
\infer{\Gamma \vdash M\,N \red M\,N' : B}{\Gamma \vdash M : A \arrow B & \Gamma \vdash N \red N' : A}
\\[1em]
\infer{\Gamma \vdash \lambda x{:}A.M \red \lambda x{:}A.M' : A \arrow B}{\Gamma, x{:}A \vdash M \red M' : B}
\end{array}
\]
Our typed reduction relation is inspired by the type-directed definition of algorithmic equality for $\lambda$-terms (see for example \cite{Crary:ATAPL} or \cite{Harper03tocl}). Keeping track of types in the definition of equality or reduction quickly becomes necessary as soon as we want to add $\eta$-expansion or a unit type where every term of type unit reduces to the unit element. We will consider the extension with the unit type in Section~\ref{sec:unit}.
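As a small concrete example, instantiating the $\beta$ rule with the identity function at base type, and computing the substitution $[y/x]x = y$, yields the one-step typed reduction
\[
\infer[\beta]{y{:}\base \vdash (\lambda x{:}\base.x)~y \red y : \base}
  {y{:}\base, x{:}\base \vdash x : \base & y{:}\base \vdash y : \base}
\]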
On top of the single step reduction relation, we can define multi-step reductions as usual:
\[
\ianc{\Gamma \vdash M \red N : B}{\Gamma \vdash M \mred N : B}{}
\qquad
\ibnc{\Gamma \vdash M \red N : B}{\Gamma \vdash N \mred M' : B}
     {\Gamma \vdash M \mred M' : B}{}
\]
Our definition of multi-step reductions guarantees that we take at least one step. In addition, typed reductions are only defined on well-typed terms, i.e.\ if $M$ steps then $M$ is well-typed.

\begin{lemma}[Basic Properties of Typed Reductions and Typing]\quad
\begin{itemize}
\item If $\Gamma \vdash M \red N : A$ then $\Gamma \vdash M : A$ and $\Gamma \vdash N : A$.
\item If $\Gamma \vdash M : A$ then $A$ is unique.
\end{itemize}
\end{lemma}

The typing and typed reduction judgments satisfy weakening and strengthening\ednote{Should maybe be already formulated using context extensions? -bp. Likewise, the substitution lemma could be formulated with well typed subst. Moreover the weaken/stren of typing follow from the same for reduction and lemma 1.1 -am}.

\begin{lemma}[Weakening and Strengthening of Typed Reductions]\label{lem:redprop}\quad
\begin{itemize}
% \item If $\Gamma, \Gamma' \vdash M : B$ then $\Gamma, x{:}A, \Gamma' \vdash M : B$.
% \item If $\Gamma, x{:}A, \Gamma' \vdash M : B$ and $x \not\in\FV(M)$ then $\Gamma, \Gamma' \vdash M : B$.
\item If $\Gamma, \Gamma' \vdash M \red N : B$ then $\Gamma, x{:}A, \Gamma' \vdash M \red N : B$.
\item If $\Gamma, x{:}A, \Gamma' \vdash M \red N : B$ and $x \not\in \FV(M)$ then $x \not\in \FV(N)$ and $\Gamma, \Gamma' \vdash M \red N : B$.
\end{itemize}
\end{lemma}
\begin{proof}
By induction on the first derivation.
\end{proof}

\begin{lemma}[Substitution Property of Typed Reductions]\label{lem:redsubst}\quad
If $\Gamma, x{:}A \vdash M \red M' : B$ and $\Gamma \vdash N : A$ then $\Gamma \vdash [N/x]M \red [N/x]M' : B$.
\end{lemma}
\begin{proof}
By induction on the first derivation, using standard properties of composition of substitutions.
\end{proof}

We also rely on some standard multi-step reduction properties, which are proven by induction.
\begin{lemma}[Properties of Multi-Step Reductions]\label{lm:mredprop} \quad
\begin{enumerate}
\item\label{lm:mredtrans} If $\Gamma \vdash M_1 \mred M_2 : B$ and $\Gamma \vdash M_2 \mred M_3 : B$ then $\Gamma \vdash M_1 \mred M_3 : B$.
\item\label{lm:mredappl} If $\Gamma \vdash M \mred M' : A \arrow B$ and $\Gamma \vdash N : A$ then $\Gamma \vdash M~N \mred M'~N : B$.
\item\label{lm:mredappr} If $\Gamma \vdash M : A \arrow B$ and $\Gamma \vdash N \mred N' : A$ then $\Gamma \vdash M~N \mred M~N' : B$.
\item\label{lm:mredabs} If $\Gamma,x{:}A \vdash M \mred M' : B$ then $\Gamma \vdash \lambda x{:}A.M \mred \lambda x{:}A.M' : A \arrow B$.
\item\label{lm:mredsubs} If $\Gamma, x{:}A \vdash M : B$ and $\Gamma \vdash N \red N' : A$ then $ \Gamma \vdash [N/x]M \mred [N'/x]M : B$.
\end{enumerate}
\end{lemma}

\subsection*{When is a term in normal form?}
We define here briefly when a term is in $\beta$-normal form.
% The presence of $\eta$ again requires our definition to be type directed.
We define the grammar of normal terms as given below:
\[
\begin{array}{llcl}
\mbox{Normal Terms} & M,N & \bnfas & \lambda x{:A}. M \mid R \\
\mbox{Neutral Terms} & R, P & \bnfas & x \mid R\;M \\
\end{array}
\]
This grammar does not enforce $\eta$-long forms.
% For example, $\lambda x{:}A \arrow A. x$ is not in $\eta$-long form.
% To ensure we only characterize $\eta$-long forms, we must ensure that we allow to switch between normal and neutral types at base type.
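To illustrate the grammar: the term $\lambda x{:}\base \arrow \base.\, \lambda y{:}\base.\, x~y$ is normal, since its body is the neutral term $x~y$, a variable applied to a normal argument. In contrast, $(\lambda x{:}\base.x)~y$ is not normal, because an abstraction applied to an argument is not generated by the neutral grammar; it is precisely a $\beta$-redex.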
% On the other hand, $\lambda x{:}A \arrow A. \lambda y{:}A.x~y$ is in $\beta$-short and $\eta$-long form.
% \[
% \begin{array}{c}
% \multicolumn{1}{l}{\fbox{$\nf {\Gamma \vdash M} A$}~~\mbox{Term $M$ is normal at type $A$}}\\[1em]
% \ianc{\nf {\Gamma, x{:}A \vdash M} B}
%      {\nf {\Gamma \vdash \lambda x{:}A.M} {A \arrow B}}{}
% \quad
% \ianc{\neu {\Gamma \vdash R}{\base}}
%      {\nf {\Gamma \vdash R}{\base}}{}
% \\[1em]
% \multicolumn{1}{l}{\fbox{$\neu {\Gamma \vdash M} A$}~~\mbox{Term $M$ is neutral at type $A$}}\\[1em]
% \ibnc{\neu {\Gamma \vdash R} {A \arrow B}}{\nf {\Gamma \vdash M} A}
%      {\neu {\Gamma \vdash R~M} {B}}{}
% \qquad
% \ianc{x{:}A \in \Gamma}{\neu {\Gamma \vdash x} {A}}{}
% \end{array}
% \]
%
In practice, it often suffices to enforce that we reduce a term to a weak head normal form. For weak head normal forms we simply remove the requirement that all terms applied to a neutral term must be normal.

\subsection*{Proving normalization}
The question then is: how do we know that we can normalize a well-typed lambda-term into its $\beta$ normal form? This is equivalent to asking whether, after some number of reduction steps, we will end up in a normal form where no further reductions are possible. Since normal lambda-terms characterize normal proofs, normalizing a lambda-term corresponds to normalizing a proof and demonstrates that every proof in the natural deduction system indeed has a normal proof.

Proving that reduction must terminate is not a simple syntactic argument based on terms, since the $\beta$-reduction rule may yield a term which is bigger than the term we started with.
% Further, $\eta$-expansion might make the term bigger.
As syntactic arguments are not sufficient to argue that we can always compute a $\beta$ normal form, we hence need to find a different inductive argument. For the simply-typed lambda-calculus, we could prove that while the expression itself does not get smaller, the type of an expression does\footnote{This is the essential idea of hereditary substitutions \cite{Watkins02tr}}. This is a syntactic argument; however, it does not scale to the polymorphic lambda-calculus or to full dependent type theories.

We will here instead discuss a \emph{semantic} proof method where we define the meaning of well-typed terms using the abstract notion of \emph{reducibility candidates}. Throughout this tutorial, we stick to the simply typed lambda-calculus and its extensions. This allows us to give a concise presentation of the important issues that arise. However, the most important benefits of typed operational semantics and our approach are demonstrated in systems with dependent types, where our development of the meta-theory is simpler than with existing techniques. We hence see this benchmark as a good jumping-off point for investigating and mechanizing the meta-theory of dependently typed systems.
% Unlike all the previous proofs which were syntactic and direct based on the structure of the derivation or terms, semantic proofs

\section{Semantic Interpretation}
Working with well-typed terms means we need to be more careful to consider a term within its typing context. In particular, when we define the semantic interpretation of $\inden{\Gamma}{M}{A \arrow B}$ we must consider all extensions of $\Gamma$ (described by $\Gamma' \ext \rho \Gamma$) in which we may use $M$.
\begin{itemize}
\item $\inden{\Gamma}{M}{\base}$ iff $\Gamma \vdash M\hastype \base$ and $M$ is strongly normalizing.
% , i.e. $\Gamma \vdash M \in \SN$.
\item $\inden{\Gamma}{M}{A \arrow B}$ iff for all $\Gamma' \ext{\rho} \Gamma$ and $\Gamma' \vdash N : A$, if $\inden{\Gamma'}{N}{A}$ then $\inden{\Gamma'}{[\rho]M~N}{B}$.
\end{itemize}
% Weakening holds for the semantic interpretations.
% \begin{lemma}[Semantic Weakening]\ref{lm:sweak}
% If $\Gamma \models M : A$ then $\Gamma, x{:}C \models M : A$.
% \end{lemma}
% We sometimes write these definitions more compactly as follows
% \[
% \begin{array}{llcl}
% \mbox{Semantic base type} & \den{o} & := & \SN \\
% \mbox{Semantic function type} & \den{A \arrow B} & := & \{ M | \forall \Gamma' \ext{\rho} \Gamma,~\forall \Gamma' \vdash N : A.~ \Gamma'\ models N : A \ \in \den{A}. M\;N \in \den{B} \}
% \end{array}
% \]

\section{General idea}
We prove in two steps that if a term is well-typed, then it is strongly normalizing:
\begin{description}
\item[Step 1] If $\inden{\Gamma}{M}{A}$ then $\Gamma \vdash M : A$ and $M$ is strongly normalizing.
\item[Step 2] If $\Gamma \vdash M : A$ and $\inden{\Gamma'}{\sigma}{\Gamma}$ then $\inden{\Gamma'}{[\sigma]M}{A}$.
\end{description}
Therefore, choosing $\sigma$ to be the identity substitution, we can conclude that if a term $M$ has type $A$ then $M$ is strongly normalizing, i.e.\ every reduction sequence starting from $M$ is finite.
% \\[1em]
% We remark first, that all variables are in the semantic type $A$ and variables are strongly normalizing, i.e. they are already in normal form.
%
% \begin{lemma}~\\
% \begin{itemize}
% \item If $\Gamma \vdash x : A$ then $\Gamma \models x : A$
% \item If $\Gamma \vdash x : A$ then $(\Gamma \vdash x) \in \SN$.
% \end{itemize}
% \end{lemma}
% These are of course statements we need to prove.
\chapter{Requirements}
\label{app:RE}
This section contains all requirements for the developed application. It is limited to the \textit{description}, \textit{title} and \textit{id} of each requirement in order to save space.

\section{Functional Requirements}
This section describes all functional requirements of the application.

\subsection{Database}

\paragraph{R001 Database} The application stores all required information in a database.

\paragraph{R018 Project Database} There exists a database in which all projects are stored.

\paragraph{R002 Project Database: Restriction} For the Bachelor Thesis, the application only uses a local database.

\paragraph{R057 Delete Flag} Deleted projects are given a deletion flag before they are deleted completely.

\paragraph{R058 Erase Deleted Projects} Projects with a deletion flag can be erased from the database.

\paragraph{R059 Stored Property Parameters} The parameters for each property are stored in the database.

\paragraph{R060 Property Distance Information} The database contains all information needed to calculate the distance between projects.

\paragraph{R061 View Deleted Projects} It is possible to view all projects with a deletion flag.

\paragraph{R063 Recover Projects} It is possible to recover projects with a deletion flag.

\paragraph{R064 Database Size} The application allows the user to view the physical size of the database in \texttt{MB}.

\subsection{Projects}

\paragraph{R004 Projects Overview} The application must have an overview in which all projects are displayed.

\paragraph{R005 Filter Projects} In the projects overview, the user has the possibility to filter the projects by their properties.

\paragraph{R006 Sort Projects} In the projects overview, the user has the possibility to sort the projects by their properties.

\paragraph{R007 Sort Projects: Standard Sort} The standard sort order for the projects is by creation date.

\paragraph{R008 Search Project} The projects overview provides the possibility to search for a project.

\paragraph{R072 Sort Last Edited} It is possible to sort the projects by their last edited date.

\paragraph{R073 Sort Name} It is possible to sort the projects by name.

\subsubsection{Project Creation}

\paragraph{R023 Create Project} It is possible to create a new project in the application.

\paragraph{R024 Create Project: Project Name} To create a project, a new name must be set.

\paragraph{R025 Create Project: Unique ID} All projects have a unique ID.
\paragraph{R026 Create Project: Set Estimation Method} To create a project, an available estimation method must be set.

\paragraph{R027 Create Project: Load Project Influence Factors} After an estimation method is set, the influencing factors are loaded.

\paragraph{R028 Create Project: Set Influencing Factors} All influence factors need to be set to complete the creation process.

\paragraph{R029 Create Project: Project Phase} During the creation process, the current project phase can be set.

\paragraph{R030 Create Project: Project Properties} There has to be the possibility to add existing project properties.

\paragraph{R031 Create Project: Development Market} The development market of the project is selectable.

\paragraph{R032 Create Project: Development Kind} The development kind of the project is selectable.

\paragraph{R033 Create Project: Process Methodology} The process methodology of the project is selectable.

\paragraph{R034 Create Project: Programming Language} The programming language of the project is selectable.

\paragraph{R035 Create Project: Platform} The platform of the project is selectable.

\paragraph{R036 Create Project: Industry Sector} The industry sector of the project is selectable.

\paragraph{R037 Create Project: Architecture} The architecture of the project is selectable.

\paragraph{R038 Create Project: Project Icon} The project icon of the project is selectable.

\paragraph{R039 Create Project: Guided Creation Process} It is possible to start a guided creation process which guides the user through each step.

\paragraph{R040 Create Project: List Creation} It is possible to create a new project with a list creation which contains all properties needed for the creation.

\paragraph{R041 Create Project: Long Project Name} Project names have to be limited to 40 characters.

\paragraph{R042 Create Project: Preselected Properties} One value must be preselected for all properties.

\paragraph{R043 Create Project: Description} It should be possible to add a project description.

\paragraph{R044 Create Project: Property Sorting} All properties are sorted alphabetically in descending order for the selection.

\paragraph{R045 Create Project: Suggest Estimation Method} There should be the possibility to calculate the best fitting estimation method for the project.

\paragraph{R046 Create Project: Suggest Estimation Method - Sorting} The suggested methods are sorted by best fit.

\paragraph{R071 Guided Creation - Progress} The progress of the guided creation is visible.

\paragraph{R072 Cancel Project Creation} If the user cancels the creation, he will be warned that unsaved data will be lost.

\subsubsection{Project Properties}

\paragraph{R010 Project Properties} Each project has properties.

\paragraph{R011 Project Properties View} It is possible to get a view of all properties of a project.

\paragraph{R012 Saving Project Properties} All properties are saved in the database.

\paragraph{R050 Property: Development Market} All projects contain the property for development market.

\paragraph{R051 Property: Development Kind} All projects contain the property for development kind.

\paragraph{R052 Property: Process Methodology} All projects contain the property for process methodology.

\paragraph{R053 Property: Programming Language} All projects contain the property for programming language.

\paragraph{R054 Property: Platform} All projects contain the property for platform.

\paragraph{R055 Property: Industry Sector} All projects contain the property for industry sector.
\paragraph{R056 Property: Software Architecture} All projects contain the property for software architecture.

\paragraph{R071 Automatic Save} Changes to project properties are saved automatically.

\subsubsection{Project Estimation}

\paragraph{R014 Estimation} An estimation of projects is possible with all implemented estimation techniques.

\paragraph{R076 Function Point Estimation} An estimation with the function point technique is possible.

\paragraph{R077 Function Point Estimation - Calculation Total Points} The total points of the estimation will be calculated automatically.

\paragraph{R078 Function Point Estimation - Calculation Evaluated Points} The evaluated points of the estimation will be calculated automatically with the selected influence factor.

\paragraph{R079 Function Point Estimation - Calculation Man Days} The man days will be calculated automatically for the estimation.

\paragraph{R080 Man Days Calculation} The man days will be calculated with a basis factor if no terminated project is available.

\paragraph{R081 Adapted Man Days Calculation} If terminated projects exist, the man days will be calculated with an average points-per-day value.

\paragraph{R082 Change Influence Factor Set} It is possible to change the set of influence factors of a project.

\paragraph{R083 Terminate Project} Each estimation must allow a termination of the project.

\paragraph{R084 Editing After Termination} A terminated project cannot be edited.

\subsection{Influence Factors}

\paragraph{R085 Create New Influence Factor} The application allows the creation of a new influence factor set.

\paragraph{R086 Influence Factor Name} Each influence factor must contain a name.

\paragraph{R087 Influence Factor Empty Name} The name for an influence factor must not be an empty string.

\paragraph{R088 Influence Factor ID} Each influence factor set must have a unique ID.

\paragraph{R089 Influence Factor Selection} All influence factors are sorted by estimation technique.

\paragraph{R090 Delete Influence Factor} A created influence factor set can be deleted.

\paragraph{R091 Edit Influence Factor} A created influence factor set can be edited.

\paragraph{R092 Influence Factor Sum} The sum of the factors can be shown.

\subsection{Related Projects}

\paragraph{R009 Find Related Projects} It must be possible to find related projects.

\paragraph{R010 Load Components From Other Projects} It is possible to copy components from other projects into the current project.

\paragraph{R013 Project Properties Distance} The distance between two projects is calculated from all project properties.

\paragraph{R065 Project Relevance} The relevance of a project to the selected project is given as a percentage.

\paragraph{R066 Related Projects Sorting} All related projects are sorted by their relevance in descending order.

\paragraph{R067 Related Project Property} It is possible to view the properties of a related project.

\paragraph{R068 Relation Border} Only projects with a relevance of more than 50\% are shown.

\paragraph{R069 Show All Related Projects} The application provides the option to show the relation to all projects.

\paragraph{R070 Properties of Related Projects} It is possible to show the properties of a related project.

\subsection{Export}

\paragraph{R015 Export Project} The application allows the export of projects to external files.

\paragraph{R074 Export XLS} An export to the XLS format is possible.

\paragraph{R075 Open Exported XLS File} The exported XLS file will be opened once the export is finished.
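The man-day calculation described in R079--R081 can be summarized by the following minimal sketch. All identifiers and the concrete basis factor are hypothetical and serve only to illustrate the two cases of R080 and R081; they are not taken from the actual implementation.

\begin{verbatim}
// Hypothetical sketch of the man-day calculation (R079--R081).
public class ManDayCalculator {

    // Assumed basis factor used when no terminated project exists (R080).
    private static final double BASIS_POINTS_PER_DAY = 2.0;

    // Average points-per-day over terminated projects, or the basis
    // factor if no terminated project is available (R080/R081).
    public static double pointsPerDay(double[] terminatedPoints,
                                      double[] terminatedDays) {
        if (terminatedPoints.length == 0) {
            return BASIS_POINTS_PER_DAY;
        }
        double points = 0.0;
        double days = 0.0;
        for (int i = 0; i < terminatedPoints.length; i++) {
            points += terminatedPoints[i];
            days += terminatedDays[i];
        }
        return points / days;
    }

    // Man days for an estimation with the given evaluated points (R079).
    public static double manDays(double evaluatedPoints,
                                 double[] terminatedPoints,
                                 double[] terminatedDays) {
        return evaluatedPoints / pointsPerDay(terminatedPoints, terminatedDays);
    }
}
\end{verbatim}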
\subsection{Help}

\paragraph{R016 Help Method} The application contains help articles that provide information to the user.

\paragraph{R093 Search Articles} It is possible to search the help articles.

\paragraph{R094 Selected Articles} Only a selection of help articles is visible on the main help screen.

\paragraph{R095 Show All Articles} It is possible to show all available articles on the screen.

\paragraph{R096 Open Articles} A help article can be opened.

\paragraph{R097 Shown Articles} The help articles are shown only with their titles.

\paragraph{R098 Feedback} The help screen allows sending feedback to the producer of the application.

\subsection{User Information}

\paragraph{R017 Saving User Information} The application stores all necessary user information in a database.

\paragraph{R099 Set User Name} It is possible to set the user name.

\paragraph{R100 Password} It is possible to set the password.

\paragraph{R101 User Name Length} The user name length is limited to 30 characters.

\paragraph{R102 Password Length} The password length is limited to 30 characters.

\paragraph{R103 Check Login} It is possible to verify the login information.

\paragraph{R104 User Information} All user information can be displayed.

\subsection{Feedback}

\paragraph{R019 Send Support Request} The application provides the possibility to send a support request to the producer of the application.

\subsection{Synchronization}

\paragraph{R105 Sync Frequency} It is possible to set the frequency of the automatic synchronization.

\paragraph{R106 Automatic Sync} The application allows switching between automatic and manual synchronization.

\paragraph{R107 Connection} The application provides a selection of the connection types with which an automatic synchronization is possible.

\subsection{Miscellaneous}

\paragraph{R108 App Information} The app provides a screen with all app information.

\paragraph{R109 Version Number} The app information screen shows the current version number.

\paragraph{R110 License} All license information can be viewed inside the application.

\paragraph{R111 Crash Reports} All crash reports can be sent to the server.

\paragraph{R112 Automatically Send Crash Reports} It is possible to send crash reports automatically.

\section{Non-Functional Requirements}
This section describes all non-functional requirements of the application.

\paragraph{R003 Project Database: Normalization} The database is normalized to the third normal form.

\paragraph{R009 Design Guidelines} The application must fulfill the design guidelines from Google (Material Design).

\paragraph{R010 Save Changes Automatically} All changes in the application need to be saved automatically.

\paragraph{R046 Crash Safety} The application does not crash if data is not found.

\paragraph{R047 Error Information} If an error occurs, the application provides the user with a helpful error message.

\paragraph{R113 View Orientation Data Loss} No data must be lost when the phone orientation is changed.

\paragraph{R114 Main Language} The main language of the application is English.

\paragraph{R115 Code Annotation} All annotations in the source code are written in English.

\paragraph{R116 Method Annotations} Each source code method and class has an annotation that describes its use.
{ "alphanum_fraction": 0.8033508338, "avg_line_length": 48.328358209, "ext": "tex", "hexsha": "b6a04ad8595c5504671540f07f1afec0dace1c97", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0efa2d6b5baf65b8afade400ce84518d1602bd6f", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Freezor/MobileEstimate", "max_forks_repo_path": "Bachelor Thesis OLF/chapters/AppendixRequirements.tex", "max_issues_count": 28, "max_issues_repo_head_hexsha": "0efa2d6b5baf65b8afade400ce84518d1602bd6f", "max_issues_repo_issues_event_max_datetime": "2019-06-12T14:36:09.000Z", "max_issues_repo_issues_event_min_datetime": "2019-06-12T11:02:43.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Freezor/MobileEstimate", "max_issues_repo_path": "Bachelor Thesis OLF/chapters/AppendixRequirements.tex", "max_line_length": 175, "max_stars_count": null, "max_stars_repo_head_hexsha": "0efa2d6b5baf65b8afade400ce84518d1602bd6f", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Freezor/MobileEstimate", "max_stars_repo_path": "Bachelor Thesis OLF/chapters/AppendixRequirements.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2583, "size": 12952 }
%%% Chapter 1: Java Program Design and Development %%% 3rd Edition %%%REW First draft of chapter 1 - A pdf version sent to Betsy and RAM 12/18/03 %%%RAM Revised draft of chapter 1. Extensive revisions were required in %%%RAM reaction to Betsy's edits, were received in early January. %%%RAM Finished 1/31/03 \setcounter{chapter}{0} \setcounter{SSTUDYcount}{1} \chapter{Java Program Design and Development} \label{ch:intro2} \CObegin \secCOBH{Objectives} \noindent After studying this chapter, you will \begin{COBL} \item Know the basic steps involved in program development. \item Understand some of the basic elements of the Java language. \item Know how to use simple output operations in a Java program. \item Be able to distinguish between different types of errors in a \\program. \item Understand how a Java program is translated into machine language. \item Understand the difference between a Java console application and a Java \\Swing application. \item Know how to edit, compile, and run Java programs. \end{COBL} \secCOLH{Outline} \begin{COL} \item Introduction \item Designing Good Programs \item Designing a Riddle Program \item[] {{\color{cyan}Special Topic:} Grace Hopper and the First Computer Bug} \item Java Language Elements \item Editing, Compiling, and Running a Java Program \item From the Java Library: System and PrintStream \item From the Java Library: {System} and {PrintStream} \par\small\item[] Chapter Summary \par\small\item[] Solutions to Self-Study Exercises \par\small\item[] Exercises \end{COL} \COend \section{Introduction} \noindent This chapter introduces some of the basic concepts and techniques involved in Java program design and development. We begin by identifying the main steps in designing an object-oriented program. The steps are illustrated by designing a program that ``asks'' and ``answers'' riddles. As an example of a riddle, consider the question ``What is black and white and read all over?'' The answer, of course, is a newspaper. Following the design phase, we then focus on the steps involved in coding a Java program, including the process of editing, compiling, and running a program. Because Java programs can be text based applications or window based graphical applications, we describe how the coding process differs for these two varieties. Next we begin to familiarize ourselves with Java's extensive class library by studying its {\tt PrintStream} and {\tt System} classes. These classes contain objects and methods that enable us to print output from a program. By the end of the chapter you will be able to design and write a Java application that ``sings'' your favorite song. \section{Designing Good Programs} \noindent Programming is not simply a question of typing Java code. Rather, it involves a considerable amount of planning and careful designing. Badly designed programs rarely work correctly. Even though it is tempting for novice programmers to start entering code almost immediately, one of the first rules of programming is \JavaTIP{PROGRAMMING TIP}{}% {The sooner you begin to type code, the longer the program will take to finish, because careful design of the program must precede coding. This is particularly true of object-oriented programs.} \noindent In other words, the more thought and care you put into designing a program, the more likely you are to end up with one that works correctly. The following subsections provide a brief overview of the program development process. 
\subsection{The Software Engineering Life Cycle} \noindent Software engineering is the process of designing and writing software. The {\it software life cycle} refers to the different phases involved in the design and development of a computer program. Our presentation of examples in the book will focus on four phases of the overall life cycle. In the {\em specification} phase we provide a statement of the problem and a detailed description of what the program will do. In the {\em design} phase we describe the details of the various classes, methods, and data that will be used in the program. The {\em implementation} phase refers to the actual coding of the program into Java. In the {\em testing} phase we test the program's performance to make sure it is correct, recoding it or redesigning it as necessary. Figure~\ref{fig:progdev} gives a more detailed overview of the program development process, focusing most of the attention on the design phase of the software life cycle. It shows that designing an object-oriented program is a matter of asking the right questions about the classes, data, and methods that make up the program. Overall, the program development process can be viewed as one that repeatedly applies the divide-and-conquer principle. That is, most programming problems can be repeatedly divided until you have a collection of relatively easy-to-solve subproblems, each of which can be handled by an object. In this way the program is \marginnote{Divide and conquer} divided into a collection of interacting objects. For each object we design a class. During class design, each object is divided further into its variables and methods. \begin{figure}[h] %\begin{graphic} %\figa{CHPTR01:1f7.eps} \figa{chptr01/progdev.eps}% {An overview of the program development process.} {fig:progdev} %\end{graphic} \end{figure} %\epsfig{file=ch1-java/figures/progdev.eps} When should we stop subdividing? How much of a task should be assigned to a single object or a single method? The answers to these and similar questions are not easy. Good answers require the kind of judgment that comes through experience, and frequently there is more than one good way to design a solution. Here again, as we learn more about object-oriented programming, we'll learn more about how to make these design decisions. \section{Designing a Riddle Program} The first step in the program-development process is making sure you understand the problem (Fig. \ref{fig:progdev}). Thus, we begin by developing a detailed specification, which should address three basic questions: \begin{BL} \item What exactly is the problem to be solved? \item How will the program be used? \item How should the program behave? \end{BL} \noindent In the real world, the problem specification is often arrived at through an extensive discussion between the customer and the developer. In an introductory programming course, the specification is usually assigned by the instructor. To help make these ideas a little clearer, let's design an object-oriented solution to a simple problem. \BOXDT{{\bf Problem Specification.} Design a class that will represent a riddle with a given question and answer. The definition of this class should make it possible to store different riddles and to retrieve a riddle's question and answer independently. } \subsection{Problem Decomposition} \noindent Most problems are too big and too complex to be tackled all at once. 
So the next step in the design process is to divide the \marginnote{Divide and conquer} problem into parts that make the solution more manageable. In the object-oriented approach, a problem is divided into objects, where each object will handle one specific aspect of the program's overall job. In effect, each object will become an expert or specialist in some aspect of the program's overall behavior. Note that there is some ambiguity here about how far we should go in decomposing a given program. This ambiguity is part of the design process. How much we should decompose the program before its parts become ``simple to solve'' depends on the problem we're trying to solve and on the problem solver. One useful design guideline for trying to decide what objects are needed is the following: \JavaTIP{EFFECTIVE DESIGN}{Looking for Nouns.}{Choosing a program's objects is often a matter of looking for nouns in the problem specification.} \noindent Again, there's some ambiguity involved in this guideline. For example, the key noun in our current problem is {\it riddle}, so our solution will involve an object that serves as a model for a riddle. The main task of this Java object will be simply to represent a riddle. Two other nouns in the specification are {\it question} and {\it answer}. Fortunately, Java has built-in {\tt String} objects that represent strings of characters such as words or sentences. We can use two {\tt String} objects for the riddle's question and answer. Thus, for this simple problem, we need only design one new type of object---a riddle---whose primary role will be to represent a riddle's question and answer. Don't worry too much if our design decisions seem somewhat mysterious at this stage. A good understanding of object-oriented design can come only after much design experience, but this is a good place to start. \subsection{Object Design} \noindent Once we have divided a problem into a set of cooperating objects, designing a Java program is primarily a matter of designing and creating the objects themselves. In our example, this means we must now design the features of our riddle object. For each object, we must answer the following basic design questions: \begin{BL} \item What role will the object perform in the program? \vspace{2pt}\item What data or information will it need? \vspace{2pt}\item What actions will it take? \vspace{2pt}\item What interface will it present to other objects? \vspace{2pt}\item What information will it hide from other objects? \end{BL} For our riddle object, the answers to these questions are shown in Figure~\ref{fig:specs}. Note that although we talk about ``designing an object,'' we are really talking about designing the object's class. A class defines the collection of objects that belong to it. The class can be considered the object's {\em type}. This is the same as for real-world objects. Thus, Seabiscuit is a horse---that is, Seabiscuit is an object of type horse. Similarly, an individual riddle, such as the newspaper riddle, is a riddle. That is, it is an object of type Riddle. The following discussion shows how we arrived at the decisions for the design specifications for the {\tt Riddle} class, illustrated in Figure~\ref{fig:specs}. 
%%\begin{figure}[tb] \begin{figure}[h] \figaproga{100pt}{ \begin{BL}\rm \item Class Name: Riddle \item Role: To store and retrieve a question and answer \item Attributes (Information) \begin{BSE} \item question: A variable to store a riddle's question (private) \item answer: A variable to store a riddle's answer (private) \end{BSE} \item Behaviors \begin{BSE} \item Riddle(): A method to set a riddle's question and answer \item getQuestion(): A method to return a riddle's question \item getAnswer(): A method to return a riddle's answer \end{BSE} \end{BL} }\figaprogb{Design specification for the {\tt Riddle} class.} %%}\figaprogbleft{Design specification for the {\tt Riddle} class. {fig:specs} \end{figure} The role of the {\tt Riddle} object is to model an ordinary \marginnote{What is the object's role?} riddle. Because a riddle is defined in terms of its question and answer, our {\tt Riddle} object will need some way to store these two pieces of information. As we learned in Chapter~\ref{chapter-intro}, an instance variable is a named memory location that belongs to an object. The fact that the memory location is named, makes it easy to retrieve the data stored there by invoking the variable's name. For example, to print a riddle's question we would say something like ``print question,'' and whatever is stored in {\em question} would be retrieved and printed. In general, instance variables are used to store the information that an object needs to perform its role. \marginnote{What information will the object need?} They correspond to what we have been calling the object's attributes. Deciding on these variables provides the answer to the question, ``What information does the object need?'' Next we decide what actions a {\tt Riddle} object will take. A useful design guideline for actions of objects is the following: \JavaTIP{EFFECTIVE DESIGN}{Looking for Verbs.}{Choosing the behavior of an object is often a matter of looking for verbs in the problem specification.} \marginnote{What actions will the object take?} \noindent For this problem, the key verbs are {\it set} and {\it retrieve}. As specified in Figure~\ref{fig:specs}, each {\tt Riddle} object should provide some means of setting the values of its question and answer variables and a means of retrieving each value separately. Each of the actions we have identified will be encapsulated in a Java method. As you recall from Chapter~\ref{chapter-intro}, a method is a named section of code that can be {\em invoked}, or called upon, to perform a particular action. In the object-oriented approach, calling a method (method invocation) is the means by which interaction occurs among objects. Calling a method is like sending a message between objects. For example, when we want to get a riddle's answer, we would invoke the {\tt getAnswer()} method. This is like sending the message ``Give me your answer.'' One special method, known as a constructor, is invoked when an object is first created. We will use the {\tt Riddle()} constructor to give specific values to riddle's question and answer variables. In designing an object, we must decide which methods should be made \marginnote{What interface will it present, and what information will it hide?} available to other objects. This determines what interface the object should present and what information it should hide from other objects. In general, those methods that will be used to communicate with an object are designated as part of the object's interface. 
Except for its interface, all other information maintained by each riddle should be kept ``hidden'' from other objects. For example, it is not necessary for other objects to know where a riddle object stores its question and answer. The fact that they are stored in variables named {\tt question} and {\tt answer}, rather than variables named {\tt ques} and {\tt ans}, is irrelevant to other objects. \JavaTIP{EFFECTIVE DESIGN}{Object Interface.}{An object's interface should consist of just those methods needed to communicate with or to use the object.} \JavaTIP{EFFECTIVE DESIGN}{Information Hiding.}{An object should hide most of the details of its implementation.} \pagebreak Taken together, these various design decisions lead to the \marginfig{chptr01/riddleuml.eps}% {A UML class diagram representing the {\tt Riddle} class.} {fig:ruml} specification shown in Figure~\ref{fig:ruml}. As our discussion has illustrated, we arrived at the decisions by asking and answering the right questions. In most classes the attributes (variables) are private. This is represented by a minus sign ($-$). In this example, the operations (methods) are public, which is represented by the plus sign ($+$). The figure shows that the {\tt Riddle} class has two hidden (or private) variables for storing data and three visible (or public) methods that represent the operations that it can perform. \subsection{Data, Methods, and Algorithms} \noindent Among the details that must be worked out in designing a riddle object is deciding what type of data, methods, and algorithms we need. There are two basic questions involved: \begin{BL} \item What type of data will be used to represent the information needed by the riddle? \item How will each method carry out its task? \end{BL} \noindent Like other programming languages, Java supports a wide range of different types of data, some simple and some complex. \marginnote{What type of data will be used?} Obviously a riddle's question and answer should be represented by text. As we noted earlier, Java has a {\tt String} type, which is designed to store text, which can be considered a string of characters. In designing a method, you have to decide what the method will do. \marginnote{How will each method carry out its task?} In order to carry out its task, a method will need certain information, which it may store in variables. Plus, it will have to carry out a sequence of individual actions to perform the task. This is called its {\bf algorithm}, which is a step-by-step description of the solution to a problem. And, finally, you must decide what result the method will produce. Thus, as in designing objects, it is important to ask the right questions: \begin{BL} \item What specific task will the method perform? \item What information will it need to perform its task? \item What algorithm will the method use? \item What result will the method produce? \end{BL} \noindent Methods can be thought of as using an algorithm to complete a required action. The algorithm required for the {\tt Riddle()} constructor is very simple but also typical of constructors for many classes. It takes two strings and assigns the first to the {\tt question} instance variable and then assigns the second to the {\tt answer} instance variable. The algorithms for the other two methods for the Riddle class are even simpler. They are referred to as {\it get} methods that merely {\it return} or produce the value that is currently stored in an instance variable. 
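To make the roles of the constructor and the {\it get} methods more concrete, the following sketch shows how a {\tt Riddle} object might be created and then asked for its question and answer. The class name {\tt RiddleUser} is invented for this illustration, and the object-creation syntax used here is explained later in this chapter and the next, so treat the sketch only as a preview of how the interface is meant to be used.

\begin{jjjlisting}
\begin{lstlisting}
public class RiddleUser
{
  public static void main(String args[])
  {
    // Pass the question and the answer to the Riddle() constructor.
    Riddle riddle = new Riddle(
        "What is black and white and read all over?",
        "A newspaper.");
    // The get methods retrieve the values stored in the
    // riddle's instance variables.
    System.out.println(riddle.getQuestion());
    System.out.println(riddle.getAnswer());
  } // main()
} // RiddleUser
\end{lstlisting}
\end{jjjlisting}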
Not all methods are so simple to design, and not all algorithms are so \marginnote{Algorithm design} simple. Even when programming a simple arithmetic problem, the steps involved in the algorithm will not always be as obvious as they are when doing the calculation by hand. For example, suppose the problem were to calculate the sum of a list of numbers. If we were telling our classmate how to do this problem, we might just say, ``add up all the numbers and report their total.'' But this description is far too vague to be used in a program. By contrast, here's an algorithm that a program could use: \begin{NL} \item Set the initial value of the sum to 0. \item If there are no more numbers to total, go to step 5. \item Add the next number to the sum. \item Go to step 2. \item Report the sum. \end{NL} \noindent Note that each step in this algorithm is simple and easy to follow. It would be relatively easy to translate it into Java. Because English is somewhat imprecise as an algorithmic language, programmers frequently write algorithms in the programming \marginnote{Pseudocode} language itself or in {\bf pseudocode}, a hybrid language that combines English and programming language structures without being too fussy about programming language syntax. For example, the preceding algorithm might be expressed in pseudocode as follows: \begin{jjjlisting} \begin{lstlisting} sum = 0 while (more numbers remain) add next number to sum print the sum \end{lstlisting} \end{jjjlisting} Of course, it is unlikely that an experienced programmer would take the trouble to write out pseudocode for such a simple algorithm. But many programming problems are quite complex and require careful design to minimize the number of errors that the program contains. In such situations, pseudocode could be useful. Another important part of designing an algorithm is to {\it trace} it---that is, to step through it line by line---on some sample data. For example, we might test the list-summing algorithm by tracing it on the list of numbers shown in the margin. %\begin{table}[h] %\begin{center} \marginpar{ \UNTB \begin{tabular}{rl} \multicolumn{2}{l}{ \color{cyan} \rule{8pc}{1pt}}\\[2pt] %%RAM\UNTBCH{Sum} &\UNTBCH{List of Numbers} {Sum} & {List of Numbers} \\[-4pt]\multicolumn{2}{l}{ \color{cyan} \rule{8pc}{0.5pt}}\\[2pt] 0 &54 30 20\cr 54 &30 20\cr 84 &20\cr 104 &- \\[-4pt]\multicolumn{2}{l}{ \color{cyan} \rule{8pc}{1pt}} \end{tabular} \endUNTB } %\end{center} %\end{table} Initially, the sum starts out at 0 and the list of numbers contains 54, 30, and 20. On each iteration through the algorithm, the sum increases by the amount of the next number, and the list diminishes in size. The algorithm stops with the correct total left under the sum column. While this trace didn't turn up any errors, it is frequently possible to find flaws in an algorithm by tracing it in this way. \subsection{Coding into Java} \noindent Once a sufficiently detailed design has been developed, it is time to start generating Java code. The wrong way to do this would be to type the entire program and then compile and run it. This generally leads to dozens of errors that can be both demoralizing and difficult to fix. The right way to code is to use the principle of {\bf stepwise refinement}. \marginnote{Stepwise refinement} The program is coded in small stages, and after each stage the code is compiled and tested. For example, you could write the code for a single method and test that method before moving on to another part of the program. 
In this way, small errors are caught before moving on to the next stage. The code for the {\tt Riddle} class is shown in Figure~\ref{fig:riddleclass}. Even though we have not yet begun learning the details of the Java language, you can easily pick out the key parts in this program: the instance variables {\tt question} and {\tt answer} of type {\tt String}, which are used to store the riddle's data; the {\tt Riddle()} constructor and the {\tt getQuestion()} and {\tt getAnswer()} methods make up the interface. The specific language details needed to understand each of these elements will be covered in this and the following chapter.

%% proglist ch1/riddle/Riddle.java
\begin{figure}[tb]
\jjjprogstart
\begin{jjjlisting}
\begin{lstlisting}
/*
 * File: Riddle.java
 * Author: Java, Java, Java
 * Description: Defines a simple riddle.
 */
public class Riddle extends Object     // Class header
{                                      // Begin class body
  private String question;             // Instance variables
  private String answer;
  public Riddle(String q, String a)    // Constructor method
  {
    question = q;
    answer = a;
  } // Riddle()
  public String getQuestion()          // Instance method
  {
    return question;
  } // getQuestion()
  public String getAnswer()            // Instance method
  {
    return answer;
  } // getAnswer()
} // Riddle class                      // End class body
\end{lstlisting}
\end{jjjlisting}
\jjjprogstop{The {\tt Riddle} class definition.}
{fig:riddleclass}
\end{figure}

\subsection{Syntax and Semantics}

\noindent Writing Java code requires that you know its syntax and semantics. A language's {\bf syntax} is the set of rules \marginnote{Syntax} that determines whether a particular statement is correctly formulated. As an example of a syntax rule, consider the following two English statements:

\begin{jjjlisting}
\begin{lstlisting}
The rain in Spain falls mainly on the plain.  // Valid
Spain rain the mainly in on the falls plain.  // Invalid
\end{lstlisting}
\end{jjjlisting}

\noindent The first sentence follows the rules of English syntax (grammar), and it means that it rains a lot on the Spanish plain. The second sentence does not follow English syntax, and, as a result, it is rendered meaningless. An example of a Java syntax rule is that a Java statement must end with a semicolon. However, unlike in English, where one can still be understood even when one breaks a syntax rule, in a programming language the syntax rules are very strict. If you break even the slightest syntax rule---for example, if you forget just a single semicolon---the program won't work at all.

Similarly, the programmer must know the \marginnote{Semantics} {\bf semantics} of the language---that is, the meaning of each statement. In a programming language, a statement's meaning is determined by what effect it will have on the program. For example, to set the {\tt sum} to 0 in the preceding algorithm, an assignment statement is used to store the value 0 into the memory location named {\tt sum}. Thus, we say that the statement

\begin{jjjlisting}
\begin{lstlisting}
sum = 0;
\end{lstlisting}
\end{jjjlisting}

\noindent assigns 0 to the memory location {\tt sum}, where it will be stored until some other part of the program needs it.

Learning Java's syntax and semantics is a major part of learning to program. This aspect of learning to program is a lot like learning a foreign language. The more quickly you become fluent in the new language (Java), the better you will be at expressing solutions to interesting programming problems.
The longer you struggle with Java's rules and conventions, the more difficult it will be to talk about problems in a common language. Also, computers are a lot fussier about correct language than humans, and even the smallest syntax or semantic error can cause tremendous frustration. So, try to be very precise in learning Java's syntax and semantics. \subsection{Testing, Debugging, and Revising} \noindent Coding, testing, and revising a program is an repetitive process, one that may require you to repeat the different program-development stages shown in (Fig.~\ref{fig:progdev}). According to the stepwise-refinement principle, the process of developing a program should proceed in small, incremental steps, where the solution becomes more refined at each step. However, no matter how much care you take, things can still go wrong during the coding process. A {\it syntax error} is an error that breaks one of Java's syntax rules. Such errors will be detected by the Java compiler. Syntax errors \marginnote{Syntax errors} are relatively easy to fix once you understand the error messages provided by the compiler. As long as a program contains syntax errors, the programmer must correct them and recompile the program. Once all the syntax errors are corrected, the compiler will produce an executable version of the program, which can then be run. When a program is run, the computer carries out the steps specified in the program and produces results. However, just because a program runs does not mean that its actions and results are correct. A running program can contain {\it semantic errors}, also called \marginnote{Semantic errors} {\it logic errors}. A semantic error is caused by an error in the logical design of the program causing it to behave incorrectly, producing incorrect results. Unlike syntax errors, semantic errors cannot be detected automatically. For example, suppose that a program contains the following statement for calculating the area of a rectangle: \begin{jjjlisting} \begin{lstlisting} return length + width; \end{lstlisting} \end{jjjlisting} \noindent Because we are adding length and width instead of multiplying them, the area calculation will be incorrect. Because there is nothing syntactically wrong with the expression {\tt length + width}, the compiler won't detect an error in this statement. Thus, the computer will still execute this statement and compute the incorrect area. Semantic errors can only be discovered by testing the program and they are sometimes very hard to detect. Just because a program appears to run correctly on one test doesn't guarantee that it contains no semantic errors. It might just mean that it has not been adequately tested. Fixing semantic errors is known as {\it debugging} a program, and when subtle errors occur it can be the most frustrating part of the whole program development process. The various examples presented will occasionally provide hints and suggestions on how to track down {\it bugs}, or errors, in your code. One point to remember when you are trying to find a very subtle bug is that no matter how convinced you are that your code is correct and that the bug must be caused by some kind of error in the computer, the error is almost certainly caused by your code! \subsection{Writing Readable Programs} \noindent Becoming a proficient programmer goes beyond simply writing a program that produces correct output. 
It also involves \marginnote{Programming style} developing good {\it programming style}, which includes how readable and understandable your code is. Our goal is to help you develop a programming style that satisfies the following principles: \begin{BL} \item {\bf Readability.} Programs should be easy to read and understand. Comments should be used to document and explain the program's code. \item {\bf Clarity.} Programs should employ well-known constructs and standard conventions and should avoid programming tricks and unnecessarily obscure or complex code. \item {\bf Flexibility.} Programs should be designed and written so that they are easy to modify. \end{BL} \section*{{\color{cyan}Special Topic:} Grace Hopper and \\ \hspace*{20pt}the First Computer Bug} {\color{cyan}Rear Admiral} Grace Murray Hopper (1906--1992) was a pioneer computer programmer and one of the original developers of the COBOL programming language, which stands for {\it CO}mmon {\it B}usiness-{\it O}riented {\it L}anguage. Among her many achievements and distinctions, Admiral Hopper also had a role in coining the term {\it computer bug}. In August 1945, she and a group of other programmers were working on the Mark I, an electro-mechanical computer developed at Harvard that was one of the ancestors of today's electronic computers. After several hours of trying to figure out why the machine was malfunctioning, someone located and removed a two-inch moth from one of the computer's circuits. From then on whenever anything went wrong with a computer, Admiral Hopper and others would say ``it had bugs in it.'' The first bug itself is still taped to Admiral Hopper's 1945 log book, which is now in the collection of the Naval Surface Weapons Center. In 1991, Admiral Hopper was awarded the National Medal of Technology by President George Bush. To commemorate and honor Admiral Hopper's many contributions, the U.S.~Navy recently named a warship after her. For more information on Admiral Hopper, see the Web site at \WWWleft \begin{jjjlisting} \begin{lstlisting}[commentstyle=\color{black}\small] http://www.chips.navy.mil/ \end{lstlisting} \end{jjjlisting} \section{Java Language Elements} \noindent In this section we will introduce some of the key elements of the Java language by describing the details of a small program. We will look at how a program is organized and what the various parts do. Our intent is to introduce important language elements, many of which will be explained in greater detail in later sections. The program we will study is a Java version of the traditional HelloWorld program---''traditional'' because practically every introductory programming text begins with it. When it is run, the HelloWorld program (Fig.~\ref{fig:helloworld}) just displays the greeting ``Hello, World!'' on the console. %% proglist ch1/helloapplication/HelloWorld.java \begin{figure}[hb] \jjjprogstart \begin{jjjlisting} \begin{lstlisting}[numberstyle=\small,numbers=left] :code:`/*` * File: HelloWorld.java * Author: Java Java Java * Description: Prints Hello, World! greeting. 
 */
public class HelloWorld extends Object  // Class header
{                                       // Start class body
  private String greeting = "Hello, World!";
  public void greet()                   // Method definition
  {                                     // Start method body
    System.out.println(greeting);       // Output statement
  } // greet()                          // End method body
  public static void main(String args[])// Method header
  {
    HelloWorld helloworld;              // declare
    helloworld = new HelloWorld();      // create
    helloworld.greet();                 // Method call
  } // main()
} // HelloWorld                         // End class body
\end{lstlisting}
\end{jjjlisting}
\jjjprogstop{The {\tt HelloWorld} application program.}
{fig:helloworld}
\end{figure}

\subsection{Comments}

\noindent The first thing to notice about the {\tt HelloWorld} program is the use of comments. A {\bf comment} is a non-executable portion of a program that is used to document the program. Because comments are not executable instructions, they are just ignored by the compiler. Their sole purpose is to make the program easier for the programmer to read and understand.

The {\tt HelloWorld} program contains examples of two types of Java comments. Any text contained within {\tt /*} and {\tt */} is considered a comment. As you can see in {\tt HelloWorld}, this kind of comment can extend over several lines and is sometimes called a {\em multiline} comment. A second type of comment is any text that follows double slashes (//) on a line. This is known as a {\it single-line comment} because it cannot extend beyond a single line.

When the compiler encounters the beginning marker ({\tt /*}) of a multiline comment, it skips over everything until it finds a matching end marker ({\tt */}). One implication of this is that it is not possible to put one multiline comment inside of another. That is, one comment cannot be {\it nested}, or contained, within another comment. The following code segment illustrates the rules that govern the use of {\tt /*} and {\tt */}:

\begin{jjjlisting}
\begin{lstlisting}
/* This first comment begins and ends on the same line. */

/* A second comment starts on this line ...
   and goes on ...
   and this is the last line of the second comment.
*/

/* A third comment starts on this line ...
 /* This is NOT a fourth comment. It is just
    part of the third comment.
    And this is the last line of the third comment.
 */
*/ This is an error because it is an unmatched end marker.
\end{lstlisting}
\end{jjjlisting}

\noindent As you can see from this example, it is impossible to begin a new comment inside an already-started comment because all text inside the first comment, including {\tt /*}, is ignored by the compiler.

\JavaRule{Comments.}{Any text contained within {\tt /*} and {\tt */}, which may span several lines, is considered a comment and is ignored by the compiler. Inserting double slashes (//) into a line turns the rest of the line into a comment.}

Multiline comments are often used to create a {\bf comment block} that provides useful documentation for the program. In {\tt HelloWorld}, the program begins with a comment block that identifies the name of the file that contains the program and its author, and provides a brief description of what the program does. For single-line comments, double slashes (//) can be inserted anywhere on a line of code. The result is that the rest of the line is ignored by the compiler. We use single-line comments throughout the {\tt HelloWorld} program to provide a running commentary of its language elements.
\marginnote{Single-line comment}

\JavaTIP[false]{PROGRAMMING TIP}{Use of Comments.}%
{A well-written program should begin with a comment block that provides the name of the program, its author, and a description of what the program does.}

\subsection{Program Layout}

Another thing to notice about the program is how neatly it is arranged on the page. This is done deliberately so that the program is easy to read and understand. In Java, program expressions and statements may be arranged any way the programmer likes. They may occur one per line, several per line, or one per several lines. But the fact that the rules governing the layout of the program are so lax makes it all the more important that we adopt a good programming style, one that will help make programs easy to read.

So look at how things are presented in {\tt HelloWorld}. Notice how beginning and ending braces, \{ and \}, are aligned, and note how we use single-line comments to annotate ending braces. Braces are used to mark the beginning and end of different blocks of code in a Java program, and it can sometimes be difficult to know which beginning and end braces are matched up. Proper indentation and the use of single-line comments make it easier to determine how the braces are matched up.

Similarly, notice how indentation is used to show when one element of the program is contained within another element. Thus, the elements of the {\tt HelloWorld} class are indented inside of the braces that mark the beginning and end of the class. And the statements in the {\tt main()} method are indented to indicate that they belong to that method. Use of indentation in this way, to identify the program's structure, makes the program easier to read and understand.

\JavaTIP{PROGRAMMING TIP}{Use of Indentation.}
{Indent the code within a block and align the block's opening and closing braces. Use a comment to mark the end of a block of code.}

\subsection{Keywords and Identifiers}
\label{subsec:keywords}

\noindent The Java language contains 48 predefined {\it keywords} (Table~\ref{tab:keywords}). These are words that have special meaning in the language and whose use is reserved for special purposes. For example, the keywords used in the HelloWorld program (Fig.~\ref{fig:helloworld}) are: {\tt class}, {\tt extends}, {\tt private}, {\tt public}, {\tt static}, and {\tt void}.

\begin{table}[htb]
%\hphantom{\caption{Java keywords}}
{\caption{Java keywords.\label{tab:keywords}}}
%\begin{tabular}{l}
{\color{cyan}\rule{27pc}{1pt}}\par\vspace{-10pt}
\begin{verbatim}
abstract   boolean    break      byte         case       catch
char       class      const      continue     default    do
double     else       extends    final        finally    float
for        goto       if         implements   import     instanceof
int        interface  long       native       new        package
private    protected  public     return       short      static
strictfp   super      switch     synchronized this       throw
throws     transient  try        void         volatile   while
\end{verbatim}
\par\vspace{-14pt}{\color{cyan}\rule{27pc}{1pt}}
%\end{tabular}
\endTB
\end{table}

Because their use is restricted, keywords cannot be used as the names of methods, variables, or classes. However, the programmer can make up his or her own names for the classes, methods, and variables that occur in the program, provided that certain rules and conventions are followed. The names for classes, methods, and variables are called identifiers, which follow certain syntax rules:

\JavaRule[false]{Identifier.}{An {\bf identifier} must begin with a capital or lowercase letter and may be followed by any number of letters, digits, underscores (\_), or dollar signs (\$). An identifier may not be identical to a Java keyword.}

\noindent Names in Java are {\it case sensitive}, which means that two different identifiers may contain the same letters in the same order. For example, {\tt thisVar} and {\tt ThisVar} are two different identifiers.
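To gather these rules in one place, the following sketch shows several legal identifiers and, in comments, two illegal ones that the compiler would reject. The names themselves are invented for this illustration.

\begin{jjjlisting}
\begin{lstlisting}
public class IdentifierExamples
{
  int itemCount;    // Legal: begins with a lowercase letter
  int item2Count;   // Legal: digits may follow the first letter
  int item_count;   // Legal: underscores are allowed after the first letter
  // int 2ndItem;   // Illegal: an identifier cannot begin with a digit
  // int class;     // Illegal: 'class' is a Java keyword
  int thisVar;      // Identifiers are case sensitive, so these are
  int ThisVar;      //   two different variables
} // IdentifierExamples
\end{lstlisting}
\end{jjjlisting}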
In addition to the syntax rule that governs identifiers, Java programmers follow certain style conventions in making up names for classes, variables, and methods. By convention, class names in Java begin with a capital letter and use capital letters to distinguish the individual words in the name---for example, {\tt HelloWorld}. Variable and method names begin with a lowercase letter but also use capital letters to distinguish the words in the name---for example, {\tt main()} and {\tt greet()}. The advantage of this convention is that it is easy to distinguish the different elements in a program---classes, methods, variables---just by how they are written. Another important style convention followed by Java programmers is to choose descriptive identifiers when naming classes, variables, and methods. This helps to make the program more readable.

\JavaTIP{PROGRAMMING TIP}{Choice of Identifiers.}%
{To make your program more readable, choose names that describe the purpose of the class, variable, or method.}

\subsection{Data Types and Variables}
\label{subsec:primitives}

\noindent A computer program wouldn't be very useful if it couldn't manipulate different kinds of data, such as numbers and strings. The operations that one can do on a piece of data depend on the data's type. For example, you can divide and multiply numbers, but you cannot do this with strings. Thus, every piece of data in a Java program is classified according to its {\bf data type}.

Broadly speaking, there are two categories of data in Java: various types of objects and eight different built-in {\bf primitive data types}. In addition to new types of objects that are created by programmers, Java has many different types of built-in objects. Two types that we will encounter in this chapter are the {\tt String} and {\tt PrintStream} objects. Java's primitive types include three \marginnote{Primitive types} integer types, three real number types, a character type, and a boolean type with values true and false. The names of the primitive types are keywords like {\tt int} for one integer type, {\tt double} for one real number type, and {\tt boolean}.

As we noted in Chapter~\ref{chapter-intro}, a variable is a named storage location that can store a value of a particular type. Practically speaking, you can think of a variable as a special container into which you can place values, but only values of a certain type (Fig.~\ref{fig:vars}). For example, an {\tt int} variable can store values like 5 or -100. A {\tt String} variable can store values like ``Hello''. (Actually, this is not the full story, which is a little more complicated, but we will get to that in Chapter~\ref{chapter-objects}.) In the {\tt HelloWorld} class, the instance variable {\tt greeting} \marginfig{chptr01/vars.eps}{Variables are like {\it typed} containers.} {fig:vars} (line 8) stores a value of type {\tt String}. In the {\tt main()} method, the variable {\tt helloworld} is assigned a {\tt HelloWorld} object (line 16).

A {\bf literal value} is an actual value of some type that occurs in a program. For example, a string enclosed in double quotes, such as "Hello, World!", is known as a {\tt String} literal. A number such as 45.2 would be an example of a literal of type {\tt double}, and -72 would be an example of a literal of type {\tt int}. Our HelloWorld program contains just a single literal value, the "Hello, World!" {\tt String}.

\subsection{Statements}
\label{subsec:statements}

A Java program is a collection of statements.
A {\bf statement} is a \marginnote{Executing a program} segment of code that takes some action in the program. As a program runs, we say it {\em executes} statements, meaning it carries out the actions specified by those statements. In our {\tt HelloWorld} program, statements of various types occur on lines 8, 11, 15, 16, and 17. Notice that all of these lines end with a semicolon. The rule in Java is that statements must end with a semicolon. Forgetting to do so would cause a syntax error.

A {\bf declaration statement} is a statement that declares a variable of a particular type. In Java, a variable must be declared before it can be used in a program. Failure to do so would cause a syntax error. In its simplest form, a declaration statement begins with the \marginnote{Declaration statement} variable's type, which is followed by the variable's name, and ends with a semicolon:

\begin{extract}
{\it Type VariableName} ;
\end{extract}

\noindent A variable's type is either one of the primitive types we mentioned, such as {\tt int}, {\tt double}, or {\tt boolean}, or, for objects, it is the name of the object's class, such as {\tt String} or {\tt HelloWorld}. A variable's name may be any legal identifier, as defined earlier, although the convention in Java is to begin variable names with a lowercase letter. In our {\tt HelloWorld} program, an example of a simple declaration statement occurs on line 15:

\begin{jjjlisting}
\begin{lstlisting}
HelloWorld helloworld;
\end{lstlisting}
\end{jjjlisting}

\noindent This example declares a variable for an object. The variable's name is {\tt helloworld} and its type is {\tt HelloWorld}, the name of the class that is being defined in our example. To take another example, the following statements declare two {\tt int} variables, named {\tt int1} and {\tt int2}:

\begin{jjjlisting}
\begin{lstlisting}
int int1;
int int2;
\end{lstlisting}
\end{jjjlisting}

\noindent As we noted, an {\tt int} is one of Java's primitive types and the word {\it int} is a Java keyword. Without going into too much detail at this point, declaring a variable causes the program to set aside enough memory for the type of data that will be stored in that variable. So in this example, Java would reserve enough space to store an {\tt int}.

An {\bf assignment statement} is a statement that stores (assigns) a value in a variable. An assignment statement uses the equal sign ($=$) as an assignment operator. In its simplest form, an assignment statement has a variable on the left hand side of the equals sign and some type of value on the right hand side. Like other statements, an assignment statement ends with a semicolon:

\begin{extract}
{\it VariableName} = {\it Value} ;
\end{extract}

\noindent When it executes an assignment statement, Java will first determine what value is given on the right hand side and then assign (store) that value to (in) the variable on the left hand side. Here are some simple examples:
\marginfig{chptr01/assign.eps}{This illustrates how the state of the variables {\tt num1} and {\tt num2} changes over the course of the three assignments, (a), (b), (c), given in the text.}
{fig:assign}

\begin{jjjlisting}
\begin{lstlisting}
greeting = "Hello, World!";
num1 = 50;       // (a) Assign 50 to num1
num2 = 10 + 15;  // (b) Assign 25 to num2
num1 = num2;     // (c) Copy num2's value (25) into num1
\end{lstlisting}
\end{jjjlisting}

\noindent In the first case, the value on the right hand side is the string literal "Hello, World!", which gets stored in {\tt greeting}.
Of course, {\tt greeting} has to be the right type of container---in this case, a {\tt String} variable. In the next case, the value on the right hand side is 50. So that is the value that gets stored in {\tt num1}, assuming that {\tt num1} is an {\tt int} variable. The situation after this assignment is shown in the top drawing in Figure~\ref{fig:assign}. In the third case, the value on the right hand side is 25, which is determined by adding 10 and 15. So the value that gets assigned to {\tt num2} is 25. After this assignment we have the situation shown in the middle drawing in the figure. Of course, this assumes that {\tt num2} is an {\tt int} variable. In the last case, the value on the right hand side is 25, the value that we just stored in the variable {\tt num2}. So, 25 gets stored in {\tt num1}. This is the bottom drawing in the accompanying figure. The last of these examples
\begin{jjjlisting}
\begin{lstlisting}
num1 = num2;      // Copy num2's value into num1
\end{lstlisting}
\end{jjjlisting}
\noindent can be confusing to beginning programmers, so it is worth some additional comment. In this case, there are variables on both the left and right of the assignment operator. But they have very different meanings. The variable on the right is treated as a value. If that variable is storing 25, then that is its value. In fact, whatever occurs on the right hand side of an assignment operator is treated as a value. The variable on the left hand side is treated as a memory location. It is where the value 25 will be stored as a result of executing this statement. The effect of this statement is to copy the value stored in {\it num2} into {\it num1}, as illustrated \marginfig{chptr01/assign2.eps}{In the assignment {\it num1 = num2;}, {\it num2}'s value is copied into {\it num1}.} {fig:assign2} in Figure~\ref{fig:assign2}. Java has many other kinds of statements and we will be learning about these in subsequent examples. The following lines from the :code:`HelloWorld` program are examples of statements in which a method is called:
\begin{jjjlisting}
\begin{lstlisting}
System.out.println(greeting);  // Call println() method
helloworld.greet();            // Call greet() method
\end{lstlisting}
\end{jjjlisting}
\noindent We will discuss these kinds of statements in greater detail as we go along. One final type of statement that should be mentioned at this point is the {\bf compound statement} (or {\bf block}), which is a sequence of statements contained within braces (\{\}). We see three examples of this in the :code:`HelloWorld` program. The body of the class definition is a block that extends from lines 7 through 19. The body of the :code:`greet()` method is a block that extends from lines 10 through 12. The body of the :code:`main()` method is a block that extends from lines 14 through 19.
\subsection{Expressions and Operators}
\label{subsec:expressions}
\noindent The manipulation of data in a program is done by using some kind of {\em expression} that specifies the action. An {\bf expression} is Java code that specifies or produces a value in the program. For example, if you want to add two numbers, you would use an arithmetic expression, such as $num1 + num2$. If you want to compare two numbers, you would use a relational expression such as $num1 < num2$. As you can see, these and many other expressions in Java involve the use of special symbols called {\bf operators}. Here we see the addition operator ($+$) and the less-than operator ($<$). We have already talked about the assignment operator ($=$).
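To pull these ideas together, here is a small, self-contained example that combines declaration statements, assignment statements, and arithmetic and relational expressions. It is not part of the :code:`HelloWorld` program; the class name {\tt ExpressionDemo} and its variable names are ours, chosen just for illustration.
\begin{jjjlisting}
\begin{lstlisting}
public class ExpressionDemo
{
  public static void main(String args[])
  {
    int num1;             // Declaration statements
    int num2;
    num1 = 10 + 15;       // Arithmetic expression; stores 25 in num1
    num2 = num1;          // Copies num1's value (25) into num2
    System.out.println(num1 + num2);  // Prints 50
    System.out.println(num1 < num2);  // Relational expression; prints false
  }
}
\end{lstlisting}
\end{jjjlisting}
\noindent Compiling and running this class would print {\tt 50} and then {\tt false} on the console.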
Java expressions and operators have a type that depends on the type of data that is being manipulated. For example, when adding two {\tt int} values, such as $5 + 10$, the expression itself produces an {\tt int} result. When comparing two numbers with the less than operator, $num1 < num2$, the expression itself produces a {\tt boolean} type, either true or false. It is important to note that expressions cannot occur on their own. Rather they occur as part of the program's statements. Here are some additional examples of expressions: \begin{jjjlisting} \begin{lstlisting} num = 7 // An assignment expression of type int num = square(7) // An method call expression of type int num == 7 // An equality expression of type boolean \end{lstlisting} \end{jjjlisting} \noindent The first of these is an assignment expression. It has a value of {\tt 7}, because it is assigning {\tt 7} to {\tt num}. The second example is also an assignment expression, but this one has a method call, {\tt square(7)}, on its right hand side. (We can assume that a method named {\tt square()} has been appropriately defined in the program.) A method call is just another kind of expression. In this case, it has the value 49. Note that an assignment expression can be turned into a stand-alone assignment statement by placing a semicolon after it. The third expression is an equality expression, which has the value {\tt true}, assuming that the variable on its left is storing the value 7. It is important to note the difference between the assignment operator ($=$) and the equality operator ($==$). \JavaRule{Equality and Assignment.} {Be careful not to confuse {\tt =} and {\tt ==}. The symbol {\tt =} is the assignment operator. It assigns the value on its right-hand side to the variable on its left-hand side. The symbol {\tt ==} is the equality operator. It evaluates whether the expressions on its left- and right-hand sides have the same value and returns either {\tt true} or {\tt false}.} \secEXRHone{Self-Study Exercises} \begin{SSTUDY} \item What is stored in the variable {\tt num} after the following two statements are executed? \small \begin{verbatim} int num = 11; num = 23 - num; \end{verbatim} \normalsize \item Write a statement that will declare a variable of type {\tt int} called {\tt num2}, and store in it the sum of 711 and 712. \end{SSTUDY} \subsection{Class Definition} \noindent A Java program consists of one or more class definitions. In the :code:`HelloWorld` example, we are defining the :code:`HelloWorld` class, but there are also three predefined classes involved in the program. These are the {\tt Object}, {\tt String}, and {\tt System} classes all of which are defined in the Java class library. Predefined classes, such as these, can be used in any program. As the :code:`HelloWorld` program's comments indicate, a class definition \marginnote{Class header} has two parts: a {\it class header} and a {\it class body}. 
In general, a class header takes the following form, some parts of which are optional ({\em opt}): $$ \hbox{\it ClassModifiers}_{\hbox{\scriptsize\it opt}}\quad \hbox{\tt class}\quad \hbox{\it ClassName}\quad \hbox{\it Pedigree}_{\hbox{\scriptsize\it opt}} $$ \noindent The class header for the :code:`HelloWorld` class is:
\begin{jjjlisting}
\begin{lstlisting}
public class HelloWorld extends Object
\end{lstlisting}
\end{jjjlisting}
\noindent The purpose of the header is to give the class its name (:code:`HelloWorld`), identify its accessibility ({\tt public} as opposed to {\tt private}), and describe where it fits into the Java class hierarchy (as an extension of the {\tt Object} class). In this case, the header begins with the optional access modifier, {\tt public}, which declares that this class can be accessed by any other class. The next part of the declaration identifies the name of the class, :code:`HelloWorld`. And the last part declares that :code:`HelloWorld` is a subclass of the {\tt Object} class. We call this part of the definition the class's pedigree. As you recall from Chapter~\ref{chapter-intro}, the {\tt Object} class is the top class of the entire Java hierarchy. By declaring that {\tt HelloWorld extends Object}, we are saying that :code:`HelloWorld` is a direct {\em subclass} of {\tt Object}. In fact, it is not necessary to declare explicitly that :code:`HelloWorld` extends {\tt Object} because that is Java's default assumption. That is, if you omit the {\tt extends} clause in the class header, Java will automatically assume that the class is a subclass of {\tt Object}. The class's body, which is enclosed within curly brackets (\{\}), \marginnote{Class body} contains the declaration and definition of the elements that make up the objects of the class. This is where the object's attributes and actions are defined.
\subsection{Declaring an Instance Variable}
\label{subsec:vardecl}
There are generally two kinds of elements declared and defined in the class body: variables and methods. As we described in Chapter~\ref{chapter-intro}, an instance variable is a variable that belongs to each object, or instance, of the class. That is, each instance of a class has its own copies of the class's instance variables. The :code:`HelloWorld` class has a single instance variable, {\tt greeting}, which is declared as follows:
\begin{jjjlisting}
\begin{lstlisting}
private String greeting = "Hello, World!";
\end{lstlisting}
\end{jjjlisting}
\noindent In general, an instance variable declaration has the following syntax, some parts of which are optional: $$ \hbox{\it Modifiers}_{\hbox{\scriptsize\it opt}}\quad \hbox{\it Type}\quad \hbox{\it VariableName}\quad \hbox{\it InitializerExpression}_{\hbox{\scriptsize\it opt}} $$ \noindent Thus, a variable declaration begins with optional modifiers. In declaring the {\tt greeting} variable, we use the access modifier, {\tt private}, to declare that {\tt greeting}, which belongs to the :code:`HelloWorld` class, cannot be directly accessed by other objects. The next part of the declaration is the variable's type. In \marginnote{Information hiding} this case, the {\tt greeting} variable is a {\tt String}, which means that it can store a string object. The type is followed by the name of the variable, in this case {\tt greeting}. This is the name that is used to refer to this memory location throughout the class. For example, notice that the variable is referred to on line 11 where it is used in a {\tt println()} statement.
The last part of the declaration is an optional initializer expression. In this example, we use it to assign an initial value, ``Hello, World!,'' to the {\tt greeting} variable. \subsection{Defining an Instance Method} Recall that a method is a named section of code that can be called or invoked to carry out an action or operation. In a Java class, the methods correspond to the object's behaviors or actions. The :code:`HelloWorld` program has two method definitions: the :code:`greet()` method and the :code:`main()` method. A method definition consists of two parts: the method header and the method body. In general, a method header takes the following form, including some parts which are optional: $$ \hbox{\it Modifiers}_{\hbox{\scriptsize\it opt}}\quad \hbox{\it ReturnType}\quad \hbox{\it MethodName}\quad \hbox{\tt (}\quad \hbox{\it ParameterList}_{\hbox{\scriptsize\it opt}} \hbox{\tt )}\quad $$ \noindent As with a variable declaration, a method definition begins with optional modifiers. For example, the definition of the :code:`greet()` method on line 9 uses the access modifier, {\tt public}, to declare that this method can be accessed or referred to by other classes. The :code:`main()` method, whose definition begins on line 13, is a special method, and is explained in the next section. The next part of the method header is the method's return type. This is the type of value, if any, that the method returns. Both of the methods in :code:`HelloWorld` have a return type of {\tt void}. This means that they don't return any kind of value. Void methods just execute the sequence of statements given in their bodies. For an example of a method that does return a value, take a look again at the declaration of the {\tt getQuestion()} method in the {\tt Riddle} class, which returns a {\tt String} (Fig.~\ref{fig:riddleclass}). The method's name follows the method's return type. This is the name that is used when the method is called. For example, the :code:`greet()` method is called on line 17. Following the method's name is the method's parameter list. A {\bf parameter} is a variable that temporarily stores data values that are being passed to the method when the method is called. Some methods, such as the :code:`greet()` method, do not have parameters, because they are not passed any information. For an example of a method that does have parameters, see the {\tt Riddle()} constructor, which contains parameters for the riddle's question and answer (Fig.~\ref{fig:riddleclass}). The last part of method definition is its body, which contains a sequence of executable statements. An {\bf executable statement} is a Java statement that takes some kind of action when the program is run. For example, the statement in the :code:`greet()` method, \begin{jjjlisting} \begin{lstlisting} System.out.println(greeting); // Output statement \end{lstlisting} \end{jjjlisting} \noindent prints a greeting on the console. \subsection{Java Application Programs} The HelloWorld program is an example of a Java {\bf application program}, or a Java application, for short. An application program is a stand-alone program, ``stand-alone'' in the sense that it does not depend on any other program, like a Web browser, for its execution. Every Java application program must contain a :code:`main()` method, which is where the program begins execution when it is run. For a program that contains several classes, it is up to the programmer to decide which class should contain the :code:`main()` method. 
We don't have to worry about that decision for the HelloWorld, because it contains just a single class. Because of its unique role as the starting point for every Java application program, it is very important that the header for the main method be declared exactly as shown in the :code:`HelloWorld` class: \begin{jjjlisting} \begin{lstlisting} public static void main(String args[]) \end{lstlisting} \end{jjjlisting} \noindent It must be declared {\tt public} so it can be accessed from outside the class that contains it. The {\tt static} modifier \marginnote{Class method} is used to designate :code:`main()` as a class method. As you might recall from Chapter 0, a class method is a method that is associated directly with the class that contains it rather than with the objects of the class. A class method is not part of the class's objects. Unlike instance methods, which are invoked through a class's objects, a class method is called through the class itself. Thus, a class method can be called even before the program has created objects of that class. Because of :code:`main()`'s special role as the program's starting point, it is necessary for :code:`main()` to be a class method because it is called, by the Java runtime system, before the program has created any objects. The :code:`main()` method has a {\tt void} return type, which means it does not return any kind of value. Finally, notice that :code:`main()`'s parameter list contains a declaration of some kind of {\tt String} parameter named {\it args}. This is actually an array that can be used to pass string arguments to the program when it is started up. We won't worry about this feature until our chapter on arrays. \subsection{Creating and Using Objects} \noindent The body of the :code:`main()` method is where the :code:`HelloWorld` program creates its one and only object. Recall that when it is run the :code:`HelloWorld` program just prints the ``Hello World!'' greeting. As we noted earlier, this action happens in the :code:`greet()` method. So in order to make this action happen, we need to call the :code:`greet()` method. However, because the :code:`greet()` method is an instance method that belongs to a :code:`HelloWorld` object, we first need to create a :code:`HelloWorld` instance. This is what happens in the body of the :code:`main()` method (Fig.~\ref{fig:helloworld}). The :code:`main()` method contains three statements: \begin{jjjlisting} \begin{lstlisting} HelloWorld helloworld; // Variable declaration helloworld = new HelloWorld(); // Object instantiation helloworld.greet(); // Method invocation \end{lstlisting} \end{jjjlisting} \noindent The first statement declares a variable of type :code:`HelloWorld`, which is then assigned a :code:`HelloWorld` object. The second statement creates a :code:`HelloWorld` object. This is done by invoking the {\tt HelloWorld()} constructor method. Creating an object is called {\bf object instantiation} because you are creating an instance of the object. Once a :code:`HelloWorld` instance is created, we can use one of its instance methods to perform some task or operation. Thus, in the third statement, we call the :code:`greet()` method, which will print ``Hello World!'' on the console. If you look back at the :code:`HelloWorld` program in Figure~\ref{fig:helloworld} you won't find a definition of a \marginnote{Default constructor} constructor method. This is not an error because Java will provide a default constructor if a class does not contain a constructor definition. 
The {\bf default constructor} is a trivial constructor method, ``trivial'' because its body contains no statements. Here is what the default {\tt HelloWorld()} constructor would look like:
\begin{jjjlisting}
\begin{lstlisting}
public HelloWorld() { }   // Default constructor
\end{lstlisting}
\end{jjjlisting}
\noindent For most of the classes we design, we will design our own constructors, just as we did in the {\tt Riddle} class (Fig.~\ref{fig:riddleclass}). We will use constructors to assign initial values to an object's instance variables or to perform other kinds of tasks that are needed when an object is created. Because the :code:`HelloWorld` object doesn't require any startup tasks, we can make do with the default constructor.

The :code:`HelloWorld` program illustrates the idea that an \marginnote{Interacting objects} object-oriented program is a collection of interacting objects. Although we create just a single :code:`HelloWorld` object in the {\tt main()} method, there are two other objects used in the program. One is {\tt greeting}, which is a {\tt String} object consisting of the string ``Hello, World!''. The other is the {\tt System.out} object, which is a special Java system object used for printing.
\subsection{Java JFrames}
Java can run a program in a {\bf JFrame} so that the output and interaction occur in a window (or frame). Figure~\ref{fig:hellojframe} shows a Java program named {\tt HelloWorldSwing}. This program does more or less the same thing as
%% proglist ch1/hellojframe/HelloWorldSwing.java
\begin{figure}[h!]
\jjjprogstart
\begin{jjjlisting}
\begin{lstlisting}
/** File: HelloWorldSwing program */

import javax.swing.JFrame;    // Import class names
import java.awt.Graphics;
import java.awt.Canvas;

public class HelloWorldCanvas extends Canvas  // Class header
{                                             // Start of body
  public void paint(Graphics g)               // The paint method
  {
    g.drawString("Hello, World!", 10, 10);
  } // End of paint

  public static void main(String[] args){
    HelloWorldCanvas c = new HelloWorldCanvas();
    JFrame f = new JFrame();
    f.add(c);
    f.setSize(150,50);
    f.setVisible(true);
  }
} // End of HelloWorldCanvas
\end{lstlisting}
\end{jjjlisting}
\jjjprogstop{{\tt Hello\-World\-Canvas} program.} {fig:hellojframe}
\end{figure}
the :code:`HelloWorld` application---it displays the ``Hello, World!'' greeting. The difference is that it displays the greeting within a window rather than directly on the console. As in the case of the :code:`HelloWorld` console application program, {\tt Hello\-World\-Canvas} consists of a class definition. It contains two method definitions, {\tt paint()} and {\tt main()}. The {\tt paint()} method contains a single executable statement:
\begin{jjjlisting}
\begin{lstlisting}
g.drawString("Hello, World!",10,10);
\end{lstlisting}
\end{jjjlisting}
\noindent This statement displays the ``Hello, World!'' message directly in a window. The {\tt drawString()} method is one of the many drawing and painting methods defined in the {\tt Graphics} class. Every Java Canvas comes with its own {\tt Graphics} object, which is referred to here simply as {\tt g}. Thus, we are using that object's {\tt drawString()} method to draw on the window. Don't worry if this seems a bit mysterious now. We'll explain it more fully when we take up graphics examples again.

The {\tt HelloWorldSwing} program also contains some elements, such as the {\tt import} statements, that we did not find in the :code:`HelloWorld` application. We will now discuss those features.
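Before turning to those features, one practical note about running this example: depending on your environment, closing the window may not terminate the program (which is why, as we will see later in this chapter, you may need to type Ctrl-C to stop it). One way to make the program exit when its window is closed is to call the frame's {\tt setDefaultCloseOperation()} method, a standard {\tt JFrame} method. The following sketch shows how the body of {\tt main()} in Figure~\ref{fig:hellojframe} might be modified; the added line is our suggestion and is not part of the original listing.
\begin{jjjlisting}
\begin{lstlisting}
HelloWorldCanvas c = new HelloWorldCanvas();
JFrame f = new JFrame();
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // Exit when the window is closed
f.add(c);
f.setSize(150,50);
f.setVisible(true);
\end{lstlisting}
\end{jjjlisting}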
\subsection{Java Library Packages}
Recall that the :code:`HelloWorld` application program used two pre-defined classes, the {\tt String} and the {\tt System} classes. Both of these classes are basic language classes in Java. The {\tt HelloWorldSwing} program also uses pre-defined classes, such as {\tt JFrame} and {\tt Graphics}. However, these two classes are not part of Java's basic language classes. To understand the difference between these classes, it will be necessary to talk briefly about how the Java class library is organized.

A {\bf package} is a collection of interrelated classes in the Java class library. For example, the {\tt java.lang} package contains classes, such as {\tt Object}, {\tt String}, and {\tt System}, that are central to the Java language. Just about all Java programs use classes in this package. The {\tt java.awt} package provides classes, such as {\tt Button}, {\tt TextField}, and {\tt Graphics}, that are used in graphical user interfaces (GUIs). The {\tt java.net} package provides classes used for networking tasks, and the {\tt java.io} package provides classes used for input and output operations.

All Java classes belong to some package, including those that are programmer defined. To assign a class to a package, you would provide a {\tt package} statement as the first statement in the file that contains the class definition. For example, the files containing the definitions of the classes in the {\tt java.lang} package all begin with the following statement:
\begin{jjjlisting}
\begin{lstlisting}
package java.lang;
\end{lstlisting}
\end{jjjlisting}
\noindent If you omit the {\tt package} statement, as we do for the programs in this book, Java places such classes into an unnamed default package. Thus, for any Java class, its full name includes the name of the package that contains it. For example, the full name for the {\tt System} class is {\tt java.lang.System} and the full name for the {\tt String} class is {\tt java.lang.String}. Similarly, the full name for the {\tt Graphics} class is {\tt java.awt.Graphics}. In short, the full name for a Java class takes the following form: $$ \hbox{\it package.class}\quad $$ \noindent In other words, the full name of any class provides its package name as a prefix. Of all the packages in the Java library, the {\tt java.lang} package is the only one whose classes are available by their shorthand names to all Java programs. This means that when a program uses a class from the {\tt java.lang} package, it can refer to it simply by its class name. For example, in the :code:`HelloWorld` program we referred directly to the {\tt String} class rather than to {\tt java.lang.String}.
\subsection{The {\tt import} Statement}
The {\tt import} statement makes Java classes available to programs under their abbreviated names. Any public class in the Java class library is available to a program by its fully qualified name. Thus, if a program were using the {\tt Graphics} class, it could always refer to it as {\tt java.awt.Graphics}. However, being able to refer to {\tt Graphics} by its shorthand name makes the program a bit shorter and more readable. The {\tt import} statement doesn't actually load classes into the program. It just makes their abbreviated names available. For example, the import statements in {\tt HelloWorldSwing} allow us to refer to the {\tt JFrame}, {\tt Canvas}, and {\tt Graphics} classes by their abbreviated names (Fig.~\ref{fig:hellojframe}).
The {\tt import} statement takes two possible forms: $$ \hbox{\tt import }\quad \hbox{\it package.class}\quad $$ $$ \hbox{\tt import }\quad \hbox{\it package.*}\quad\\ $$ \noindent The first form allows a specific class to be known by its abbreviated name. The second form, which uses the asterisk as a wildcard characters ('*'), allows all the classes in the specified package to be known by their short names. The {\tt import} statements in {\tt HelloWorldSwing} are examples of the first form. The following example, \begin{jjjlisting} \begin{lstlisting} import java.lang.*; \end{lstlisting} \end{jjjlisting} \noindent allows all classes in the {\tt java.lang} package to be referred to by their class names alone. In fact, this particular {\tt import} statement is implicit in every Java program. \subsection{Qualified Names in Java} \label{subsec:qualifiednames} \noindent In the previous subsections we have seen several examples of names in Java programs that used {\it dot notation}. A {\bf qualified name} is a name that is separated into parts using Java's dot notation. Examples include package names, such as {\tt java.awt}, class names, such as {\tt javax.swing.JFrame}, and even method names, such as {\tt helloworld.greet()}. Just as in our natural language, the meaning of a name within a Java program depends on the context. For example, the expression {\tt helloworld.greet()} refers to the :code:`greet()` method, which belongs to the :code:`HelloWorld` class. If we were using this expression from within that class, you wouldn't need to qualify the name in this way. You could just refer to :code:`greet()` and it would be clear from the context which method you meant. This is no different than using someone's first name (``Kim'') when there's only one Kim around, but using a full name (``Kim Smith'') when the first name alone would be too vague or ambiguous. One thing that complicates the use of qualified names is that they are used to refer to different kinds of things within a Java program. But this is no different, really, than in our natural language, where names (``George Washington'') can refer to people, bridges, universities, and so on. Here again, just as in our natural language, Java uses the context to understand the meaning of the name. For example, the expression {\tt java.lang.System} refers to the {\tt System} class in the {\tt java.lang} package, whereas the expression {\tt System.out.print()} refers to a method in the {\tt System.out} object. How can you tell these apart? Java can tell them apart because the first one occurs as part of an {\tt import} statement, so it must be referring to something that belongs to a package. The second expression would only be valid in a context where a method invocation is allowed. You will have to learn a bit more about the Java language before you'll be able to completely understand these names, but the following provide some naming rules to get you started. \JavaRule{Library Class Names.}{By convention, class names in Java begin with an uppercase letter. When referenced as part of a package, the class name is the last part of the name. For example, {\tt java.lang.System} refers to the {\tt System} class in the {\tt java.lang} package.} \JavaRule{Dot Notation.}{Names expressed in Java's {\it dot notation} depend for their meaning on the context in which they are used. In qualified names---that is, names of the form X.Y.Z---the last item in the name (Z) is the {\it referent}---that is, the element being referred to. 
The items that precede it (X.Y.) are used to qualify or clarify the referent.} \noindent The fact that names are context dependent in this way certainly complicates the task of learning what's what in a Java program. Part of learning to use Java's built-in classes is learning where a particular object or method is defined. It is a syntax error if the Java compiler can't find the object or method that you are referencing. \JavaTIP{DEBUGGING TIP}{Not Found Error.}{If Java cannot find the item you are referring to, it will report an ``X not found'' error, where X is the class, method, variable, or package being referred to.} \section{Editing, Compiling, and Running a Java Program} \noindent In this section we discuss the nuts and bolts of how to compile and run a Java program. Because we are exploring two different varieties of Java programs, console applications and Swing applications, the process differs slightly for each variety. We have already discussed some of the main language features of console and Swing applications, so in this section we focus more on features of the programming environment itself. Because we do not assume any particular programming environment in this book, our discussion will be somewhat generic. However, we do begin with a brief overview of the types of programming environments one might encounter. \subsection{Java Development Environments} \noindent A Java programming environment typically consists of several programs that perform different tasks required to edit, compile, and run a Java program. The following description will be based on the software development environment provided by Oracle, the company that owns and maintains Java. It is currently known as the {\it Java Platform, Standard Edition 8.0 (Java SE 8)}. Versions of Java SE are available for various platforms, including Linux, Windows, and macOS computers. Free downloads are available at Sun's Web site at {\tt http://www.oracle.com/technetwork/java/}. (For more details about the Java SE, see Appendix~\ref{appendix-jdk}.) In some cases, the individual programs that make up the Java SE are available in a single program development environment, known as an {\it integrated development environment (IDE)}. Some examples include Eclipse, jGrasp, and Oracle's own NetBeans IDE. Each of these provides a complete development package for editing, compiling, and running Java applications on a variety of platforms, including Linux, macOS, and Windows. Figure~\ref{fig:compile} illustrates the process involved in creating and running a Java program. The discussion that follows here assumes \begin{figure}[tb] \figaleft{chptr01/compile.eps}{Editing, compiling, and running %%%\figa{chptr01/compile.eps}{Editing, compiling, and running {\tt HelloWorld.java}. } {fig:compile} \end{figure} that you are using the Java SE as your development environment to edit, compile and run the example program. If you are using some other environment, you will need to read the documentation provided with the software to determine exactly how to edit, compile, and run Java programs in that environment. %\begin{SLlist} %\item {\bf Step 1. Editing a Program} \subsection{Editing a Program} \noindent Any text editor may be used to edit the program by merely typing the program and making corrections as needed. Popular Unix and Linux editors include {\tt vim} and {\tt emacs}. These editors are also available on macOS and Windows. 
Other free editors include {\tt TextMate} and {\tt TextWrangler} on macOS and {\tt Notepad++} on Windows. As we have seen, a Java program consists of one or more class definitions. We will follow the convention of placing each class definition in its own file. (The rule in Java is that a source file may contain only one {\tt public} class definition.) The files containing these classes' definitions must be named {\it ClassName.java} where {\it ClassName} is the name of the {\tt public} Java class contained in the file.
\JavaRule[false]{File Names.}{A file that defines a {\tt public} Java class named {\tt ClassName} must be saved in a text file named {\tt ClassName.java}. Otherwise an error will result.}
\noindent For example, in the case of our :code:`HelloWorld` application program, the file must be named {\tt HelloWorld.java}, and for {\tt HelloWorldSwing}, it must be named {\tt HelloWorldSwing.java}. Because Java is {\em case sensitive}, which means that Java pays attention to whether a letter is typed uppercase or lowercase, it would be an error if the file containing the :code:`HelloWorld` class were named {\tt helloworld.java} or {\tt Helloworld.java}. The error in this case would be a semantic error. Java would not be able to find the :code:`HelloWorld` class because it would be looking for a file named {\tt HelloWorld.java}.
\JavaRule{Case Sensitivity.} {Java is case sensitive, which means that it treats {\tt helloworld} and {\tt HelloWorld} as different names.}
\subsection{Compiling a Program}
\noindent Recall that before you can run a Java source program you have to compile it into the Java bytecode, the intermediate code understood by the Java Virtual Machine (JVM). Source code for both applets and applications must be compiled. To run a Java program, whether an applet or an application, the JVM is then used to interpret and execute the bytecode. The Java SE comes in two parts: a runtime program, called the {\it Java Runtime Environment (JRE)}, and a development package, called the {\em Software Development Kit (SDK)}. If you are just going to run Java programs, you need only install the JRE on your computer. In order to run Java applets, browsers, such as Internet Explorer and Netscape Navigator, must contain a plugin version of the JRE. On the other hand, if you are going to be developing Java programs, you will need to install the SDK as well. The Java SDK compiler is named {\tt javac}. In some environments---such as within Linux or at the Windows command prompt---{\tt HelloWorld.java} would be compiled by typing the following command at the system prompt:
\begin{jjjlisting}
\begin{lstlisting}
javac HelloWorld.java
\end{lstlisting}
\end{jjjlisting}
\noindent As Figure~\ref{fig:compile} illustrates, if the {\tt HelloWorld.java} program does not contain errors, the result of this command is the creation of a Java bytecode file named {\tt HelloWorld.class}---a file that has the same prefix as the source file but with the suffix {\tt .class} rather than {\tt .java}. By default, the bytecode file will be placed in the same directory as the source file. If {\tt javac} detects errors in the Java code, a list of error messages will be printed.
\subsection{Running a Java Application Program}
\noindent In order to run (or execute) a program on any computer, the program's {\it executable code} must be loaded into the computer's main memory.
For Java environments, this means that the program's {\tt .class} file must be loaded into the computer's memory, where it is then interpreted by the Java Virtual Machine. To run a Java program on Linux systems or at the Windows command prompt, type
\begin{jjjlisting}
\begin{lstlisting}
java HelloWorld
\end{lstlisting}
\end{jjjlisting}
\noindent on the command line. This command loads the JVM, which will then load and interpret the application's bytecode ({\tt HelloWorld.class}). The ``Hello, World!'' greeting will then be displayed on the command line. Within an IDE, which typically does not require you to use a command-line interface, you would select the compile and run commands from a menu. Once the code is compiled, the run command will cause the JVM to be loaded and the bytecode to be interpreted. The ``Hello, World!'' output would appear in a text-based window that automatically pops up on your computer screen. In any case, regardless of the system you use, running the :code:`HelloWorld` console application program will cause the ``Hello, World!'' message to be displayed on some kind of standard output device (Fig.~\ref{fig:stdout}). \marginfigscaled{chptr01/1f4.png}{0.5}{Compiling and Running the {\tt HelloWorld.java} console application program.} {fig:stdout}
\subsection{Running a Java Swing Program}
\label{subsec:swing}
When you run a Java Swing program, there is typically no console output; you see the output only in the window (JFrame) in which your graphics are displayed. This makes automated testing more difficult, since you need to visually inspect that the program is working correctly. When you run
\begin{jjjlisting}
\begin{lstlisting}
java HelloWorldSwing
\end{lstlisting}
\end{jjjlisting}
\noindent a window will open, and you won't be able to type in the console until you close the window, quit the program, or type Ctrl-C to send a kill signal to the Swing program. The result of running the program, as shown in Figure~\ref{fig:hello}, is that the ``Hello, World!'' message is displayed within its own window. \vspace*{2pc}
\section{From the Java Library: System and \\PrintStream}
\label{sec:systemclass}
\WWWjava Java comes with a library of classes that can be used to perform common tasks. The Java class library is organized into a set of packages, where each package contains a collection of related classes. Throughout the book we will identify library classes and explain how to use them. In this section we introduce the {\tt System} and {\tt PrintStream} classes, which are used for printing a program's output. Java programs need to be able to accept input and to display output. Deciding how a program will handle input and output (I/O) is part of designing its {\em user interface}, a topic we take up in detail in Chapter 4. The simplest type of user interface is a {\it command-line interface}, in which input is taken from the command line through the keyboard, and output is displayed on the console. Some Java applications use this type of interface. Another type of user interface is a {\it Graphical User Interface (GUI)}, which uses buttons, text fields, and other graphical components for input and output. Java applets use GUIs, as do many Java applications. Because we want to be able to write programs that generate output, this \marginfig{chptr01/1f6.png}{Running {\tt HelloWorldSwing.java} graphical program.} {fig:hello} section describes how Java handles simple console output. In Java, any source or destination for I/O is considered a {\it stream} of bytes or characters.
To perform output, we insert bytes or characters into the stream. To perform input, we extract bytes or characters from the stream. Even characters entered at a keyboard, if considered as a sequence of keystrokes, can be represented as a stream. There are no I/O statements in the Java language. Instead, I/O is handled through methods that belong to classes contained in the {\tt java.io} package\index{java.io package}. We have already seen how the output method {\tt println()} is used to output a string to the console. For example, the following {\tt println()} statement \begin{jjjlisting} \begin{lstlisting} System.out.println("Hello, World"); \end{lstlisting} \end{jjjlisting} \noindent prints the message ``Hello, World'' on the Java console. Let's now examine this statement more carefully to see how it makes use of the Java I/O classes. The {\tt java.io.PrintStream} class is Java's printing expert, so to speak. It contains a variety of {\tt print()} and {\tt println()} methods that can be used to print all of the various types of data we find in a Java program. A partial definition of {\tt PrintStream} is shown in Figure~\ref{fig:printstreamUML}. Note %\begin{figure}[tb] %\begin{graphic} %%\marginfig{CHPTR01:printstreamUML.eps} \marginfig{chptr01/printstr.eps} %\begin{fig} {A UML class diagram of the {\tt PrintStream} class.} {fig:printstreamUML} %\figa %\end{fig} %} %\end{graphic} %\end{figure} that in this case the {\tt PrintStream} class has no attributes, just operations or methods. Because the various {\tt print()} and {\tt println()} methods are instance methods of a {\tt PrintStream} object, we can only use them by finding a {\tt PrintStream} object and ``telling'' it to print data for us. As shown in Figure~1.15, Java's {\tt java.lang.System} class contains three predefined streams, including two {\tt PrintStream} objects. This class has public ($+$) attributes. None of its public methods are shown here. Both the {\tt System.out} and {\tt System.err} objects can be used to write output to the console. As its name suggests, the {\tt err} stream is used primarily for error messages, whereas the {\tt out} stream is used for other printed output. Similarly, as its name suggests, the {\tt System.in} object can be used to handle input, which will be covered in Chapter~2. The only difference between the {\tt print()} and {\tt println()} methods is that {\tt println()} will also print a carriage return and line feed after printing its data, thereby allowing subsequent output to be printed on a new line. For example, the following statements \begin{jjjlisting} \begin{lstlisting} System.out.print("hello"); System.out.println("hello again"); System.out.println("goodbye"); \end{lstlisting} \end{jjjlisting} \noindent would produce the following output: \begin{jjjlisting} \begin{lstlisting} hellohello again goodbye \end{lstlisting} \end{jjjlisting} %\begin{figure} %\begin{graphic} %%\marginfig{CHPTR01:systemUML.eps}% \marginfig{chptr01/systemum.eps}% %\begin{fig} {The {\tt System} class.} {fig:systemUML} %\figa %\end{fig} %} %\end{graphic} %\end{figure} \noindent Now that we know how to use Java's printing expert, let's use it to ``sing'' a version of ``Old MacDonald Had a Farm.'' As you might guess, this program will simply consist of a sequence of {\tt System.out.println()} statements each of which prints a line of the verse. The complete Java application program is shown in Figure~\ref{fig:oldmac}. 
\begin{figure}[h] \jjjprogstart \begin{jjjlisting} \begin{lstlisting} public class OldMacDonald { public static void main(String args[]) // Main method { System.out.println("Old MacDonald had a farm"); System.out.println("E I E I O."); System.out.println("And on his farm he had a duck."); System.out.println("E I E I O."); System.out.println("With a quack quack here."); System.out.println("And a quack quack there."); System.out.println("Here a quack, there a quack,"); System.out.println("Everywhere a quack quack."); System.out.println("Old MacDonald had a farm"); System.out.println("E I E I O."); } // End of main } // End of OldMacDonald \end{lstlisting} \end{jjjlisting} \jjjprogstop{The {\tt Old\-Mac\-Donald.java} class.} {fig:oldmac} \end{figure} This example illustrates the importance of using the Java class library. If there's a particular task we want to perform, one of the first things we should ask is whether there is already an ``expert'' in Java's class library that performs that task. If so, we can use methods provided by the expert to perform that particular task. \JavaTIP[false]{EFFECTIVE DESIGN}{Using the Java Library.} {Learning how to use classes and objects from the Java class library is an important part of object-oriented programming in Java.} \secEXRHone{Self-Study Exercises} \begin{SSTUDY} \marginnote{\small\tt **********\\ \mbox{*}\mbox{ }**\mbox{ }\mbox{ }**\mbox{ }*\\ \mbox{*}\mbox{ }\mbox{ }\mbox{ }**\mbox{ }\mbox{ }\mbox{ }*\\ \mbox{*}\mbox{ }*\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\mbox{ }*\\ \mbox{*}\mbox{ }\mbox{ }****\mbox{ }\mbox{ }*\\ \mbox{*}********* } \item One good way to learn how to write programs is to modify existing programs. Modify the {\tt OldMacDonald} class to ``sing'' one more verse of the song. \item Write a Java class that prints the design shown on the left. \end{SSTUDY} \secSMH{Chapter Summary} \secKTH{Technical Terms} \begin{KT} algorithm applet application program assignment statement comment compound statement (block) data type declaration statement default constructor executable statement expression identifier literal value object instantiation operator package parameter primitive data type pseudocode qualified name semantics statement stepwise refinement syntax \end{KT} \secSMHtwo{Summary of Important Points} \begin{BL} \item Good program design requires that each object and method have a well-defined role and clear definition of what information is needed for the task and what results will be produced. \item Good program design is important; the sooner you start coding, the longer the program will take to finish. Good program design strives for readability, clarity, and flexibility. \item Testing a program is very important and must be done with care, but it can only reveal the presence of bugs, not their absence. \item An algorithm is a step-by-step process that solves some problem. Algorithms are often described in pseudocode, a hybrid language that combines English and programming language constructs. \item A syntax error occurs when a statement breaks a Java syntax rules. Syntax errors are detected by the compiler. A semantic error is an error in the program's design and cannot be detected by the compiler. \item Writing Java code should follow the stepwise refinement process. \item Double slashes (//) are used to make a single-line comment. Comments that extend over several lines must begin with :code:`/*` and end with :code:`*/`. 
\item An {\it identifier} must begin with a letter of the alphabet and may consist of any number of letters, digits, and the special characters \_ and \$. An identifier cannot be identical to a Java keyword. Identifiers are case sensitive. \item A {\it keyword} is a term that has special meaning in the Java language (Table~1.1). \item Examples of Java's {\it primitive data types} include the {\tt int}, {\tt boolean}, and {\tt double} types. \item A variable is a named storage location. In Java, a variable must be declared before it can be used. \item A literal value is an actual value of some type, such as a {\tt String} ("Hello") or an {\tt int} (5). \item A declaration statement has the form: \hbox{\it Type} \hbox{\it VariableName}\ ; \item An assignment statement has the form:\hbox{\it VariableName} = \hbox{\it Expression}\ ; When it is executed it determines the value of the {\it Expression} on the right of the assignment operator ($=$) and stores the value in the variable named on the left. \item Java's operators are type dependent, where the type is dependent on the data being manipulated. When adding two {\tt int} values ($7 + 8$), the $+$ operation produces an {\tt int} result. \item A class definition has two parts: a class header and a class body. A class header takes the form of optional modifiers followed by the word {\tt class} followed by an identifier naming the class followed, optionally, by the keyword {\tt extends} and the name of the class's superclass. \item There are generally two kinds of elements declared and defined in the class body: variables and methods. \item Object instantiation is the process of creating an instance of a class using the {\tt new} operator in conjunction with one of the class's constructors. \item Dot notation takes the form {\it qualifiers.elementName}. The expression {\tt System.out.print("hello")} uses Java dot notation to invoke the {\tt print()} method of the {\tt System.out} object. \item A Java application program runs in stand-alone mode. A Java applet is a program that runs within the context of a Java-enabled browser. Java applets are identified in HTML documents by using the {\tt <applet>} tag. \item A Java source program must be stored in a file that has a {\tt .java} extension. A Java bytecode file has the same name as the source file but a {\tt .class} extension. It is an error in Java if the name of the source file is not identical to the name of the public Java class defined within the file. \item Java programs are first compiled into bytecode and then interpreted by the Java Virtual Machine (JVM). \end{BL} \pagebreak \secANSH% %%%\secANSH% \begin{ANS} \item The value 12 is stored in {\tt num}. 
\item {\tt int num2 = 711 + 712;} \item The definition of the {\tt OldMacDonald} class is: \begin{jjjlisting} \begin{lstlisting} public class OldMacDonald { public static void main(String args[]) // Main method { System.out.println("Old MacDonald had a farm"); System.out.println("E I E I O."); System.out.println("And on his farm he had a duck."); System.out.println("E I E I O."); System.out.println("With a quack quack here."); System.out.println("And a quack quack there."); System.out.println("Here a quack, there a quack,"); System.out.println("Everywhere a quack quack."); System.out.println("Old MacDonald had a farm"); System.out.println("E I E I O."); System.out.println("Old MacDonald had a farm"); System.out.println("E I E I O."); System.out.println("And on his farm he had a pig."); System.out.println("E I E I O."); System.out.println("With an oink oink here."); System.out.println("And an oink oink there."); System.out.println("Here an oink, there an oink,"); System.out.println("Everywhere an oink oink."); System.out.println("Old MacDonald had a farm"); System.out.println("E I E I O."); } // End of main } // End of OldMacDonald \end{lstlisting} \end{jjjlisting} %%Exercise 1.2 %% proglist ch1/ssx/pattern/Pattern.java \item The definition of the {\tt Pattern} class is: \begin{jjjlisting} \begin{lstlisting} public class Pattern { public static void main(String args[])// Main method { System.out.println("**********"); System.out.println("* ** ** *"); System.out.println("* ** *"); System.out.println("* * * *"); System.out.println("* **** *"); System.out.println("**********"); } // End of main } // End of Pattern \end{lstlisting} \end{jjjlisting} \end{ANS} \newpage %%\section{Exercises} \secEXRHtwoleft{Exercises} %%%\secEXRHtwo{Exercises} \begin{EXRtwo} \item Fill in the blanks in each of the following statements. \begin{EXRtwoLL} \baselineskip=14pt\item A Java class definition contains an object's \rule{20pt}{0.5pt} and \rule{20pt}{0.5pt}. \item A method definition contains two parts, a \rule{20pt}{0.5pt} and a \rule{20pt}{0.5pt}. \end{EXRtwoLL} \baselineskip=11pt %% 2 \item Explain the difference between each of the following pairs of concepts. \begin{EXRtwoLL} \item {\it Application} and {\it applet}. \item {\it Single-line} and {\it multiline} comment. \item {\it Compiling} and {\it running} a program. \item {\it Source code} file and {\it bytecode} file. \item {\it Syntax} and {\it semantics}. \item {\it Syntax error} and {\it semantic error}. \item {\it Data} and {\it methods}. \item {\it Variable} and {\it method}. \item {\it Algorithm} and {\it method}. \item {\it Pseudocode} and {\it Java code}. \item {\it Method definition} and {\it method invocation}. \end{EXRtwoLL} %% 3 \item For each of the following, identify it as either a syntax error or a semantic error. Justify your answers. \begin{EXRtwoLL} \item Write a class header as {\tt public Class MyClass}. \item Define the {\tt init()} header as {\tt public vid init()}. \item Print a string of five asterisks by {\tt System.out.println("***");}. \item Forget the semicolon at the end of a {\tt println()} statement. \item Calculate the sum of two numbers as {\tt N $-$ M}. \end{EXRtwoLL} %\epage %% 4 \item Suppose you have a Java program stored in a file named {\tt Test.java}. Describe the compilation and execution process for this program, naming any other files that would be created. %% 5 \item Suppose {\it N} is 15. What numbers would be output by the following pseudocode algorithm? Suppose {\it N} is 6. 
What would be output by the algorithm in that case? \begin{jjjlisting} \begin{lstlisting} 0. Print N. 1. If N equals 1, stop. 2. If N is even, divide it by 2. 3. If N is odd, triple it and add 1. 4. Go to step 0. \end{lstlisting} \end{jjjlisting} %% 6 \item Suppose {\it N} is 5 and {\it M} is 3. What value would be reported by the following pseudocode algorithm? In general, what quantity does this algorithm calculate? \begin{jjjlisting} \begin{lstlisting} 0. Write 0 on a piece of paper. 1. If M equals 0, report what's on the paper and stop. 2. Add N to the quantity written on the paper. 3. Subtract 1 from M. 4. Go to step 1. \end{lstlisting} \end{jjjlisting} \item {\bf Puzzle Problem}: You are given two different length ropes that have the characteristic that they both take exactly one hour to burn. However, neither rope burns at a constant rate. Some sections of the ropes burn very fast; other sections burn very slowly. All you have to work with is a box of matches and the two ropes. Describe an algorithm that uses the ropes and the matches to calculate when exactly 45 minutes have elapsed. \item {\bf Puzzle Problem}: A polar bear that lives right at the North Pole can walk due south for one hour, due east for one hour, and due north for one hour, and end up right back where it started. Is it possible to do this anywhere else on earth? Explain. \item {\bf Puzzle Problem}: Lewis Carroll, the author of {\it Alice in Wonderland}, used the following puzzle to entertain his guests: A captive queen weighing 195 pounds, her son weighing 90 pounds, and her daughter weighing 165 pounds, were trapped in a very high tower. Outside their window was a pulley and rope with a basket fastened on each end. They managed to escape by using the baskets and a 75-pound weight they found in the tower. How did they do it? The problem is that anytime the difference in weight between the two baskets is more than 15 pounds, someone might get hurt. Describe an algorithm that gets them down safely. \item {\bf Puzzle Problem}: Here's another Carroll favorite: A farmer needs to cross a river with his fox, goose, and a bag of corn. There's a rowboat that will hold the farmer and one other passenger. The problem is that the fox will eat the goose if they are left alone on the river bank, and the goose will eat the corn if they are left alone on the river bank. Write an algorithm that describes how he got across without losing any of his possessions. \item {\bf Puzzle Problem}: Have you heard this one? A farmer lent the mechanic next door a 40-pound weight. Unfortunately, the mechanic dropped the weight and it broke into four pieces. The good news is that, according to the mechanic, it is still possible to use the four pieces to weigh any quantity between one and 40 pounds on a balance scale. How much did each of the four pieces weigh? ({\it Hint}: You can weigh a 4-pound object on a balance by putting a 5-pound weight on one side and a 1-pound weight on the other.) %\epage \item Suppose your little sister asks you to show her how to use a pocket calculator so that she can calculate her homework average in her science course. Describe an algorithm that she can use to find the average of 10 homework grades. \item A Caesar cipher is a secret code in which each letter of the alphabet is shifted by {\it N} letters to the right, with the letters at the end of the alphabet wrapping around to the beginning. For example, if {\it N} is 1, when we shift each letter to the right, the word {\it daze} would be written as {\it ebaf}. 
Note that the {\it z} has wrapped around to the beginning of the alphabet. Describe an algorithm that can be used to create a Caesar encoded message with a shift of 5. \item Suppose you received the message, ``sxccohv duh ixq,'' which you know to be a Caesar cipher. Figure out what it says and then describe an algorithm that will always find what the message said regardless of the size of the shift that was used. \item Suppose you're talking to your little brother on the phone and he wants you to calculate his homework average. All you have to work with is a piece of chalk and a very small chalkboard---big enough to write one four-digit number. What's more, although your little brother knows how to read numbers, he doesn't know how to count very well so he can't tell you how many grades there are. All he can do is read the numbers to you. Describe an algorithm that will calculate the correct average under these conditions. \item Write a {\it header} for a public applet named {\tt SampleApplet}. \item Write a {\it header} for a public method named {\tt getName}. \item Design a class to represent a geometric rectangle with a given length and width, such that it is capable of calculating the area and the perimeter of the rectangle. \item Modify the {\tt OldMacDonald} class to ``sing'' either ``Mary Had a Little Lamb'' or your favorite nursery rhyme. \item Define a Java class, called {\tt Patterns}, modeled after {\tt Old\-Mac\-Donald}, that will print the following patterns of asterisks, one after the other heading down the page: \begin{jjjlisting} \begin{lstlisting} ***** ***** ***** **** * * * * * *** * * * * ** * * * * * * ***** ***** \end{lstlisting} \end{jjjlisting} \item Write a Java class that prints your initials as block letters, as shown in the example in the margin. \marginnote{\small\tt \mbox{*}*****\mbox{ }*\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\\ \mbox{*}\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\mbox{ }**\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }**\\ \mbox{*}\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\mbox{ }*\mbox{ }*\mbox{ }\mbox{ }\mbox{ }*\mbox{ }*\\ \mbox{*}*****\mbox{ }*\mbox{ }\mbox{ }*\mbox{ }*\mbox{ }\mbox{ }*\\ \mbox{**}\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\mbox{ }\mbox{ }\mbox{ }*\mbox{ }\mbox{ }\mbox{ }*\\ \mbox{*}\mbox{ }*\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\\ \mbox{*}\mbox{ }\mbox{ }*\mbox{ }\mbox{ }\mbox{ }*\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }*\\ \mbox{*}\mbox{ }\mbox{ }\mbox{ }*\mbox{ }\mbox{ }*\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }* } \item {\bf Challenge:} Define a class that represents a {\tt Temperature} object. It should store the current temperature in an instance variable of type {\tt double}, and it should have two {\tt public} methods, {\tt setTemp(double t)}, which assigns {\tt t} to the instance variable, and {\tt getTemp()}, which {\tt return}s the value of the instance variable. Use the {\tt Riddle} class as a model. %\epage \item {\bf Challenge:} Define a class named {\tt TaxWhiz} that computes the sales tax for a purchase. It should store the current tax rate as an instance variable. Following the model of the {\tt Riddle} class, you can initialize the rate using a {\tt TaxWhiz()} method. This class should have one {\tt public} method, {\tt calcTax(double purchase)}, which {\tt return}s a {\tt double}, whose value is {\tt purchases} times the tax rate. 
For example, if the tax rate is 4 percent, 0.04, and the purchase is \$100, then {\tt calcTax()} should return 4.0. \item What is stored in the variables {\tt num1} and {\tt num2} after the following statements are executed? \small \begin{verbatim} int num1 = 5; int num2 = 8; num1 = num1 + num2; num2 = num1 + num2; \end{verbatim} \normalsize \item Write a series of statements that will declare a variable of type {\tt int} called {\tt num} and store in it the difference between 61 and 51. \secEXRHone{UML Exercises} \item Modify the UML diagram of the {\tt Riddle} class to contain a method named {\tt getRiddle()} that would return both the riddle's question and answer. \item Draw a UML class diagram representing the following class: The name of the class is {\tt Circle}. It has one attribute, a {\tt radius} that is represented by a {\tt double} value. It has one operation, {\tt calculateArea()}, which returns a {\tt double}. Its attributes should be designated as private and its method as public. \item To represent a triangle we need attributes for each of its three sides and operations to create a triangle, calculate its area, and calculate its perimeter. Draw a UML diagram to represent this triangle. \item Try to give the Java class definition for the class described in \marginfig{chptr01/umlexerc.eps}% {The {\tt Person} class.} {fig:person} the UML diagram shown in Figure~1.17. \end{EXRtwo} % LocalWords: applet PrintStream ch %% Ch1 Bold-faced terms in sequential order. %% algorithm %% pseudocode %% stepwise refinement %% syntax %% semantics %% comment %% identifier %% data type %% primitive data type %% literal value %% operator %% expression %% statement %% declaration statement %% assignment statement %% block %% compound statement %% parameter %% executable statement %% application program %% object instantiation %% default constructor %% applet %% package %% qualified name
{ "alphanum_fraction": 0.7569112724, "avg_line_length": 42.2854870775, "ext": "tex", "hexsha": "1573327110d36454aeed8c6a37c8b3707cfe4b5d", "lang": "TeX", "max_forks_count": 105, "max_forks_repo_forks_event_max_datetime": "2022-03-19T00:51:45.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-03T08:55:00.000Z", "max_forks_repo_head_hexsha": "e012925896070a86bd7c3a4cbb75fa5682d9b9e2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dwgillies/OpenDSA", "max_forks_repo_path": "RST/en/IntroToSoftwareDesign/TexFiles/JJJC1.tex", "max_issues_count": 119, "max_issues_repo_head_hexsha": "e012925896070a86bd7c3a4cbb75fa5682d9b9e2", "max_issues_repo_issues_event_max_datetime": "2022-03-15T04:38:52.000Z", "max_issues_repo_issues_event_min_datetime": "2015-03-22T22:38:21.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dwgillies/OpenDSA", "max_issues_repo_path": "RST/en/IntroToSoftwareDesign/TexFiles/JJJC1.tex", "max_line_length": 673, "max_stars_count": 200, "max_stars_repo_head_hexsha": "e012925896070a86bd7c3a4cbb75fa5682d9b9e2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dwgillies/OpenDSA", "max_stars_repo_path": "RST/en/IntroToSoftwareDesign/TexFiles/JJJC1.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T02:44:38.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-08T05:27:52.000Z", "num_tokens": 26209, "size": 106348 }
\section{First Order Differential Equations}\label{sec:first order differential equations} We start by considering equations in which only the first derivative of the function appears. \begin{definition}{First Order Differential Equation}{First Order Differential Equation}\label{First Order Differential Equation} A \deffont{first order differential equation} is an equation of the form $F(t, y, y')=0$. A solution of a first order differential equation is a function $f(t)$ that makes $\ds F(t,f(t),f'(t))=0$ for every value of $t$. \end{definition} Here, $F$ is a function of three variables which we label $t$, $y$, and $y'$. It is understood that $y' $ will explicitly appear in the equation although $t$ and $y$ need not. The term ``first order'' means that the first derivative of $y$ appears, but no higher order derivatives do. \begin{example}{Newton's Law of Cooling}{Newton's Law of Cooling}\label{Newton's Law of Cooling} The equation from Newton's law of cooling, $y'=k(y-T)$, is a first order differential equation; $F(t,y,y')=k(y-T)-y'$. \end{example} \begin{example}{A First Order Differential Equation}{A First Order Differential Equation}\label{A First Order Differential Equation} $\ds y'=t^2+1$ is a first order differential equation; $\ds F(t,y,y')= y'-t^2-1$. All solutions to this equation are of the form $\ds t^3/3+t+C$. \end{example} \begin{definition}{First Order Initial Value Problem}{First Order Initial Value Problem}\label{First Order Initial Value Problem} A \deffont{first order initial value problem} is a system of equations of the form $F(t, y, y')=0$, $y(t_0)=y_0$. Here $t_0 $ is a fixed time and $y_0$ is a number. A solution of an initial value problem is a solution $f(t)$ of the differential equation that also satisfies the \deffont{initial condition} $f(t_0) = y_0$. \end{definition} \begin{example}{An Initial Value Problem}{An Initial Value Problem}\label{An Initial Value Problem} Verify that the initial value problem $\ds y'=t^2+1$, $y(1)=4$ has solution $\ds f(t)=t^3/3+t+8/3$. \end{example} \begin{solution} Observe that $f'(t)=t^2+1$ and $f(1)=1^3/3+1+8/3=4$ as required. \end{solution} The general first order equation is too broad a class for us to describe methods that will work on all of its members, or even a large portion of them. We can make progress with specific kinds of first order differential equations. For example, much can be said about equations of the form $\ds y' = \phi (t, y)$ where $\phi $ is a function of the two variables $t$ and $y$. Under reasonable conditions on $\phi$, such an equation has a solution and the corresponding initial value problem has a unique solution. However, in general, these equations can be very difficult or impossible to solve explicitly. A special case for which we do have a well-defined method is that of separable differential equations. \subsection{Separable Differential Equations} \begin{definition}{Separable Differential Equations}{Separable Differential Equations}\label{Separable Differential Equations} A first order differential equation is \deffont{separable} if it can be written in the form $$y' = f(t) g(y) \;\;\text{ or, }\;\;\frac{dy}{dt} = f(t) g(y).$$ \end{definition} For example, the differential equation \[ \frac{\; d y}{\; d x} = \sin(x) \bigl(1+y^2\bigr) \] is separable, with $f(x) = \sin x$ and $g(y) = 1+y^2$. On the other hand, the differential equation \[ \frac{\; d y}{\; d x} = x+y \] is not separable.
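The verification in the initial value problem example above can also be carried out with a computer algebra system. The following is a minimal sketch using Python's SymPy library (assuming SymPy is installed; the \texttt{ics} argument of \texttt{dsolve} assumes a reasonably recent version):
\begin{verbatim}
# Check that f(t) = t^3/3 + t + 8/3 solves y' = t^2 + 1 with y(1) = 4.
import sympy as sp

t = sp.symbols('t')
f = t**3/3 + t + sp.Rational(8, 3)

print(sp.simplify(sp.diff(f, t) - (t**2 + 1)))  # prints 0, so f' = t^2 + 1
print(f.subs(t, 1))                             # prints 4, so f(1) = 4

# Solve the initial value problem directly.
y = sp.Function('y')
ivp = sp.dsolve(sp.Eq(y(t).diff(t), t**2 + 1), y(t), ics={y(1): 4})
print(ivp)                                      # Eq(y(t), t**3/3 + t + 8/3)
\end{verbatim}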
The general approach to separable equations is as follows: Suppose we wish to solve $y' = f(t) g(y) $ where $f$ and $g$ are continuous functions. If $g(a)=0$ for some $a$ then $y(t)=a$ is a constant solution of the equation, since in this case $y' = 0 = f(t)g(a)$. For example, $y' =y^2 -1$ has constant solutions $y(t)=1$ and $y(t)=-1$. Such constant solutions to a differential equation are called \textit{equilibrium solutions}. To find the nonconstant solutions, we divide by $g(y)$ to get \begin{equation} \label{eq:separated} \frac{1}{ g(y)} \frac{\; d y}{\; d t} = f(t). \end{equation} Next find a function $H(y)$ whose derivative with respect to $y$ is \begin{equation}\label{eq:separable-3} H'(y) = \frac{1}{g(y)} \quad\left(\text{solution: } H(y) = \int {\frac{dy}{g(y)}}.\right) \end{equation} Then the chain rule implies that the left hand side in (\ref{eq:separated}) can be written as \[ \frac{1}{ g(y)} \frac{\; d y}{\; d t} = H'(y) \frac{\; d y}{\; d t} = \frac{\; d H(y)}{\; d t}. \] Thus \eqref{eq:separated} is equivalent to \[ \frac{\; d H(y)}{\; d t} = f(t). \] In words: viewed as a function of $t$, $H(y(t))$ is an antiderivative of $f(t)$, which means we can find $H(y)$ by integrating $f(t)$: \begin{equation} \label{eq:separable-solution} H(y) = \int f(t) dt +C. \end{equation} Once we have found the integral of $f(t)$ this gives us $y(t)$ in implicit form: the equation (\ref{eq:separable-solution}) gives us $y(t)$ as an \textit{implicit function} of $t$. To get $y(t)$ itself we must solve the equation (\ref{eq:separable-solution}) for $y(t)$. A quick way of organizing the calculation goes like this: \begin{quote} To solve \( \ds \frac{dy}{ dt} = f(t)g(y)\) we first \textit{separate the variables}, \[ \frac{d y}{g(y)} = f(t)\,d t, \] and then integrate, \[ \int\frac{d y}{g(y)} = \int f(t)\, dt. \] The result is an implicit equation for the solution $y$ with one undetermined integration constant. \end{quote} This technique is called \dfont{separation of variables}. As we have seen so far, a differential equation typically has an infinite family of solutions. A formula that describes all of them in terms of an arbitrary constant is called a \dfont{general solution}. A corresponding initial value problem will give rise to just one solution. Such a solution, in which there are no unknown constants remaining, is called a \dfont{particular solution}. \begin{example}{}{} Find all functions $y$ that are solutions to the differential equation $$\frac{dy}{dt}= \frac{t}{y^2}.$$ \end{example} \begin{solution} We begin by separating the variables and writing $$ y^2 \frac{dy}{dt} = t. $$ Integrating both sides of the equation with respect to the independent variable $t$ shows that $$ \int y^2\frac{dy}{dt}~dt = \int t~dt. $$ Next, we notice that the left-hand side allows us to change the variable of antidifferentiation\footnote{This is why we required that the left-hand side be written as a product in which $dy/dt$ is one of the terms.} from $t$ to $y$. In particular, $dy = \frac{dy}{dt}~dt$, so we now have $$ \int y^2 ~dy = \int t~dt. $$ This most recent equation says that two families of antiderivatives are equal to one another. Therefore, when we find representative antiderivatives of both sides, we know they must differ by an arbitrary constant $C$. Antidifferentiating and including the integration constant $C$ on the right, we find that $$ \frac{y^3}{3} = \frac{t^2}{2} + C.
$$ Again, note that it is not necessary to include an arbitrary constant on both sides of the equation; the two constants that would arise can be combined into the single constant $C$ on the right. Finally, we may now solve the last equation above for $y$ as a function of $t$, which gives $$ y(t) = \sqrt[3]{\frac 32 \thinspace t^2 + 3C}. $$ Of course, the term $3C$ on the right-hand side represents 3 times an unknown constant. It is, therefore, still an unknown constant, which we will rewrite as $C$. We thus conclude that the function $$ y(t) = \sqrt[3]{\frac 32 \thinspace t^2 + C} $$ is a solution to the original differential equation for any value of $C$. \end{solution} Notice that because this solution depends on the arbitrary constant $C$, we have found an infinite family of solutions. This makes sense because we expect to find a unique solution that corresponds to any given initial value. For example, if we want to solve the initial value problem $$ \frac{dy}{dt} = \frac{t}{y^2}, \ y(0) = 2, $$ we know that the solution has the form $y(t) = \sqrt[3]{\frac32\thinspace t^2 + C}$ for some constant $C$. We therefore must find the appropriate value for $C$ that gives the initial value $y(0)=2$. Hence, $$ 2 = y(0) = \sqrt[3]{\frac 32 \thinspace 0^2 + C} = \sqrt[3]{C}, $$ which shows that $C = 2^3 = 8$. The solution to the initial value problem is then $$ y(t) = \sqrt[3]{\frac32\thinspace t^2+8}. $$ \begin{example}{Solving an IVP}{Solving an IVP}\label{Solving an IVP} Solve the IVP: $\ds y' = 2t(25-y)$, $ y(0)= 20 $. \end{example} \begin{solution} We begin by finding the general solution to the differential equation. The method is the same as in the previous example. Since the right-hand side vanishes when $y=25$, the constant function $y(t)=25$ is an equilibrium solution. If $y\not=25$, \begin{eqnarray} \int {1\over 25-y}\,dy &=& \int 2t\,dt\cr (-1)\ln|25-y| &=& t^2+C_0\cr \ln|25-y| &=& -t^2 - C_0 = -t^2 + C\cr |25-y| &=& e^{-t^2+C}=e^{-t^2} e^C\cr y-25 &=& \pm\, e^C e^{-t^2} \cr y &=& 25 \pm e^C e^{-t^2} =25+Ae^{-t^2}. \label{eqn:solveIVP} \end{eqnarray} All solutions, including the equilibrium solution, are represented by $\ds y=25+Ae^{-t^2}$, allowing $A$ to be zero. To solve the IVP, we let $ y= 20$ and $ t=0 $ in Equation \ref{eqn:solveIVP} to get $$ 20=25+A $$ which immediately gives $ A=-20 $. So the particular solution to the IVP is \[ y=25-20e^{-t^2} \] \end{solution} One application often discussed when introducing separable equations is that of \textbf{mixing problems}. A typical mixing problem involves: a tank of fixed capacity; a completely mixed solution of some substance in the tank; a solution of a certain concentration entering the tank at a (usually) fixed rate and being stirred in immediately; and the mixture leaving the tank at a (usually fixed) rate. We illustrate with an example. \begin{example} A tank contains 20 kg of salt dissolved in 5000 L of water. Brine that contains 0.03 kg of salt per liter of water enters the tank at a rate of 25 L/min. The solution is kept thoroughly mixed and drains from the tank at the same rate. How much salt is in the tank after half an hour? \end{example} \begin{solution} Let $y(t)$ denote the amount of salt (kg) in the tank after $t$ minutes.
Given: $y(0) = 20$. We want to know: $y(30)$. \[ \frac{d y}{d t} = \textrm{(rate in) $-$ (rate out)} \] \[ \textrm{rate in} = \textrm{(concentration in)(rate of volume in)} =\left( 0.03\; \frac{\textrm{kg}}{\textrm{L}}\right)\left( 25\; \frac{\textrm{L}}{\textrm{min}}\right) = 0.75\;\; \frac{\textrm{kg}}{\textrm{min}} \] \[ \textrm{rate out} = \textrm{(concentration out)(rate of volume out)} =\left( \frac{y(t)}{5000}\; \frac{\textrm{kg}}{\textrm{L}}\right)\left( 25\; \frac{\textrm{L}}{\textrm{min}}\right) =\frac{y(t)}{200} \;\; \frac{\textrm{kg}}{\textrm{min}} \] Therefore we have \[ \frac{dy}{dt}= \frac{150 - y(t)}{200} \] Separating variables we get \[ \int \frac{1}{150-y} \; dy =\int \frac{1}{200}\; dt \] which gives \[ -\ln |150 - y| = t /200 + C \] $ y(0) = 20 $, so $ C = -\ln 130 $. Also observe that $ y<150 \ (= 0.03\cdot 5000)$, so $ |150-y|=150-y$, and after simplification we get \[ y=150 - 130e^{-t/200} \] and therefore $ y(30)=150 - 130e^{-30/200} \approx 38.1$ kg. \end{solution} \begin{example}{}{} Solve the differential equation $$\frac{dy}{dt} =3y.$$ \end{example} \begin{solution} Following the same strategy as in the previous examples, we have $$ \frac 1y \frac{dy}{dt} = 3. $$ Integrating both sides with respect to $t$, $$ \int \frac 1y\frac{dy}{dt}~dt = \int 3~dt,$$ and thus $$ \int \frac 1y~dy = \int 3~dt.$$ Antidifferentiating and including the integration constant, we find that $$ \ln|y| = 3t + C_1$$ where $ C_1 $ is an arbitrary constant. Finally, we need to solve for $y$. Here, one point deserves careful attention. By the definition of the natural logarithm function, it follows that $$ |y| = e^{3t+C_1} = e^{3t}e^{C_1}. $$ Since $C_1$ is an unknown constant, $e^{C_1}$ is as well, though we do know that it is positive (because $e^x$ is positive for any $x$). When we remove the absolute value in order to solve for $y$ we obtain $$ y = \pm e^{C_1} e^{3t}. $$ As $ \pm e^{C_1} $ may be either positive or negative, we will denote this by $C$ to obtain $$ y(t) = Ce^{3t}. $$ There is one technical point to make here. Notice that $y=0$ is an equilibrium solution to this differential equation. In solving the equation above, we begin by dividing both sides by $y$, which is not allowed if $y=0$. To be perfectly careful, therefore, we will typically consider these equilibrium solutions separately. In this case, notice that the final form of our solution captures the equilibrium solution by allowing $C=0$. \end{solution} \subsection{Exponential Growth and Decay} The differential equation in the previous example ($ y'=3y $) describes a quantity $ y $ whose rate of change is directly proportional to the quantity itself. Such a differential equation is said to model exponential growth. \begin{example}{Population Growth and Radioactive Decay}{Population Growth and Radioactive Decay}\label{Population Growth and Radioactive Decay} Analyze the differential equation $y'=ky$. \end{example} \begin{solution} When $k>0$, this describes certain simple cases of (exponential) population growth: It says that the change in the population $y$ is proportional to the population. The underlying assumption is that each organism in the current population reproduces at a fixed rate, so the larger the population the more new organisms are produced. While this is too simple to model most real populations, it is useful in some cases over a limited time. The parameter $ k $ is called the \textit{proportionality constant}.
When $k<0$, the differential equation describes a quantity that decreases in proportion to the current value (exponential decay); this can be used to model radioactive decay. The constant solution is $y(t)=0$; of course this will not be the solution to any interesting initial value problem. For the non-constant solutions, we proceed much as before: \begin{eqnarray*} \int {1\over y}\,dy&=&\int k\,dt\cr \ln|y| &=& kt+C\cr |y| &=& e^{kt} e^C\cr y &=& \pm \,e^C e^{kt} \cr y&=& Ae^{kt}. \end{eqnarray*} Again, if we allow $A=0$ this includes the equilibrium solution, and we can simply say that $\ds y=Ae^{kt}$ is the general solution. With an initial value we can easily solve for $A$ to get the solution of the initial value problem. In particular, if the initial value is given for time $t=0$, $y(0)=y_0$, then $A=y_0$ and the solution is $\ds y= y_0 e^{kt}$. \end{solution} In general, the work in the previous example shows the following to hold true. \begin{formulabox}[\label{expDE} ] The solution of the initial value problem \[ \frac{dy}{dt}=ky,\;\;\;\;y(0)=y_0 \] is $ \ds y=y_0e^{kt} $. \end{formulabox} \begin{example}{Global Population Growth}{} Assuming that the growth rate is proportional to population size, use the fact that the world population in 1900 was 1650 million and in 1910 was 1750 million to estimate the population in the year 2000. \end{example} \begin{solution} Since the growth rate is proportional to the population, we know that the population $P(t)$ will be given by a function of the form: \[ P=P_0e^{kt} \] taking $t$ to be the number of years after 1900. We are asked to find the population in the year 2000, in other words, find $P(100)$. We know $P_0=1650$ (in millions), so \begin{equation} \label{eq:pop} P=1650e^{kt} \end{equation} So we must solve for the growth constant $k$. In 1910 (when $t=10$) the population was 1750 (million), so $ P(10)=1750 $: \[ 1750=1650e^{10k} \] which gives \[ k=\frac{1}{10}\ln\left(\frac{175}{165}\right) \] Substituting into equation \ref{eq:pop} and simplifying gives \[ P=1650\left(\frac{175}{165}\right)^{\frac{t}{10}} \] Therefore, after 100 years, the population will be \[ P(100)=1650\left(\frac{175}{165}\right)^{10} \approx 2972 \textrm{ million.} \] \end{solution} As mentioned previously, radioactive decay also follows an exponential model, $ y=y_0e^{kt} $ (where $ k<0 $). The \textit{half-life} of a material is the time required for half of a given amount to decay. That is, the time for which $ \frac12y_0 = y_0e^{kt} $. Solving for $ t $ gives $ t=-\frac{\ln(2)}{k} $. \begin{formulabox}[\label{halflife}Half Life ] Radioactive decay of a material with decay constant $ k$ is modelled by $ y=y_0e^{kt} $, and has a half-life of $ \ds -\frac{\ln(2)}{k} $ \end{formulabox} \begin{example}{}{} The half-life of radium-226 is 1590 years. A sample of radium has a mass of 100 mg. \begin{enumerate} \item Find a formula for the mass of radium after $ t $ years. \item Find the mass after 1000 years. \item When will the mass be reduced to 30 mg? \end{enumerate} \end{example} \begin{solution} \begin{enumerate} \item As this model is one of exponential decay, we know the formula for the mass after $t$ years will have the form: \[ y=y_0e^{kt}, \] where $k<0$. We are told $100$ mg are initially present, so $ y_0=100$. To determine the decay constant, we use the given half-life with the formula in Key Idea \ref{halflife}: \[ k = -\frac{\ln(2)}{1590}. \] Therefore, after simplifications we have \[ y=100e^{-\frac{\ln(2)}{1590}t} = 100\cdot \left(\frac12\right)^\frac{t}{1590}.
\] \item From part (a) we see that after $ 1000 $ years the amount remaining will be \[ y(1000)=100\cdot \left(\frac12\right)^{\frac{1000}{1590}}\approx 64.67\textrm{ mg}. \] \item We wish to find $t$ when $y=30$, so we solve: $\ds 30=100\cdot \left(\frac12\right)^{\frac{t}{1590}}$. Dividing by $ 100 $, taking the natural logarithm of both sides, and solving for $ t $ gives \[ \ln\left(\frac{3}{10}\right)= \frac{t}{1590} \ln\left(\frac12\right) \to\;t= 1590\cdot\frac{\ln\left(\frac{3}{10}\right)}{\ln\left(\frac12\right)}\approx 2762 \textrm{ years.} \] \end{enumerate} \end{solution} More generally, a quantity $y$ may grow (or shrink) with rate of change proportional to a difference $y-b$. Such is the case with Newton's Law of Cooling. \begin{formulabox}[\label{NLC} Newton's Law of Cooling ] The rate of cooling of an object is directly proportional to the difference between the temperature $y(t)$ of the object and the ambient temperature $T$ (i.e., the temperature of its surroundings): \[ \frac{dy}{dt}=k(y-T) \] where $ k $ is called the cooling constant (in units of $ (\text{time})^{-1} $), and depends on the physical properties of the materials involved. This differential equation may be solved in the same manner as in Example \ref{Population Growth and Radioactive Decay} to give \[ y= T+(y_0-T)e^{kt}, \] where $y_0=y(0)$ is the initial temperature. \end{formulabox} More generally, if $\frac{dy}{dt} = k(y-b) $ for some constant $ b $, then $ y=b+Ce^{kt} $, where $ C=y(0)-b$. \begin{example}{IVP for Newton's Law of Cooling}{IVP for Newton's Law of Cooling}\label{IVP for Newton's Law of Cooling} Consider this specific example of an initial value problem for Newton's law of cooling: $y' = -2(y-25)$, $y(0)=40$. Discuss the solutions for this initial value problem. \end{example} \begin{solution} We first note the zero of the equation: If $y = 25$, the right hand side of the differential equation is zero, and so the constant function $y(t)=25$ is a solution to the differential equation. It is not a solution to the initial value problem, since $y(0)\neq 25$. (The physical interpretation of this constant solution is that if a liquid is at the same temperature as its surroundings, then the liquid will stay at that temperature.) At this point we may appeal to Key Idea \ref{NLC}, taking $ T=25 $ and $ k=-2 $, to solve the differential equation. However, just for practice we will derive the result directly. Separating variables, so long as $y\ne 25$, we can rewrite the differential equation as \begin{eqnarray*} {dy\over dt}{1\over 25-y}&=&2\cr {1\over 25-y}\,dy&=&2\,dt, \end{eqnarray*} so $$\int {1\over 25-y}\,dy = \int 2\,dt.$$ We can calculate these anti-derivatives and rearrange the results: \begin{eqnarray*} \int {1\over 25-y}\,dy &=& \int 2\,dt\cr (-1)\ln|25-y| &=& 2t+C_0\cr \ln|25-y| &=& -2t - C_0 = -2t + C_1\cr |25-y| &=& e^{-2t+C_1}=e^{-2t} e^{C_1}\cr y-25 &=& \pm\, e^{C_1} e^{-2t} \cr y &=& 25 \pm e^{C_1} e^{-2t} =25+Ce^{-2t}. \end{eqnarray*} Here $\ds C = \pm\, e^{C_1} = \pm\, e^{-C_0}$ is some non-zero constant. Note that this agrees with the solution we would have obtained directly from Key Idea \ref{NLC}. Since we require $y(0)=40$, we substitute and solve for $C$: \begin{eqnarray*} 40&=&25+Ce^0\cr 15&=&C, \end{eqnarray*} and so $\ds y=25+15 e^{-2t}$ is a solution to the initial value problem. Note that $y$ is never $ 25 $, so this makes sense for all values of $t$.
However, if we allow $C=0$ we get the solution $y=25$ to the differential equation, which would be the solution to the initial value problem if we were to require $y(0)=25$. Thus, $\ds y=25+Ce^{-2t}$ describes all solutions to the differential equation $\ds y' = 2(25-y)$, and all solutions to the associated initial value problems. \end{solution} \begin{example}{}{} If an object takes $ 40 $ minutes to cool from $ 30 $ degrees to $ 24 $ degrees in a $ 20 $ degree room, how long will it take the object to cool to $ 21 $ degrees? \end{example} \begin{solution} From the above discussion we know that the model for the temperature $ y $, $ t $ minutes after the first temperature measurement, is given by \[ y=20+10e^{kt}. \] To solve for $ k $ we use the fact that $ y(40) =24$. Substituting $ t=40 $ and $ y=24 $ into the last equation and simplifying gives \[ 24=20+10e^{40k}\;\;\to\;\; k= \ln\left[\left(\frac{2}{5}\right)^{\frac{1}{40}}\right] \] Therefore, \[ y=10e^{t\cdot \ln\left[\left(\frac{2}{5}\right)^{\frac{1}{40}}\right]}+20 =10e^{\ln\left[\left(\frac{2}{5}\right)^{\frac{t}{40}}\right]}+20=10\left(\frac{2}{5}\right)^{\frac{t}{40}}+20 \] So when the temperature is $ 21 $ degrees, we have \[ 21=10\left(\frac{2}{5}\right)^{\frac{t}{40}}+20\Rightarrow t= \frac{-40\ln(10)}{\ln\left( \frac{2}{5} \right)} \approx 100.52 \text{ min.} \] Therefore, even though it took only $ 40 $ minutes to cool from $ 30 $ degrees to $ 24 $ degrees (a difference of $ 6 $ degrees), it will take over $ 100 $ minutes to cool to $ 21 $ degrees (the last $ 3 $ degrees add more than an hour to the time required!). \end{solution} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \Opensolutionfile{solutions}[ex] \section*{Exercises for \ref{sec:first order differential equations}} \begin{enumialphparenastyle} %%%%%%%%%% \begin{ex} Which of the following equations are separable? \begin{enumerate} \item $\ds y' = \sin (ty)$ \item $\ds y' = e^t e^y $ \item $\ds yy' = t $ \item $\ds y' = (t^3 -t) \arcsin(y)$ \item $\ds y' = t^2 \ln y + 4t^3 \ln y $ \end{enumerate} \end{ex} %%%%%%%%%% \begin{ex} Solve $\ds y' = 1/(1+t^2)$. \begin{sol} $\ds y=\arctan t + C$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Solve the initial value problem $y' = t^n$ with $y(0)=1$ and $n\ge 0$. \begin{sol} $\ds y={t^{n+1}\over n+1}+1$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Solve $y' = \ln t$. \begin{sol} $\ds y=t\ln t-t+C$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Identify the constant solutions (if any) of $y' =t\sin y$. \begin{sol} $y=n\pi$, for any integer $n$. \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Identify the constant solutions (if any) of $\ds y'=te^y$. \begin{sol} none \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Solve $y' = t/y$. \begin{sol} $\ds y=\pm\sqrt{t^2+C}$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Solve $\ds y' = y^2 -1$. \begin{sol} $\ds y=\pm 1$, $\ds y=(1+Ae^{2t})/(1-Ae^{2t})$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Solve $\ds y' = t/(y^3 - 5)$. You may leave your solution in implicit form: that is, you may stop once you have done the integration, without solving for $y$. \begin{sol} $\ds y^4/4-5y=t^2/2+C$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Find a non-constant solution of the initial value problem $y' = y^{1/3}$, $y(0)=0$, using separation of variables. Note that the constant function $y(t)=0 $ also solves the initial value problem. This shows that an initial value problem can have more than one solution.
\begin{sol} $\ds y=(2t/3)^{3/2}$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Solve the equation for Newton's law of cooling leaving $M$ and $k$ unknown. \begin{sol} $\ds y=M+Ae^{-kt}$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} After 10 minutes in Jean-Luc's room, his tea has cooled to $40^\circ $ Celsius from $100^\circ$ Celsius. The room temperature is $25^\circ$ Celsius. How much longer will it take to cool to $35^\circ$? \begin{sol} $\ds {10\ln(15/2)\over\ln 5}-10\approx 2.52$ minutes \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Solve the \dfont{logistic equation} $y' = ky(M-y)$. (This is a somewhat more reasonable population model in most cases than the simpler $y'=ky$.) Sketch the graph of the solution to this equation when $M=1000$, $k=0.002$, $y(0)=1$. \begin{sol} $\ds y={M\over 1+Ae^{-Mkt}}$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Suppose that $y' = ky$, $y(0)=2$, and $y'(0)=3$. What is $y$? \begin{sol} $\ds y=2e^{3t/2}$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} A radioactive substance obeys the equation $y' =ky$ where $k< 0 $ and $y$ is the mass of the substance at time $t$. Suppose that initially, the mass of the substance is $y(0)=M>0$. At what time does half of the mass remain? (This is known as the half life. Note that the half life depends on $k$ but not on $M$.) \begin{sol} $\ds t=-{\ln 2\over k}$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} Bismuth-210 has a half life of five days. If there is initially 600 milligrams, how much is left after 6 days? When will there be only 2 milligrams left? \begin{sol} $\ds 600e^{-6\ln 2/5}\approx 261$ mg; $\ds {5\ln 300\over\ln2}\approx 41$ days \end{sol} \end{ex} %%%%%%%%%% \begin{ex} The half life of carbon-14 is 5730 years. If one starts with 100 milligrams of carbon-14, how much is left after 6000 years? How long do we have to wait before there is less than 2 milligrams? \begin{sol} $\ds 100e^{-200\ln 2/191}\approx 48$ mg; $\ds {5730\ln 50\over\ln2}\approx 32339$ years \end{sol} \end{ex} %%%%%%%%%% \begin{ex} A certain species of bacteria doubles its population (or its mass) every hour in the lab. The differential equation that models this phenomenon is $y' =ky$, where $k>0 $ and $y$ is the population of bacteria at time $t$. What is $y$? \begin{sol} $\ds y=y_0e^{t\ln 2}$ \end{sol} \end{ex} %%%%%%%%%% \begin{ex} If a certain microbe doubles its population every 4 hours and after 5 hours the total population has mass 500 grams, what was the initial mass? \begin{sol} $\ds 500e^{-5\ln2/4}\approx 210$ g \end{sol} \end{ex} \end{enumialphparenastyle}
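One way to spot-check answers like the tea-cooling time above is with a computer algebra system. Here is a minimal SymPy sketch (assuming SymPy is available; it uses the model $y=25+75e^{-kt}$ that follows from the Newton's law of cooling exercises, and the variable names are chosen only for this illustration):
\begin{verbatim}
import sympy as sp

t, k = sp.symbols('t k', real=True)

# Tea exercise: y(t) = 25 + 75*exp(-k*t), with y(10) = 40.
y = 25 + 75*sp.exp(-k*t)
kval = sp.solve(sp.Eq(y.subs(t, 10), 40), k)[0]    # k = log(5)/10
t35 = sp.solve(sp.Eq(y.subs(k, kval), 35), t)[0]   # total time to reach 35 degrees
print(float(t35), float(t35) - 10)                 # about 12.52 and 2.52 minutes
\end{verbatim}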
{ "alphanum_fraction": 0.6732289281, "avg_line_length": 32.9768574909, "ext": "tex", "hexsha": "7e17de43e52efd88bd30319908cd56b09f731dfa", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_path": "10-differential-equations/10-1-first-order-de.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_path": "10-differential-equations/10-1-first-order-de.tex", "max_line_length": 354, "max_stars_count": null, "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_path": "10-differential-equations/10-1-first-order-de.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8817, "size": 27074 }
\textbf{Consider again the approximation of $f(x)=3/(5-4\cos(x))$, $x\in[-\pi,\pi]$. Let $N$ be the number of nodes in Fourier and polynomial interpolation of this function. \begin{enumerate}[label=\alph*)] \item Plot the error as a function of $N$ (on the same figure) for both Chebyshev and Fourier. Notice that Fourier converges at a faster rate in this case. \item Now consider the maximum spacing between nodes: $h=\max|x_{i+1}-x_i|$. Plot the error for polynomial and Fourier approximations as a function of $h$ and notice that the rates of convergence are now nearly the same. \item Show that the ratio $h_{cheb}/h_{Fourier}$ is about $\pi/2$. \end{enumerate} $~$} \newline For the first part we look at the next figure. We can see that, in fact, Fourier converges at a faster rate for this function. \begin{figure}[H] \centering \includegraphics[scale=0.75]{P7_a.png}\caption{Convergence of the Chebyshev and Fourier interpolants to $f(x)= \frac{3}{5-4\cos{x}}$.} \end{figure} We continue by scaling the Chebyshev points to lie within $[-\pi,\pi]$ and calculate $h$ for both Chebyshev and Fourier, for each $N$. We obtain the following figure, which shows that the rates of convergence are nearly the same. \begin{figure}[H] \centering \includegraphics[scale=0.75]{P7_b.png}\caption{Convergence of the Chebyshev and Fourier interpolants to $f(x)= \frac{3}{5-4\cos{x}}$.} \end{figure} Lastly, in the following figure we see that, once $N$ is large enough, $h_{Cheb}/h_{Fourier}\approx \pi/2$; a short calculation explaining this limit is given after the code listing. \begin{figure}[H] \centering \includegraphics[scale=0.75]{P7_c.png}\caption{$h_{Cheb}/h_{Fourier}$ for $f(x)= \frac{3}{5-4\cos{x}}$.} \end{figure} \subsection*{Matlab code for this problem} \begin{verbatim} %% Problem 7 close all f = chebfun('3/(5-4*cos(x))',[-pi,pi]); plot(f) grid on N = 2:2:50; for k = 1:length(N) fcheb = chebfun('3/(5-4*cos(x))',[-pi,pi],N(k)); ffour = chebfun('3/(5-4*cos(x))',[-pi,pi],N(k),"trig"); errcheb(k) = norm(f-fcheb,inf); errfour(k) = norm(f-ffour,inf); % b [~,x] = cheb(N(k)); x = pi*x; hcheb(k) = max(abs(x(2:end)-x(1:end-1))); hfour(k) = 2*pi/(N(k)); end % a figure semilogy(N,errcheb,'b',N,errfour,'r') hold on semilogy(N,errcheb,'b*',N,errfour,'r*') grid on xlabel('$N$','interpreter','latex') ylabel('$Error$','interpreter','latex') set(gca,'fontsize',labelfontsize) legend('Chebyshev', 'Fourier') txt='Latex/FIGURES/P7_a'; saveas(gcf,txt,figformat) % b figure semilogy(hcheb.^(-1),errcheb,'b',hfour.^(-1),errfour,'r') hold on semilogy(hcheb.^(-1),errcheb,'b*',hfour.^(-1),errfour,'r*') grid on xlabel('$1/h$','interpreter','latex') ylabel('$Error$','interpreter','latex') set(gca,'fontsize',labelfontsize) legend('Chebyshev', 'Fourier') txt='Latex/FIGURES/P7_b'; saveas(gcf,txt,figformat) % c figure plot(N,hcheb./hfour,'r*') grid on axis([0 50 0 pi]) xlabel('$N$','interpreter','latex') ylabel('$h_{Cheb}/h_{Fourier}$','interpreter','latex') set(gca,'fontsize',labelfontsize) legend('Chebyshev', 'Fourier') txt='Latex/FIGURES/P7_c'; saveas(gcf,txt,figformat) \end{verbatim}
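The ratio in part c) can also be seen with a short calculation. The Chebyshev nodes scaled to $[-\pi,\pi]$ are $x_j=\pi\cos(j\pi/N)$, so the largest gap between neighbouring nodes occurs near the centre of the interval, where
\[
h_{Cheb}=\max_j \pi\left|\cos\left(\tfrac{j\pi}{N}\right)-\cos\left(\tfrac{(j+1)\pi}{N}\right)\right|\approx \pi\cdot\frac{\pi}{N}=\frac{\pi^2}{N},
\]
while the equispaced Fourier grid has $h_{Fourier}=2\pi/N$. Hence $h_{Cheb}/h_{Fourier}\approx\pi/2$ for large $N$, which is exactly what the last figure shows.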
{ "alphanum_fraction": 0.6910943148, "avg_line_length": 35.3837209302, "ext": "tex", "hexsha": "2e74dc5f7d63587459d0ac7afc666fb58032e25c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "12ab3e86a4a44270877e09715eeab713da45519d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fjcasti1/Courses", "max_forks_repo_path": "SpectralMethods/Homework3/Latex/problem7.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "12ab3e86a4a44270877e09715eeab713da45519d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fjcasti1/Courses", "max_issues_repo_path": "SpectralMethods/Homework3/Latex/problem7.tex", "max_line_length": 229, "max_stars_count": null, "max_stars_repo_head_hexsha": "12ab3e86a4a44270877e09715eeab713da45519d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fjcasti1/Courses", "max_stars_repo_path": "SpectralMethods/Homework3/Latex/problem7.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1026, "size": 3043 }
\documentclass[epsfig,10pt,fullpage]{article} \newcommand{\LabNum}{9} \newcommand{\CommonDocsPath}{../../../common/docs} \input{\CommonDocsPath/preamble.tex} \begin{document} \centerline{\huge Digital Logic} ~\\ \centerline{\huge Laboratory Exercise \LabNum} ~\\ \centerline{\large A Simple Processor} ~\\ Figure~\ref{fig:fig1} shows a digital system that contains a number of 16-bit registers, a multiplexer, an adder/subtracter, and a control unit (finite state machine). Information is input to this system via the 16-bit {\it DIN} input, which is loaded into the {\it IR} register. Data can be transferred through the 16-bit wide multiplexer from one register in the system to another, such as from register {\it IR} into one of the {\it general purpose} registers $r0, \ldots, r7$. The multiplexer's output is called {\it Buswires} in the figure because the term {\it bus} is often used for wiring that allows data to be transferred from one location in a system to another. The FSM controls the {\it Select} lines of the multiplexer, which allows any of its inputs to be transferred to any register that is connected to the bus wires. ~\\ The system can perform different operations in each clock cycle, as governed by the FSM. It determines when particular data is placed onto the bus wires and controls which of the registers is to be loaded with this data. For example, if the FSM selects $r0$ as the output of the bus multiplexer and also asserts $A_{in}$, then the contents of register $r0$ will be loaded on the next active clock edge into register {\it A}. ~\\ Addition or subtraction of signed numbers is performed by using the multiplexer to first place one 16-bit number onto the bus wires, and then loading this number into register {\it A}. Once this is done, a second 16-bit number is placed onto the bus, the adder/subtracter performs the required operation, and the result is loaded into register {\it G}. The data in {\it G} can then be transferred via the multiplexer to one of the other registers, as required. \begin{figure}[H] \begin{center} \includegraphics[scale = 0.8]{figures/figure1.pdf} \end{center} \caption{A digital system.} \label{fig:fig1} \end{figure} \newpage \noindent A system like the one in Figure~\ref{fig:fig1} is often called a {\it processor}. It executes operations specified in the form of {\it instructions}. Table~\ref{tab:instructions} lists the instructions that this processor supports. The left column shows the name of an instruction and its operands. The meaning of the syntax {\it rX} $\leftarrow$ {\it Op2} is that the second operand, {\it Op2}, is loaded into register {\it rX}. The operand {\it Op2} can be either a register, {\it rY}, or {\it immediate data}, \#{\it D}. \begin{table}[H] \begin{center} \begin{tabular}{rl|c} \multicolumn{2}{c|}{Instruction} & Function performed \\ \hline \rule[0.01in]{0in}{0.15in}{\it mv} & {\it rX}, $Op2$ & {\it rX} $\leftarrow Op2$ \\ \rule[-0.075in]{0in}{0.2in}{\it mvt} & {\it rX,} \#{\it D} & {\it rX$_{15-8}$} $\leftarrow$ {\it D$_{15-8}$}\\ \rule[-0.075in]{0in}{0.2in}{\it add} & {\it rX}, $Op2$ & {\it rX} $\leftarrow$ {\it rX} + $Op2$ \\ \rule[-0.075in]{0in}{0.2in}{\it sub} & {\it rX}, $Op2$ & {\it rX} $\leftarrow$ {\it rX} $-$ $Op2$ \\ \end{tabular} \caption{Instructions performed in the processor.} \label{tab:instructions} \end{center} \end{table} \noindent Instructions are loaded from the external input {\it DIN}, and stored into the {\it IR} register, using the connection indicated in Figure~\ref{fig:fig1}. 
Each instruction is {\it encoded} using a 16-bit format. If $Op2$ specifies a register, then the instruction encoding is \texttt{III0XXX000000YYY}, where \texttt{III} specifies the instruction, \texttt{XXX} gives the {\it rX} register, and \texttt{YYY} gives the {\it rY} register. If $Op2$ specifies immediate data \#{\it D}, then the encoding is \texttt{III1XXXDDDDDDDDD}, where the 9-bit field \texttt{DDDDDDDDD} represents the constant data. Although only two bits are needed to encode our four instructions, we are using three bits because other instructions will be added to the processor later. Assume that \texttt{III} $= 000$ for the {\it mv} instruction, $001$ for {\it mvt}, $010$ for {\it add}, and $011$ for {\it sub}. ~\\ The {\it mv} instruction ({\it move}) copies the contents of one register into another, using the syntax \texttt{mv} \texttt{rX,rY}. It can also be used to initialize a register with immediate data, as in \texttt{mv} \texttt{rX,\#D}. Since the data {\it D} is represented inside the encoded instruction using only nine bits, the processor has to {\it zero-extend} the data, as in \texttt{0000000D$_{8-0}$}, before loading it into register~{\it rX}. The {\it mvt} instruction ({\it move top}) is used to initialize the most-significant byte of a register. For {\it mvt}, only eight bits of the {\it D} field in the instruction are used, so that \texttt{mvt} \texttt{rX,\#D} loads the value \texttt{D$_{15-8}$00000000} into {\it rX}. As an example, to load register $r0$ with the value \texttt{0xFF00}, you would use the instruction \texttt{mvt r0,\#0xFF00}. The instruction \texttt{add} \texttt{rX,rY} produces the sum {\it rX} $+$ {\it rY} and loads the result into {\it rX}. The instruction \texttt{add} \texttt{rX,\#D} produces the sum {\it rX} $+$ {\it D}, where {\it D} is zero-extended to 16 bits, and saves the result in {\it rX}. Similarly, the {\it sub} instruction generates either {\it rX} $-$ {\it rY}, or {\it rX} $-$ \#{\it D} and loads the result into {\it rX}. ~\\ Some instructions, such as an {\it add} or {\it sub}, take a few clock cycles to complete, because multiple transfers have to be performed across the bus. The finite state machine in the processor ``steps through'' such instructions, asserting the control signals needed in successive clock cycles until the instruction has completed. The processor starts executing the instruction on the {\it DIN} input when the {\it Run} signal is asserted and the processor asserts the {\it Done} output when the instruction is finished. Table~\ref{tab:control_signals} indicates the control signals from Figure~\ref{fig:fig1} that have to be asserted in each time step to implement the instructions in Table~\ref{tab:instructions}. The only control signal asserted in time step $T_0$, for all instructions, is {\it IR}$_{in}$. The meaning of {\it Select = rY} or {\it IR} in the table is that the multiplexer selects either register {\it rY} or the immediate data in {\it IR}, depending on the value of $Op2$. For the {\it mv} instruction, when {\it IR} is selected the multiplexer outputs \texttt{0000000DDDDDDDDD}, and for {\it mvt} the multiplexer outputs \texttt{DDDDDDDD00000000}. Only signals from Figure~\ref{fig:fig1} that have to be asserted in each time step are listed in Table~\ref{tab:instructions}; all other signals are not asserted. 
The meaning of {\it AddSub} in step $T_2$ of the {\it sub} instruction is that this signal is set to 1, and this setting causes the adder/subtracter unit to perform subtraction using 2's-complement arithmetic. ~\\ The processor in Figure~\ref{fig:fig1} can perform various tasks by using a sequence of instructions. For example, the sequence below loads the number 28 into register $r0$ and then calculates, in register $r1$, the 2's complement value $-28$. \begin{minipage}[t]{15 cm} \begin{lstlisting} mv r0, #28 // original number = 28 mvt r1, #0xFF00 add r1, #0x00FF // r1 = 0xFFFF sub r1, r0 // r1 = 1's-complement of r0 add r1, #1 // r1 = 2's-complement of r0 = -28 \end{lstlisting} \end{minipage} \begin{table}[H] \begin{center} \begin{tabular}{r|c|c|c|c|} \multicolumn{1}{c}{~} & \multicolumn{1}{c}{$T_0$} & \multicolumn{1}{c}{$T_1$} & \multicolumn{1}{c}{$T_2$} & \multicolumn{1}{c}{$T_3$} \rule[-0.075in]{0in}{0.25in}\\ \cline{2-5} {\it mv~} & {\it IR}$_{in}$ & \rule[-0.075in]{0in}{0.25in}{\it Select} = {\it rY} or {\it IR}, & & \\ ~ & ~ & {\it rX$_{in}$}, {\it Done} & & \\ \cline{2-5} {\it mvt~} & {\it IR}$_{in}$ & \rule[-0.075in]{0in}{0.25in}{\it Select} = {\it IR}, & & \\ ~ & ~ & {\it rX$_{in}$}, {\it Done} & & \\ \cline{2-5} \rule[-0.075in]{0in}{0.25in}{\it add~} & {\it IR}$_{in}$ & {\it Select} = {\it rX}, & {\it Select} = {\it rY} or {\it IR}, & {\it Select = G}, {\it rX$_{in}$}, \\ ~ & ~ & {\it A$_{in}$} & {\it G$_{in}$} & {\it Done} \\ \cline{2-5} \rule[-0.075in]{0in}{0.25in}{\it sub~} & {\it IR}$_{in}$ & {\it Select} = {\it rX}, & {\it Select} = {\it rY} or {\it IR}, & {\it Select = G}, {\it rX$_{in}$}, \\ ~ & ~ & {\it A$_{in}$} & {\it AddSub}, {\it G$_{in}$} & {\it Done} \\ \cline{2-5} \end{tabular} \caption{Control signals asserted in each instruction/time step.} \label{tab:control_signals} \end{center} \end{table} \section*{Part I} \addcontentsline{toc}{1}{Part I} Implement the processor shown in Figure~\ref{fig:fig1} using Verilog code, as follows: \begin{enumerate} \item Make a new folder for this part of the exercise. Part of the Verilog code for the processor is shown in parts $a$ to $c$ of Figure~\ref{fig:fig2}, and a more complete version of the code is provided with this exercise, in a file named {\it proc.v}. You can modify this code to suit your own coding style if desired---the provided code is just a suggested solution. Fill in the missing parts of the Verilog code to complete the design of the processor. \lstset{language=Verilog,numbers=none,escapechar=|} \begin{figure}[h] \begin{center} \begin{minipage}[t]{15 cm} \begin{lstlisting}[name=proc] |\label{line:module}| module proc(DIN, Resetn, Clock, Run, Done); input [15:0] DIN; input Resetn, Clock, Run; output Done; parameter T0 = 2'b00, T1 = 2'b01, T2 = 2'b10, T3 = 2'b11; |$\ldots$| declare variables assign III = IR[15:13]; assign IMM = IR[12]; assign rX = IR[11:9]; assign rY = IR[2:0]; dec3to8 decX (IR[4:6], 1'b1, Xreg); // Control FSM state table always @(Tstep_Q, Run, Done) case (Tstep_Q) T0: // data is loaded into IR in this time step if (~Run) Tstep_D = T0; else Tstep_D = T1; T1: |$\ldots$| |$\ldots$| endcase \end{lstlisting} \end{minipage} \caption{Skeleton Verilog code for the processor. 
(Part $a$)} \label{fig:fig2} \end{center} \end{figure} \begin{center} \begin{minipage}[t]{15 cm} \begin{lstlisting}[name=proc] parameter mv = 3'b000, mvt = 3'b001, add = 3'b010, sub = 3'b011; // selectors for the BusWires multiplexer parameter Sel_R0 = 4'b0000, Sel_R1 = 4'b0001, |$\ldots$|, Sel_R7 = 4'b0111, Sel_G = 4'b1000, Sel_D = 4'b1001, Sel_D8 = 4'b1010; // control FSM outputs always @(*) begin Done = 1'b0; Ain = 1'b0; |$\ldots$| // default values for variables case (Tstep_Q) T0: // store DIN into IR IRin = 1'b1; T1: // define signals in time step T1 case (III) mv: begin if (!IMM) Sel = rY; // mv rX, rY else Sel = Sel_D; // mv rX, #D Rin = Xreg; Done = 1'b1; end mvt: // mvt rX, #D |$\ldots$| endcase T2: // define signals in time step T2 case (III) |$\ldots$| endcase T3: // define signals in time step T3 case (III) |$\ldots$| endcase default: ; endcase end // Control FSM flip-flops always @(posedge Clock, negedge Resetn) if (!Resetn) |$\ldots$| regn reg_0 (BusWires, Rin[0], Clock, R0); regn reg_1 (BusWires, Rin[1], Clock, R1); |$\ldots$| regn reg_7 (BusWires, Rin[7], Clock, R7); |$\ldots$| instantiate other registers |and| the adder/subtracter unit \end{lstlisting} \end{minipage} \end{center} \begin{center} Figure 2: Skeleton Verilog code for the processor. (Part $b$) \end{center} \begin{center} \begin{minipage}[t]{15 cm} \begin{lstlisting}[name=proc] // define the internal processor bus always @(*) case (Sel) Sel_R0: BusWires = R0; Sel_R1: BusWires = R1; |$\ldots$| Sel_G: BusWires = G; Sel_D: BusWires = |$\ldots$|; // used for mv, add, ..., with #D Sel_D8: BusWires = |$\ldots$|; // used for mvt default: BusWires = 16'bxxxxxxxxxxxxxxxx; endcase endmodule module dec3to8(W, Y); input [2:0] W; output [0:7] Y; reg [0:7] Y; always @(*) case (W) 3'b000: Y = 8'b10000000; 3'b001: Y = 8'b01000000; 3'b010: Y = 8'b00100000; 3'b011: Y = 8'b00010000; 3'b100: Y = 8'b00001000; 3'b101: Y = 8'b00000100; 3'b110: Y = 8'b00000010; 3'b111: Y = 8'b00000001; endcase endmodule \end{lstlisting} \end{minipage} \end{center} \begin{center} Figure 2: Skeleton Verilog code for the processor. (Part $c$) \end{center} ~\\ \item Set up the required subfolder and files so that your Verilog code can be compiled and simulated using the ModelSim Simulator to verify that your processor works properly. An example result produced by using {\it ModelSim} for a correctly-designed circuit is given in Figure~\ref{fig:fig3}. It shows the value \texttt{0x101C} being loaded into {\it IR} from {\it DIN} at time 30 ns. This pattern represents the instruction \texttt{mv r0,\#28}, where the immediate value $D = 28$ (\texttt{0x1C}) is loaded into $r0$ on the clock edge at 50 ns. The simulation results then show the instruction \texttt{mvt~r1,\#0xFF00} at 70 ns, \texttt{add r0,\#0xFF} at 110 ns, and \texttt{sub r1,r0} at 190 ns. You should perform a thorough simulation of your processor with the ModelSim simulator. A sample Verilog testbench file, {\it testbench.v}, execution script, {\it testbench.tcl}, and waveform file, {\it wave.do} are provided along with this exercise. \end{enumerate} \begin{figure}[H] \begin{center} \includegraphics[scale=1.0]{figures/figure3.png} \end{center} \caption{Simulation results for the processor.} \label{fig:fig3} \end{figure} \section*{Part II} \addcontentsline{toc}{2}{Part II} In this part we will implement the circuit depicted in Figure~\ref{fig:fig4}, in which a memory module and counter are connected to the processor.
The counter is used to read the contents of successive locations in the memory, and this data is provided to the processor as a stream of instructions. To simplify the design and testing of this circuit we have used separate clock signals, {\it PClock} and {\it MClock}, for the processor and memory. Do the following: \begin{enumerate} \item A Quartus project file is provided along with this part of the exercise. Use the Quartus software to open this project, which is called {\it part2.qpf}. \item A sample top-level Verilog file that instantiates the processor, memory module, and counter is shown in Figure~\ref{fig:procmem}. This code is provided in a file named {\it part2.v}; it is the top-level file for the Quartus project {\it part2.qpf}. The code instantiates a memory module called {\it inst\_mem}. You have to create a Verilog file that represents this memory module by using the Quartus software, as described below. ~\\ \begin{figure}[H] \begin{center} \includegraphics[]{figures/figure4.pdf} \end{center} \caption{Connecting the processor to a memory module and counter.} \label{fig:fig4} \end{figure} \newpage \lstset{language=Verilog,numbers=none,escapechar=|} \begin{figure}[h] \begin{center} \begin{minipage}[t]{12.5 cm} \begin{lstlisting}[name=proc] |\label{line:module}| module part2 (KEY, SW, LEDR); input [1:0] KEY; input [9:0] SW; output [9:0] LEDR; wire Done, Resetn, PClock, MClock, Run; wire [15:0] DIN; wire [4:0] pc; assign Resetn = SW[0]; assign MClock = KEY[0]; assign PClock = KEY[1]; assign Run = SW[9]; proc U1 (DIN, Resetn, PClock, Run, Done); assign LEDR[9] = Done; inst_mem U2 (pc, MClock, DIN); count5 U3 (Resetn, MClock, pc); endmodule module count5 (Resetn, Clock, Q); input Resetn, Clock; output reg [4:0] Q; always @ (posedge Clock, negedge Resetn) if (Resetn == 0) Q <= 5'b00000; else Q <= Q + 1'b1; endmodule \end{lstlisting} \end{minipage} \caption{Verilog code for the top-level module.} \label{fig:procmem} \end{center} \end{figure} \item A diagram of the memory module that you need to create is depicted in Figure~\ref{fig:fig_ROM}. Since this memory module has only a read port, and no write port, it is called a {\it synchronous read-only memory (synchronous ROM)}. Note that the memory module includes a register for synchronously loading addresses. This register is required due to the design of the memory resources in the Intel FPGA chip. Use the Quartus IP Catalog tool to create the memory module, by clicking on {\sf Tools} $>$ {\sf IP Catalog} in the Quartus software. In the IP Catalog window choose the {\it ROM:~1-PORT} module, which is found under the {\sf Basic Functions $>$ On Chip Memory} category. Select {\sf Verilog HDL} as the type of output file to create, and give the file the name {\it inst\_mem.v}. Follow through the provided dialogue to create a memory that has one 16-bit wide read data port and is 32 words deep. Figures~\ref{fig:fig5} and ~\ref{fig:fig6} show the relevant pages and how to properly configure the memory. \begin{figure}[t] \begin{center} \includegraphics[]{figures/figure_ROM.pdf} \end{center} \caption{The 32 {\sf x} 16 ROM with address register.} \label{fig:fig_ROM} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=1.0]{figures/figure5.png} \end{center} \caption{{Specifying memory size.}} \label{fig:fig5} \end{figure} To place processor instructions into the memory, you need to specify {\it initial values} that should be stored in the memory when your circuit is programmed into the FPGA chip. 
This can be done by initializing the memory using the contents of a {\it memory initialization file (MIF)}. The appropriate screen is illustrated in Figure~\ref{fig:fig7}. We have specified a file named {\it inst\_mem.mif}, which then has to be created in the folder that contains the Quartus project. Clicking \texttt{Next} two more times will advance to the \texttt{Summary} screen, which lists the names of files that will be created for the memory IP. You should select {\it only} the Verilog file {\it inst\_mem.v}. Make sure that none of the other types of files are selected, and then click \texttt{Finish}. An example of a memory initialization file is given in Figure~\ref{fig:fig_MIF}. Note that comments (\% $\ldots$ \%) are included in this file as a way of documenting the meaning of the provided instructions. Set the contents of your {\it MIF} file such that it provides enough processor instructions to test your circuit. \item The code in Figure~\ref{fig:procmem}, and the Quartus project, includes the necessary port names and pin location assignments to implement the circuit on a DE-series board. The switch {\it SW}$_{9}$ drives the processor's {\it Run} input, {\it SW}$_0$ is connected to {\it Resetn}, {\it KEY}$_0$ to {\it MClock}, and {\it KEY}$_1$ to {\it PClock}. The Run signal is displayed on {\it LEDR}$_{0}$ and {\it Done} is connected to {\it LEDR}$_{9}$. \begin{figure}[H] \begin{center} \includegraphics[scale=1.0]{figures/figure6.png} \end{center} \caption{Specifying which memory ports are registered.} \label{fig:fig6} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=1.0]{figures/figure7.png} \end{center} \caption{Specifying a memory initialization file (MIF).} \label{fig:fig7} \end{figure} \item Use the ModelSim Simulator to test your Verilog code. Ensure that instructions are read properly out of the ROM and executed by the processor. An example of simulation results produced using ModelSim with the MIF file from Figure~\ref{fig:fig_MIF} is shown in Figure~\ref{fig:fig_sim2}. The corresponding ModelSim setup files are provided along with this exercise. \item Once your simulations show a properly-working circuit, you may wish to download it into a DE-series board. The functionality of the circuit on the board can be tested by toggling the switches and observing the LEDs. Since the circuit's clock inputs are controlled by pushbutton switches, it is possible to step through the execution of instructions and observe the behavior of the circuit. 
\end{enumerate} \begin{figure}[H] \begin{center} \begin{minipage}[t]{12.5 cm} \begin{tabbing} {\bf DEPTH} = 32;\\ {\bf WIDTH} = 16;\\ {\bf ADDRESS\_RADIX} = HEX;\\ {\bf DATA\_RADIX} = BIN;\\ {\bf CONTENT}\\ {\bf BEGIN}\\ 00 : 0001000000011100;~~~~~~\=\%~~mv \=r0, \#0xFF00~~\=\% \kill 00 : 0001000000011100; \>\% mv \>r0, \#28\>\%\\ 01 : 0011001011111111; \>\% mvt \>r1, \#0xFF00\>\%\\ 02 : 0101001011111111; \>\% add \>r1, \#0xFF\>\%\\ 03 : 0110001000000000; \>\% sub \>r1, r0\>\%\\ 04 : 0101001000000001; \>\% add \>r1, \#1\>\%\\ 05 : 0000000000000000;\\ $\ldots$ (some lines not shown)\\ 1F : 0000000000000000;\\ {\bf END}; \end{tabbing} \end{minipage} \end{center} \caption{An example memory initialization file (MIF).} \label{fig:fig_MIF} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=1.0]{figures/figure8.png} \end{center} \caption{An example simulation output using the MIF in Figure~\ref{fig:fig_MIF}.} \label{fig:fig_sim2} \end{figure} \section*{Enhanced Processor} \addcontentsline{toc}{3}{Enhanced Processor} It is possible to enhance the capability of the processor so that the counter in Figure~\ref{fig:fig4} is no longer needed, and so that the processor has the ability to perform read and write operations using memory or other devices. These enhancements involve adding new instructions to the processor, as well as other capabilities---they are discussed in the next lab exercise. \end{document}
{ "alphanum_fraction": 0.6793298092, "avg_line_length": 43.0926640927, "ext": "tex", "hexsha": "b265325e6be0bbdc0f8f79755966c76c1171e4ac", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-12-15T16:44:27.000Z", "max_forks_repo_forks_event_min_datetime": "2021-12-15T16:44:27.000Z", "max_forks_repo_head_hexsha": "f4119b617a5af228a032f8f0ff27a299b496ad78", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fpgacademy/Lab_Exercises_Digital_Logic", "max_forks_repo_path": "verilog/lab9/doc/verilog_lab9.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f4119b617a5af228a032f8f0ff27a299b496ad78", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fpgacademy/Lab_Exercises_Digital_Logic", "max_issues_repo_path": "verilog/lab9/doc/verilog_lab9.tex", "max_line_length": 176, "max_stars_count": 1, "max_stars_repo_head_hexsha": "f4119b617a5af228a032f8f0ff27a299b496ad78", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fpgacademy/Lab_Exercises_Digital_Logic", "max_stars_repo_path": "verilog/lab9/doc/verilog_lab9.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-09T23:21:40.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-09T23:21:40.000Z", "num_tokens": 6890, "size": 22322 }
\subsection{Cross Products}
\noindent
A cross product is a way of multiplying two vectors so that the result is a vector.
Although the cross product technically only works for 3D vectors, we will first look at a ``fake'' 2D version to build an intuition.
\begin{equation*}
	\vec{a}\times\vec{b} = a_1b_2-a_2b_1.
\end{equation*}
This ``fake'' 2D cross product gives the area of the parallelogram spanned by $\vec{a}$ and $\vec{b}$.
\begin{equation*}
	\vec{a}\times\vec{b} = \norm{\vec{a}}\norm{\vec{b}}\sin{\theta}
\end{equation*}
where $\theta$ is the angle between $\vec{a}$ and $\vec{b}$.
Another way to think of the magnitude of the cross product, both in 2D and 3D, is as a measure of how perpendicular two vectors are.
\begin{figure}[H]
	\centering
	\includegraphics[width=0.5\textwidth]{../common/vectorsMatrices/CrossProduct.png}
	\caption{Visualization of the cross product}
\end{figure}
\noindent
In 3D, $\vec{a}\times\vec{b}$ is a vector, and similar to the 2D case, the magnitude of $\vec{a}\times\vec{b}$ is equal to the area of the parallelogram spanned by $\vec{a}$ and $\vec{b}$.
\begin{equation*}
	\vec{a}\times\vec{b} = \langle a_2b_3-b_2a_3,a_3b_1-b_3a_1,a_1b_2-b_1a_2 \rangle
\end{equation*}
and
\begin{equation*}
	\norm{\vec{a}\times\vec{b}}=\norm{\vec{a}}\norm{\vec{b}}\sin{\theta}
\end{equation*}
where $\theta$ is the angle between $\vec{a}$ and $\vec{b}$.
Each component of $\vec{a}\times\vec{b}$ gives the area of the parallelogram spanned by $\vec{a}$ and $\vec{b}$ in some plane:
The $x$-component of $\vec{a}\times\vec{b}$ gives the area in the yz-plane ($x$ = 0 plane).
$\vec{a}\times\vec{b}$ is perpendicular, also called ``normal,'' to the plane containing $\vec{a}$ and $\vec{b}$.
Its direction is determined by the right-hand rule.\\
\noindent
This cross product table of the standard basis vectors is useful for providing some insight into the properties of the cross product.
\begin{table}[H]
	\centering
	\renewcommand{\arraystretch}{1.5}
	\begin{tabular}{|c||c|c|c|}
		\hline
		$\overrightarrow{\text{row}}\times\overrightarrow{\text{col}}$ & $\hat{i}$ & $\hat{j}$ & $\hat{k}$ \\
		\hline\hline
		$\hat{i}$ & $0$ & $\hat{k}$ & $-\hat{j}$ \\
		\hline
		$\hat{j}$ & $-\hat{k}$ & $0$ & $\hat{i}$ \\
		\hline
		$\hat{k}$ & $\hat{j}$ & $-\hat{i}$ & $0$ \\
		\hline
	\end{tabular}
\end{table}
\begin{enumerate}[label=]
	\item \textbf{\underline{NOT} Commutative}, but is antisymmetric
	\begin{equation*}
		\vec{a}\times\vec{b} = -\left(\vec{b}\times\vec{a}\right)
	\end{equation*}
	\item \textbf{Scalar Associative}
	\begin{equation*}
		\left(c\cdot\vec{a}\right)\times\vec{b}=\vec{a}\times\left(c\cdot\vec{b}\right)
	\end{equation*}
	\item \textbf{Distributive}
	\begin{equation*}
		\vec{a}\times\left(\vec{b}+\vec{c}\right) = \vec{a}\times\vec{b} + \vec{a}\times\vec{c}
	\end{equation*}
\end{enumerate}
\noindent
One can also think of the cross product as the determinant of a matrix.
\begin{equation*}
	\vec{a}\times\vec{b} = \det\begin{bmatrix}
		\hat{i}& \hat{j} & \hat{k} \\
		a_1 & a_2 & a_3\\
		b_1 & b_2 & b_3
	\end{bmatrix}
\end{equation*}
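\noindent
As a quick worked example of the component formula, take $\vec{a} = \langle 1,2,3 \rangle$ and $\vec{b} = \langle 4,5,6 \rangle$.
Then
\begin{equation*}
	\vec{a}\times\vec{b} = \langle (2)(6)-(5)(3),\ (3)(4)-(6)(1),\ (1)(5)-(4)(2) \rangle = \langle -3,6,-3 \rangle,
\end{equation*}
which is indeed perpendicular to both vectors, since $\vec{a}\cdot\left(\vec{a}\times\vec{b}\right) = (1)(-3)+(2)(6)+(3)(-3) = 0$ and $\vec{b}\cdot\left(\vec{a}\times\vec{b}\right) = (4)(-3)+(5)(6)+(6)(-3) = 0$.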
%% Copyright 2008, The TPIE development team
%%
%% This file is part of TPIE.
%%
%% TPIE is free software: you can redistribute it and/or modify it under
%% the terms of the GNU Lesser General Public License as published by the
%% Free Software Foundation, either version 3 of the License, or (at your
%% option) any later version.
%%
%% TPIE is distributed in the hope that it will be useful, but WITHOUT ANY
%% WARRANTY; without even the implied warranty of MERCHANTABILITY or
%% FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public
%% License for more details.
%%
%% You should have received a copy of the GNU Lesser General Public License
%% along with TPIE.  If not, see <http:%%www.gnu.org/licenses/>

\chapter{Additional Examples}
\label{ch:examples}
\index{examples}

This chapter contains some additional annotated examples of TPIE application code.\comment{LA: Is this chapter still ok?}

\section{Convex Hull}
\label{sec:convex-hull}

\index{convex hull|(}

The convex hull of a set of points in the plane is the smallest convex polygon which encloses all of the points. Graham's scan is a simple algorithm for computing convex hulls. It should be discussed in any introductory book on computational geometry, such as~\cite{preparata:computational}.

Although Graham's scan was not originally designed for external memory, it can be implemented optimally in this setting. What is interesting about this implementation is that external memory stacks are used within the implementation of a scan management object.

First, we need a data type for storing points. We use the following simple class, which is templated to handle any numeric type.

\lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=15,lastline=47,caption={Code taken from \texttt{tpie\_\version/apps/convex\_hull/point.h}}]{../apps/convex_hull/point.h}

Once the points are sorted by their $x$ values, we simply scan them to produce the upper and lower hulls, each of which is stored as a stack pointed to by the scan management object. We then concatenate the stacks to produce the final hull. The code for computing the convex hull of a set of points is thus

\lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=204,lastline=258,caption={Code taken from \texttt{tpie\_\version/apps/convex\_hull/convex\_hull.cpp}}]{../apps/convex_hull/convex_hull.cpp}

The only thing that remains is to define a scan management object that is capable of producing the upper and lower hulls by scanning the points. According to Graham's scan algorithm, we produce the upper hull by moving forward in the $x$ direction, adding each point we encounter to the upper hull, until we find one that induces a concave turn on the surface of the hull. We then move backwards through the list of points that have been added to the hull, eliminating points until a convex path is reestablished. This process is made efficient by storing the points on the hull so far in a stack. The code for the scan management object, which relies on the function \lstinline|ccw()| to actually determine whether a corner is convex or not, is as follows:

\lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=30,lastline=199,caption={Code taken from \texttt{tpie\_\version/apps/convex\_hull/convex\_hull.cpp}}]{../apps/convex_hull/convex_hull.cpp}

The function \lstinline|ccw()| computes twice the signed area of a triangle in the plane by evaluating a 3 by 3 determinant. The result is positive if and only if the three points in order form a counterclockwise cycle.
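In one common convention (shown here only for orientation; the precise form used by TPIE appears in the listing below), this determinant is
\[
\mathrm{ccw}(p,q,r) \;=\;
\det\left(\begin{array}{ccc}
1 & p_x & p_y \\
1 & q_x & q_y \\
1 & r_x & r_y
\end{array}\right)
\;=\; (q_x-p_x)(r_y-p_y)-(q_y-p_y)(r_x-p_x),
\]
which is twice the signed area of the triangle $(p,q,r)$.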
\lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=61,lastline=74,caption={Code taken from \texttt{tpie\_\version/apps/convex\_hull/point.h}}]{../apps/convex_hull/point.h} \index{convex hull|)} \section{List-Ranking} \label{sec:list-ranking} \index{list ranking|(} List ranking is a fundamental problem in graph theory. The problem is as follows: We are given the directed edges of a linked list in some arbitrary order. Each edge is an ordered pair of node ids. The first is the source of the edge and the second is the destination of the edge. Our goal is to assign a weight to each edge corresponding to the number of edges that would have to be traversed to get from the head of the list to that edge. The code given below solves the list ranking problem using a simple randomized algorithm due to Chiang {\em et al}.~\cite{chiang:external}. As was the case in the code examples in the tutorial in Chapter~\ref{ch:tutorial}, \lstinline|#include| statements for header files and definitions of some classes and functions as well as some error and consistency checking code are left out so that the reader can concentrate on the more important details of how TPIE is used. A complete ready to compile version of this code is included in the TPIE source distribution. First, we need a class to represent edges. Because the algorithm will set a flag for each edge and then assign weights to the edges, we include fields for these values. \lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=16,lastline=24,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/list\_edge.h}}]{../apps/list_rank/list_edge.h} As the algorithm runs, it will sort the edges. At times this will be done by their sources and at times by their destinations. The following simple functions are used to compare these values: \lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=28,lastline=36,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/list\_edge.h}}]{../apps/list_rank/list_edge.h} The first step of the algorithm is to assign a randomly chosen flag, whose value is 0 or 1 with equal probability, to each edge. This is done using \lstinline|AMI_scan()| with a scan management object of the class \lstinline|random_flag_scan|, which is defined as follows: \lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=199,lastline=220,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/lr.cpp}}]{../apps/list_rank/lr.cpp} The next step of the algorithm is to separate the edges into an active list and a cancel list. In order to do this, we sort one copy of the edges by their sources (using \lstinline|edgefromcmp|) and sort another copy by their destinations (using \lstinline|edgetocmp|). We then call \lstinline|AMI_scan()| to scan the two lists and produce an active list and a cancel list. A scan management object of class \lstinline|separate_active_from_cancel| is used. \lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=222,lastline=315,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/lr.cpp}}]{../apps/list_rank/lr.cpp} The next step of the algorithm is to strip the cancelled edges away from the list of all edges. The remaining active edges will form a recursive subproblem. 
Again, we use a scan management object, this time of the class \lstinline|strip_active_from_cancel|, which is defined as follows:

\lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=317,lastline=385,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/lr.cpp}}]{../apps/list_rank/lr.cpp}

After recursion, we must patch the cancelled edges back into the recursively ranked list of active edges. This is done using a scan with a scan management object of the class \lstinline|interleave_active_cancel|, which is implemented as follows:

\lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=388,lastline=464,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/lr.cpp}}]{../apps/list_rank/lr.cpp}

Finally, here is the actual function to rank the list.

\lstinputlisting[numbers=left,basicstyle=\ttfamily\small,firstline=468,lastline=656,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/lr.cpp}}]{../apps/list_rank/lr.cpp}

Our recursion bottoms out when the problem is small enough to fit entirely in main memory, in which case we read it in and call a function to rank a list in main memory. The details of this function are omitted here.

\begin{lstlisting}[basicstyle=\ttfamily\small,caption={Code taken from \texttt{tpie\_\version/apps/list\_rank/lr.cpp}}]
////////////////////////////////////////////////////////////////////////
// main_mem_list_rank()
//
// This function ranks a list that can fit in main memory.  It is used
// when the recursion bottoms out.
//
////////////////////////////////////////////////////////////////////////
int main_mem_list_rank(edge *edges, size_t count)
{
    // Rank the list in main memory ...

    return 0;
}
\end{lstlisting}

\index{list ranking|)}

\section{NAS Parallel Benchmarks}

\tobeextended

Code designed to implement external memory versions of a number of the NAS parallel benchmarks is included with the TPIE distribution. Examine this code for examples of how the various primitives TPIE provides can be combined into powerful applications capable of solving real-world problems.

Detailed descriptions of the parallel benchmarks are available from the NAS Parallel Benchmark Report at URL
\href{http://www.nas.nasa.gov/Research/Reports/Techreports/1994/HTML/npbspec.html}{\path"http://www.nas.nasa.gov/Research/Reports/Techreports/1994/HTML/npbspec.html"}.

\section{Spatial Join}
\tobewritten
\comment{LA: Distribution sweeping, SSSJ, etc.}

\comment{LA: Something about R-tree building at some point}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "tpie"
%%% End:
\section{Introduction} \begin{center} \includegraphics[keepaspectratio,width=.7\textwidth]{img/use_case.png} \end{center} \begin{center} \includegraphics[keepaspectratio, width=1\textwidth, height=\textheight]{img/activity.png} \end{center}
% This is a personal resume made by TUSHAR JAIN
% This was last updated on 18-12-2021

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{titlesec}
\usepackage{hyperref}
\usepackage[margin=0.25in]{geometry}
\usepackage{fontawesome}
\usepackage[dvipsnames]{xcolor}

\renewcommand{\maketitle}{
    \begin{center}
        {\Huge\textbf{TUSHAR JAIN}} \\
        \vspace{0.1in}
        {\Large\text{B.Tech Undergraduate in Computer Science and Engineering}} \\
        \vspace{0.15in}
        {\color{RedOrange} \hrule}
        \vspace{0.15in}
        {{\faGithub} \tab \href{https://github.com/TusharJ3011}{\underline{TusharJ3011}}} \hspace{1in}
        {{\faLinkedin} \tab \href{https://www.linkedin.com/in/tushar-jain-9b2359201}{\underline{Tushar Jain}}} \hspace{1in}
        {{\faEnvelope} \tab \href{mailto:[email protected]}{\underline{[email protected]}}} \hspace{1in}
        {{\faPhone} \tab \text{+91 6350290184}}
    \end{center}
}

\titleformat{\section}
{\huge}
{}
{0em}
{\bfseries\color{RedOrange}}

\titleformat{\subsection}
{\Large}
{}
{1em}
{\bfseries}

\titlespacing{\subsection}{0em}{0.5em}{0em}

\begin{document}

\maketitle
{\color{RedOrange} \hrule}

\section{Education}
\vspace{-0.35cm}
\subsection{B.Tech Computer Science and Engineering}
{{\faUniversity} \tab \href{https://www.lnmiit.ac.in/}{\underline{The LNM Institute of Information Technology}}} \hspace{0.25in}
{{\faCalendar} \tab 2020-2024} \hspace{0.25in}
{{\faMapMarker} \tab Jaipur, Rajasthan, India}
\vspace{-0.2cm}
\begin{itemize}
    \item Completed 3rd Semester
    \vspace{-0.3cm}
    \item Current GPA : 7.63
\end{itemize}

\subsection{Class 12th}
{{\faUniversity} \tab \href{https://www.vidhyanjaliacademy.com/}{\underline{Vidhyanjali Academy}}} \hspace{0.25in}
{{\faCalendar} \tab 2019-2020} \hspace{0.25in}
{{\faMapMarker} \tab Kota, Rajasthan, India}
\vspace{-0.2cm}
\begin{itemize}
    \item Final Percentage : 94.6
\end{itemize}

\section{Technical Skills}
\vspace{-0.35cm}
\subsection{Languages}
Python, HTML, CSS, JavaScript, Java, LaTeX, PHP, MIPS (Assembly Language)
\subsection{Libraries/Frameworks}
Tkinter, Flask, Request (APIs), Bootstrap, Beautiful Soup, Selenium, OpenCV
\subsection{Databases}
SQL, MySQL, SQLite
\subsection{Developer Tools}
VS Code, PyCharm, Eclipse IDE
\subsection{Operating Systems}
Windows, Linux

\section{Projects}
\vspace{-0.35cm}
\subsection{\href{https://github.com/TusharJ3011/Shop-Management-System}{\underline{Shop Management System}}}
{\faLaptop} \hspace{0.1in} Python, HTML, CSS, Bootstrap, MySQL
\vspace{-0.2cm}
\begin{itemize}
    \item It is a web-based platform to maintain the sales, employees, and available products of a store.
    \vspace{-0.3cm}
    \item It uses Flask as the backend, MySQL as the database, and Werkzeug encoding for securing logins.
    \vspace{-0.3cm}
    \item It also sends the owner an email warning when a product goes out of stock.
\end{itemize}

\subsection{\href{https://github.com/TusharJ3011/Weather-Forecasting-App}{\underline{Weather Forecasting App}}}
{\faLaptop} \hspace{0.1in} Python
\vspace{-0.2cm}
\begin{itemize}
    \item It is a weather forecasting app which gives the present and hourly weather forecast and the astronomical data of the last 3 days.
    \vspace{-0.3cm}
    \item It uses Tkinter for user interfaces and APIs for getting Geo-Location and Weather Forecast.
\end{itemize}

\subsection{\href{https://github.com/TusharJ3011/Snake-Game-}{\underline{Snake Game}}}
{\faLaptop} \hspace{0.1in} Python
\vspace{-0.2cm}
\begin{itemize}
    \item A replica of the famous Nokia Snake Game.
    \vspace{-0.3cm}
    \item It uses the Turtle library for its functionality.
\end{itemize}

\section{Achievements}
\vspace{-0.35cm}
\begin{itemize}
    \item Cleared State Talent Search Examination (STSE) Rajasthan 2017.
    \vspace{-0.3cm}
    \item Cleared Pre-Regional Mathematics Examination in Kota 2019.
\end{itemize}

\end{document}
\documentclass[../ewet_cwc_report.tex]{subfiles}
\begin{document}

\section*{Executive Summary}
\label{sec:summary}

\noindent
This report summarizes the collective knowledge of the WSU-EvCC cross-institutional team and documents the practical results achieved to date in the field of wind power generation. Additionally, the portion of this work concerning electrical engineering constitutes a senior capstone project achievement for the WSU electrical engineering team. This academic year's success recognizes and builds on the previous wind energy team's extensive research, and the team is grateful for their intellectual legacy.

This year, the team has taken a top-down approach to research and development of the prototype. It was decided to avoid extensive fundamental research and study of competitors' achievements. Design priority was given to commercially available components selected through a trial-and-error approach. This permitted relative freedom from the predisposition to operate in the wake of someone else's success, allowing more experimental courage and greater satisfaction with the accomplishments.

This year, the team has returned to the traditional horizontal-axis wind turbine design with autonomous pitch, yaw, and load control. Although autonomous yaw control was beyond the CWC requirements, the design experience was determined to be beneficial within the scope of the senior capstone project, and perhaps to future teams' research. The electrical team has expanded on the previous year's turbine and load control component ideas and developed its own robust approach to power management, voltage regulation, and generator selection. The mechanical team had less luck with the previous year's work, since very few design solutions from last year's vertical-axis turbine were applicable to the horizontal-axis design. After initial experiments and conceptual deliberation, the control team settled on the rotational speed of the machine as the primary pitch control input and wind speed as the primary load control input, implementing separate controllers for each device connected via a communication bus.

Additionally, beyond the scope of the CWC requirements, some team members' time was dedicated to the development of the HMI, data acquisition, and live power output monitoring systems, with consideration for broader wind farm project development. For this purpose, MakerPlot software was chosen and a suitable application was developed; however, because of limited competence in this field of work and limited human resources, it was not integrated into the final design.

The immediate state of the prototype and the project progression is deemed satisfactory. The team was able to achieve its selected objectives in turbine control and power generation. Work is continuing to finalize the turbine-load communication, the final blade and foundation designs, and revisions to the pitch actuator mechanism.

\end{document}
% !TEX root = sample.tex

\section{Data appendix}
\label{sec:data_appendix}

% =============================================================================
\subsection{Data description}
\label{sub:data_description}

% =============================================================================
\subsection{Sample selection}
\label{sub:sample_selection}

\begin{table}[h]
    \caption{Sample selection}\label{tab:selection}
    \input{\tabdir/sample_selection_XX7.tex}
\end{table}

% =============================================================================
\subsection{Data preprocessing}
\label{sub:data_preprocessing}

% =============================================================================
\subsection{Variable definitions}
\label{sub:variable_definitions}

% =============================================================================
\subsection{Summary statistics}
\label{sub:summary_statistics}

\begin{figure}[h]\centering
    \caption{Distribution of users by age \label{fig:age_distr}}
    \includegraphics[width=0.75\textwidth]{\figdir/sumstats_age_distr.png}
    \note{UK population proportions are obtained from \citet{ons2019mid}.}
\end{figure}
\documentclass{bioinfo}
%\usepackage{doi}
\usepackage{url}
\copyrightyear{2015}
\pubyear{2015}

\begin{document}
\firstpage{1}

\title[lossy-compression]{Lossy compression of DNA sequencing quality data}
\author[Hill \textit{et~al.}]{Christopher M. Hill\,$^{1}$, Andr\'{a}s Szolek\,$^{2}$, Mohamed El Hadidi\,$^{3}$, and Michael P. Cummings\,$^4$\footnote{to whom correspondence should be addressed}}
\address{$^{1}$Department of Computer Science, University of Maryland, College Park, Maryland, 20742 USA\\
$^{2}$Department of Applied Bioinformatics, Center for Bioinformatics, Quantitative Biology Center, and Department of Computer Science, University of T\"{u}bingen, Sand 14, 72076 T\"{u}bingen, Germany\\
$^{3}$Department of Algorithms in Bioinformatics, Center for Bioinformatics, University of T\"{u}bingen, Sand 14, 72076 T\"{u}bingen, Germany \\
$^{4}$Center for Bioinformatics and Computational Biology, University of Maryland, College Park, Maryland, 20742 USA}

\history{Received on XXXXX; revised on XXXXX; accepted on XXXX}

\editor{Associate Editor: XXXXXXX}

\maketitle

\begin{abstract}

\section{Motivation:}
As the cost of sequencing continues to decrease, the rate of sequence data production is increasing, placing greater demands on storing and transferring these vast amounts of data. Most methods of sequencing data compression focus on compressing nucleotide information without any loss of information. Quality data, however, have different properties than nucleotide data, and methods that compress nucleotide sequences efficiently do not perform as well on quality values. Furthermore, although lossless representation might be necessary for nucleotide sequences, it is not an essential requirement for quality values. Previous studies of quality value compression have mostly focused on minimizing the loss of information, with less emphasis on the effects on bioinformatic analyses. In this paper, we evaluate several different compression methods for quality values, and assess the resulting impacts on common bioinformatic analyses using sequence read data: quality control, genome assembly, and alignment of short reads to a reference sequence.

\section{Results:}
Lossy compression of quality information can greatly decrease memory requirements, and our results demonstrate that some compression methods can result in transformed quality values that are quite useful, and in some cases advantageous, compared to original uncompressed values.

\section{Contact:}
\href{[email protected]}{[email protected]}
\end{abstract}

\section{Introduction}

Read data from high-throughput sequencing constitutes the largest category of data in genomics research because of great redundancy, inclusion of quality values, and read-level naming and metadata. Because of this abundance, effective compression of read data has the potential to substantially improve data storage and transfer efficiency.

Quality values comprise a standard component of \textsc{fastq} files~\citep{Cock:2010ve}, a very common format for sequence read data. At the level of the sequence read, the probability of error for each base-call is typically represented by a \textsc{phred} quality value, which is defined as $Q = -10\,\log_{10}P$~\citep{Ewing:1998ly}. Depending on the sequencing technology, these quality values can range from 0 to 93, and are represented with the \textsc{ascii} characters 33 to 126 (with some offset). There is a single quality value per base-call for Illumina sequence reads. Quality values can be used throughout bioinformatics pipelines.
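As a concrete illustration of this encoding, the sketch below decodes an \textsc{ascii}-encoded quality string into \textsc{phred} scores and per-base error probabilities (a minimal example, not part of our analysis pipeline; an offset of 33, the common Sanger/Illumina 1.8+ convention, is assumed, and the function name is ours):

\begin{verbatim}
# Decode an ASCII-encoded quality string (offset 33 assumed)
# into Phred scores and per-base error probabilities.
def decode_qualities(qual_string, offset=33):
    scores = [ord(c) - offset for c in qual_string]
    probs = [10.0 ** (-q / 10.0) for q in scores]
    return scores, probs

scores, probs = decode_qualities("IIIIHHB#")
# scores -> [40, 40, 40, 40, 39, 39, 33, 2]
# probs  -> 1e-4 for Q40, roughly 0.63 for Q2
\end{verbatim}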
Among the most fundamental uses of sequence quality values is as part of the quality assessment and quality control (\textsc{qa/qc}) processes prior to subsequent analysis steps. Quality control based on quality values generally includes two operations: \textit{i}.~filtering, which removes reads that on the whole do not meet arbitrary quality standards, thus reducing the total number of reads; and \textit{ii}.~trimming of low-quality base-calls from reads, which reduces the lengths of the trimmed reads. Quality values can be used by genome assembly software to produce better assemblies~\cite[e.g.,][]{Bryant:2009uq,Gnerre:2011kx}. Short-read alignment programs, such as Bowtie2~\citep{Langmead:2012rw}, use quality values to weight mismatches between read and reference sequences. Software for detecting single nucleotide polymorphisms (\textsc{snp}s) can use quality values~\cite[e.g.,][]{McKenna:2010bh}, and identified \textsc{snp}s with high-quality calls are deemed more reliable than those with low-quality calls, particularly in low-coverage regions.

Previous literature on sequence data compression has largely focused on lossless compression of base calls~\cite[reviewed in][]{Deorowicz:2013hq,Giancarlo:2009fk,Giancarlo:2014rw, Nalbantoglu:2010uq,Zhu:2013qr}, although some recent work has addressed compression of quality values~\cite[e.g.,][]{Canovas:2014fr,Hach:2012ys, janin2013adaptive,Kozanitis:2011kl,Ochoa:2013rt,Tembe:2010ys, Wan:2012kq,DBLP:conf/recomb/YuYB14,zhou2014compression}. Among the several challenges for compression of read data is dealing with different error profiles resulting from differences in underlying chemistries, signal detection and processing mechanisms, inherent biases, and other idiosyncratic properties of distinct high-throughput sequencing technologies.

Although we recognize the need for lossless compression for some purposes and contexts (e.g., archiving, provenance), our perspective is largely pragmatic with a focus on the use of quality values in bioinformatic analyses. From this perspective, some loss of information is deemed acceptable if the inferences from analyses are relatively unaffected.

Sequence reads generated by instruments such as an Illumina HiSeq, the focus of this research due to its preponderance in genomics, are characterized by having relatively few insertion and deletion errors, but substitution (miscall) errors are much more frequent and have context-specific patterns. These errors are non-uniformly distributed over the read length (e.g., error rates increase up to $\sim$16$\times$ at the 3$^{\prime}$ end, and 32.8 -- 67.9\% of reads have low-quality tails at the 3$^{\prime}$ end~\citep{Minoche:2011km}). Recognizing these properties of Illumina sequence reads motivates our exploration of three general classes of lossy compression methods -- binning, modeling, and profiling -- and our consideration of an exemplar of each class. \cite{Canovas:2014fr} and \cite{janin2013adaptive} evaluated the effects of lossy compression on identifying variants within a data set. Here we describe our research investigating lossy compression of sequence read quality values, specifically those associated with Illumina instruments, with the objective of providing some perspective on several strategies rather than developing robust high-quality software for use.
We assess the effects of quality value information loss resulting from compression on \textsc{dna} sequence data analyses, including read preprocessing (filtering and trimming), genome assembly, and read mapping.

\begin{methods}
\section{Methods}

\subsection{Compression strategies}

\subsubsection{Binning}

Quality values can be binned, and the minimum number of bins that allows for any distinction among quality values is two; i.e., two categories of quality: ``good'' and ``bad''. We implement 2-bin encoding by setting a quality value threshold empirically determined by the distribution of quality values across reads. Base-calls are marked ``bad'' if their quality value falls below the first quartile minus 1.5 $\times$ the interquartile range (IQR), which is the difference between the first and third quartile; 1.5 $\times$ IQR is the value used by Tukey's box plot~\citep{mcgill1978variations}. The main benefit of this approach is that it is completely data-dependent, and no assumptions regarding the distribution of the quality values need to be made. We adopt the quality value assignments of the read preprocessing tool Sickle~\citep{sickle}, which correspond to 40 for ``good'' and 10 for ``bad''. With 2-bin encoding, binary encoding is possible, allowing us to use a single bit to represent the quality of a base instead of the standard 8 bits used to store quality values in \textsc{ascii}. An additional possible benefit of 2-bin encoding is increased adjacency of identical values and repeating patterns, properties that may increase effectiveness of subsequent compression using established algorithms~\cite[e.g.,][]{HUFFMAN:1952nr,Ziv77auniversal, DBLP:journals/tit/ZivL78}, although this potential benefit is not evaluated in this study. The economic costs of memory use for binning, in general terms, include no fixed costs, and marginal costs that are the product of the number of base-call quality values times the cost of the encoding.

\cite{Wan:2012kq} provide three similar lossy compression strategies based on binning the base error probability distribution: UniBinning, Truncating, and LogBinning. UniBinning evenly splits the error probability distribution into a user-defined number of partitions. Truncating treats a user-defined number of highest quality values as a single bin. LogBinning works similarly to UniBinning, except it uses the \emph{log} of the error probability distribution, which effectively bins the \textsc{ascii} quality values evenly. The 2-bin encoding examined here can be viewed as something like a combination of LogBinning and Truncating in that we place the highest quality values (as defined above) from the distribution of log error probability values into a single bin.

\begin{figure}[!tpb]
\centerline{\includegraphics[width=3.35in]{profiles_128.eps}}
\caption{Quality profiles obtained by $k$-means clustering on the fragment library from the \textit{Rhodobacter sphaeroides} 2.4.1 data set using $k$ = 128, with each row corresponding to a quality profile. Dark to light colors represent low to high quality values. It is readily visible that the two most distinctive features of quality profiles are their drop-off position and average overall quality. Occasional low values occur early in some profiles, likely reflecting intermittent problems in the sequencing process affecting many reads at a time.}\label{fig:profiles_128}
\end{figure}

Although we focus only on two bins in this work, a greater number of bins can be used in practice, albeit with a higher cost in memory usage.
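Before considering finer binnings, the 2-bin rule described above can be sketched as follows (an illustration only, not our actual implementation; the threshold rule and the substitute values of 40 and 10 follow the text, while the function and variable names are ours, and the per-read quality values are assumed to be available as integer arrays):

\begin{verbatim}
import numpy as np

# Two-bin encoding sketch: base-calls with quality below
# Q1 - 1.5*IQR are "bad", all others "good", so a single
# bit per base-call suffices (1 = good, 0 = bad).
def two_bin_encode(reads_quals):
    all_q = np.concatenate(reads_quals)
    q1, q3 = np.percentile(all_q, [25, 75])
    threshold = q1 - 1.5 * (q3 - q1)
    return [np.asarray(q) >= threshold for q in reads_quals]

def two_bin_decode(bits, good=40, bad=10):
    return [np.where(b, good, bad) for b in bits]
\end{verbatim}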
For example, an additional bin, thus giving bins of good, bad, and intermediate, could be used for quality values near the border between bins in the 2-bin encoding. Additional bins beyond two, and their resulting compressibility and effect on downstream analyses, are not considered here.

\subsubsection{Modeling}

If quality values are modeled, compression is conceivably possible by replacing the original quality values by a representation of the model. For example, quality values can be conceptualized as bivariate data with the ordered nucleotides (1 to read length) representing the abscissa, and quality values representing the ordinate. We model read quality values as polynomial functions obtained with least-squares fitting, as one approach to the compression of read quality values by modeling. Although polynomial functions have significantly fewer parameters than a read-length string of raw quality values (e.g., one to eight coefficients in our approach), the use of double-precision floating-point numbers to store coefficients greatly limits the compression potential of the models. The coefficients could be represented with fewer bits, perhaps with some loss of precision, but we have not considered alternative representations in this study. The economic costs of memory use for model-based compression, in general terms, include no fixed costs, and marginal costs that are the product of the number of reads times the cost of representing the model parameters.

\subsubsection{Profiling}

Sets of strings representing quality values show similar trends over their length, and it is possible to identify common patterns in the data and use them as reference profiles to approximate individual sequences of quality values. Here we use $k$-means clustering, a vector quantization method that partitions a set of samples into $k$ sets that minimize within-cluster sum of squares~\citep{macqueen1967some}. We sampled 1 $\times$ 10$^{4}$ reads at random and computed cluster centers as read quality profiles using a heuristic iterative refinement approach that quickly converges to a locally optimal minimum~\citep{hartigan1979algorithm}. All reads are then evaluated, and the nearest quality profile in Euclidean space is assigned to each read as its compressed representation. The compressed quality file therefore consists of an index enumerating the $k$ quality profiles, and a binary part containing the assigned quality profile index for each read. Although this approach is not able to capture some read-specific differences in quality values, it does approximate the overall patterns in quality values.

The economic costs of memory use for profile-based compression, in general terms, include fixed costs associated with representing the profiles, which is the product of the number of profiles times the cost of encoding them. These fixed costs are amortized over the entire set of reads to which they apply, and thus on a read basis vary as a reciprocal function of the number of reads. Additionally there are marginal costs that are the product of the number of reads encoded times the cost of the encoding. Profiling, like modeling but unlike binning, has the advantageous property that the marginal cost per base-call varies as the reciprocal of the read length. Thus the per-base-call costs of memory use for profile-based compression decrease with increases in either or both the number of reads and the read length.
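A minimal sketch of the modeling and profiling strategies just described (illustrative only, not our production code; \texttt{quals} is assumed to be a two-dimensional NumPy array with one row of quality values per read, the defaults mirror the values mentioned in the text, and a small hand-rolled Lloyd iteration stands in for a library $k$-means routine) is:

\begin{verbatim}
import numpy as np

# Modeling sketch: replace a read's qualities by the
# coefficients of a least-squares polynomial fit.
def fit_poly(read_quals, degree=3):
    x = np.arange(len(read_quals))
    return np.polyfit(x, read_quals, degree)

def eval_poly(coeffs, read_len):
    return np.polyval(coeffs, np.arange(read_len))

# Profiling sketch: learn k quality profiles from a random
# sample of reads, then store only the index of the nearest
# profile for each read.
def build_profiles(quals, k=128, n_sample=10000,
                   n_iter=25, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(quals), n_sample, replace=False)
    sample = quals[idx].astype(float)
    centers = sample[rng.choice(n_sample, k, replace=False)]
    for _ in range(n_iter):            # Lloyd's iterations
        labels = assign_profiles(sample, centers)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = sample[labels == j].mean(axis=0)
    return centers

def assign_profiles(quals, centers):
    best = np.zeros(len(quals), dtype=int)
    best_d = np.full(len(quals), np.inf)
    for j, c in enumerate(centers):    # one profile at a time
        d = ((quals - c) ** 2).sum(axis=1)
        closer = d < best_d
        best[closer], best_d[closer] = j, d[closer]
    return best
\end{verbatim}

The compressed representation then consists of the $k$ profiles plus one small integer per read, matching the fixed-plus-marginal cost structure described above.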
Profiles can be obtained from techniques other than $k$-means clustering. For example, we could store the profiles of polynomial functions to increase the compressibility of polynomial regression modeling. In other words, we could use a spline (a function that is piecewise-defined by polynomial functions) to represent quality values. However, we have not explored this or other approaches here.

\subsection{Data sets}

We used several Illumina sequence read data sets, taken from the \textsc{gage} (Genome Assembly Gold-Standard Evaluations) data~\citep{Salzberg:2012rc} except as noted. These data sets are as follows.

\textit{Rhodobacter sphaeroides} 2.4.1, which are generated from a fragment library (insert size of 180 bp; 2,050,868 paired-end reads; read length 101 nt) and short-jump library (insert size of 3,500 bp; 2,050,868 reads; read length 101 nt). The corresponding reference sequence was obtained from the \textsc{ncbi} RefSeq database (NC\_007488.1, NC\_007489.1, NC\_007490.1, NC\_007493.1, NC\_007494.1, NC\_009007.1, NC\_009008.1).

\textit{Homo sapiens} chromosome 14 data, which are generated from a fragment library (insert size of 155 bp; 36,504,800 paired-end reads) and short-jump library (insert sizes ranging from 2283-2803 bp; 22,669,408 reads; read length 101 nt). The corresponding reference sequence was obtained from the \textsc{ncbi} RefSeq database (NC\_000014.8).

\textit{Escherichia coli} str. K-12 MG1655 MiSeq data was downloaded from \url{http://www.illumina.com/systems/miseq/scientific_data.html}, which are generated from a fragment library (insert size of 180 bp; 11,458,940 paired-end reads; read length 151 nt). The corresponding reference sequence was obtained from the \textsc{ncbi} RefSeq database (NC\_000913.2).

\textit{Mus musculus} data was downloaded from \url{http://trace.ddbj.nig.ac.jp/DRASearch/run?acc=SRR032209} (18,828,274 reads; length 36 nt).

\subsection{Comparison to other methods}

For comparison to other developed methods we use \textsc{QualComp}, a lossy compression tool~\citep{Ochoa:2013rt}. The program models quality values as a multivariate Gaussian distribution, computing the mean and covariance for each position in the read, and these values are stored by the decoder to later reconstruct a \emph{representative} quality value. \textsc{QualComp} takes as input a user-specified rate (bits/read), and these bits are apportioned among positions within the read to minimize the average error. The quality values can be clustered beforehand to produce more accurate models. The economic costs of memory use for compression with \textsc{QualComp}, in our interpretation, appear to include fixed costs for storing the cluster representatives, and marginal costs that are the product of the number of reads times the cost of representing the model parameters.

\subsection{Subsequent (secondary) compression}

We secondarily compress all otherwise compressed data sets using the Burrows-Wheeler algorithm~\citep{bwt} via \textsc{bz}ip2, and it is the results of these subsequent compressions that we report. Although other algorithms~\cite[e.g.,][]{HUFFMAN:1952nr,Ziv77auniversal, DBLP:journals/tit/ZivL78} might be particularly effective for subsequent compression of some of the compressed data representations examined, we have not explored such possibilities here, and instead restrict ourselves to \textsc{bz}ip2 because it is commonly used and readily available.
Similarly, it may very well be possible to obtain increased secondary compression by reordering (e.g., sorting) compressed representations (e.g., sets of bin designations, model parameters, or profile assignments), but these operations have also not been explored here.

\subsection{Performance evaluation}

As a measure of compression effectiveness, which reflects the sum of fixed and marginal costs, we use bits/base-call, defined as the size of the compressed representation of quality values (in bits) divided by the number of quality values represented. As a measure of information loss we use mean squared error (\textsc{mse}) as a loss function, and define it as $\frac{1}{n}\sum_{i=1}^{n}{(Q_i'-Q_i)^2}$, where $n$ is the number of quality values, $Q_i'$ is the compressed/decompressed quality value, and $Q_i$ is the original quality value associated with position $i$.

We evaluate effects of information loss from quality value compression on the quality control steps of trimming and read filtering, which were performed using Sickle~\citep{sickle}, and make comparisons to the original uncompressed data. Sickle starts at the ends of the read and uses a sliding window (0.1 $\times$ the read length) to find locations where the mean quality in the window falls below a given threshold (20 by default). The sequence is then trimmed from this position to the read end. If the trimmed sequence length is less than a certain threshold (20 by default), then the sequence is discarded.

We evaluate effects of information loss from quality value compression on \emph{de novo} genome assembly performance using contiguity statistics, log average read probability (\textsc{lap})~\citep{Ghodsi:2013hb}, and a collection of reference-based metrics. The contiguity statistics include N50, which is defined as the median contig size (the length of the largest contig $c$ such that the total size of the contigs larger than $c$ exceeds half of the assembly size), and corrected N50, which is the recalculated N50 size after the contigs are broken apart at locations of errors. The \textsc{lap} score can be viewed as a log-likelihood score, where a larger value is better. We use a script provided by the \textsc{gage} reference-based evaluation to count single nucleotide polymorphisms (\textsc{snp}s), relocations, translocations, and inversions. The reference-based metrics are normalized by the length of the assembly to facilitate comparison. For the genome assembly we used software that makes use of quality values in the assembly process: \textsc{allpaths-lg}~\citep{Gnerre:2011kx} version r50191 with default settings and 32 threads.

\end{methods}

\section{Results}

\subsection{Compression effectiveness versus information loss}

\begin{figure*}[!tb]
\centerline{\includegraphics[width=7in]{compression_results.eps}}
\caption{The relationship of bits/base-call and mean squared error for quality value compression methods applied to four data sets: \textit{Rhodobacter sphaeroides} 2.4.1; \textit{Homo sapiens} chromosome 14; \textit{Escherichia coli} str. K-12 MG1655; and \textit{Mus musculus}. Point labels correspond to different compression methods: 2B, 2-bin encoding; P$n$, profiling with $n$ profiles; R$n$, modeling with polynomial regression of degree $n$; Q$n$, \textsc{q}ual\textsc{c}omp with rate parameter of $n$.
Arrows denote the corresponding lossless compression using \textsc{bz}ip2, with the black arrow corresponding to the original data.}
\label{fig:mse_vs_bpbp}
\end{figure*}

We compare the quality-value \textsc{mse} versus bits/base-call of the \textit{Rhodobacter sphaeroides}, \textit{Homo sapiens}, \textit{Escherichia coli}, and \textit{Mus musculus} data sets for quality values resulting from the compression methods examined (Figure \ref{fig:mse_vs_bpbp}). Here we present the fragment libraries for the \textit{Rhodobacter sphaeroides} and \textit{Homo sapiens} data sets; the corresponding short-jump library results are available in Supplemental Table 1. Storing the original uncompressed quality values requires 1 byte/base-call because they are stored in \textsc{ascii} format, and the lossless compression of each original data set using \textsc{bz}ip2 ranges from 2.19--3.10 bits/base-call; these values provide a reference point using a widely available general compression program.

Values from the same class of compression method tend to cluster together across the different data sets, with the 0-degree polynomial regression, profile encodings, and \textsc{q}ual\textsc{c}omp yielding the lowest bits/base-call values. \textsc{q}ual\textsc{c}omp with the rate parameter set to 100 bits/read has the lowest \textsc{mse}, but requires $\sim$10--15$\times$ more memory than the profile encoding methods for only a $\sim$2--3$\times$ reduction in error. When the rate parameter of \textsc{q}ual\textsc{c}omp is set to match the profile encoding methods, \textsc{q}ual\textsc{c}omp performs slightly worse in terms of \textsc{mse}. For the \textit{Rhodobacter sphaeroides} fragment library, \textsc{q}ual\textsc{c}omp with rate set to 10 bits/read (0.099 bits/base-call) has an \textsc{mse} of 17.29, whereas 256-profile encoding only requires 0.079 bits/base-call and has an \textsc{mse} of 11.85. As the degree of the polynomial increases, the bits/base-call increase and the \textsc{mse} decreases at an exponential rate. The 7th-degree polynomial regression results in the largest bits/base-call, as it requires 64 bytes per read before subsequent compression with \textsc{bz}ip2. For the \textit{Mus musculus} data set, which has read lengths of only 36 nt, the memory required for storage, even after subsequent compression with \textsc{bz}ip2, exceeds that for the original \textsc{ascii} quality values.

As an additional point of reference regarding compression effectiveness, the recently published Read-Quality-Sparsifier~\citep{DBLP:conf/recomb/YuYB14} achieved best-case compression of 0.254 bits/base-call on \textit{Homo sapiens} chromosome 21 data (read lengths ranging from 50--110 nt), and a mean compression of 1.841 bits/base-call; both values are larger than all of the profile results and than some results of the other methods presented here (Figure~\ref{fig:mse_vs_bpbp}).

\subsection{Effects on sequence read preprocessing}

Preprocessing involves two steps: \emph{discarding} reads that are deemed to be poor-quality overall, and \emph{trimming} the poor-quality regions of the reads. After trimming, the majority of compression methods resulted in retaining more base-calls than the original quality values (Figure \ref{fig:preprocessing}). In general, as a given compression method increases in complexity --- i.e., as the number of profiles, the polynomial degree, or the rate increases --- the number of base-calls retained more closely approximates the number of base-calls retained using the original quality values.
The compression methods on the \textit{Mus musculus} data set have the greatest proportion of retained base-calls compared to the original quality values. The \textit{Escherichia coli} MiSeq data set has the smallest range. The 2-bin approach is the only compression method that results in a larger number of trimmed base-calls compared to the original uncompressed reads across all data sets. Sickle uses a sliding window approach to smooth the read quality values before it trims. In the 2-bin approach, there is an uneven distribution of values per bin. In other words, \emph{bad} quality values may range from 0--33, whereas \emph{good} values may only range from 34--40. Thus, mid-range quality values that are above Sickle's threshold (20 by default) are mapped to the low bin value and fall below that threshold after compression, resulting in an increased number of trimmed bases. The 0-degree polynomial regression results in the highest proportion of base-calls kept. If the mean quality value of the read is above the filtering threshold, then no base-calls are trimmed. Only reads that consist mostly of low quality values will be discarded.

It is important to highlight that even though a compression method may result in the same number of trimmed base-calls as the uncompressed quality values, it does not mean the \emph{same} base-calls were retained. For example, pre-processing of the 1st-degree and 5th-degree polynomial regression model compressed reads of the \textit{Rhodobacter sphaeroides} fragment library retains approximately the same number of base-calls. However, if we examine the specific reads discarded, the 5th-degree model discards approximately two-thirds fewer reads than the 1st-degree model (4,640 and 12,066 reads, respectively; Supplemental Table 2), which means there are differences in the base-calls trimmed.

\begin{figure}[!tbp]
\centerline{\includegraphics[width=3.65in]{preprocessing_results.eps}}
\caption{Preprocessing results for the \textit{Rhodobacter sphaeroides} 2.4.1 and \textit{Homo sapiens} chromosome 14 fragment libraries, and the \textit{Escherichia coli} str. K-12 MG1655 and \textit{Mus musculus} data sets. Reads were trimmed using Sickle. The total number of bases filtered by each compression method is compared with the number of bases filtered using the uncompressed quality values.}
\label{fig:preprocessing}
\end{figure}

\subsection{Effects on genome assembly}

No compression method resulted in the uniformly best assembly of the \textit{Rhodobacter sphaeroides} data set in all metrics, although several methods resulted in better assemblies than the original uncompressed data (Fig.~\ref{fig:assembly_ranks}). Among the compression methods, the profile encoding performed best, polynomial regression modeling performed worst, and other methods were intermediate (Fig.~\ref{fig:assembly_ranks}). The lossy compression methods largely preserve the contiguity found in the assembly produced using the reads with the original quality values. All compression methods other than 0-degree polynomial regression produce an N50 ranging from 3.17--3.22 Mbp (see Supplemental Table 3). Despite the similar contiguity statistics, the different compression methods vary markedly in the number of \textsc{snp}s. The 2-bin and profile methods exhibited the fewest \textsc{snp}s compared to the reference genome, thus outperforming the assembly using the original quality values in this characteristic. A more in-depth evaluation is needed to determine whether these compression methods are missing actual \textsc{snp}s.
It is important to highlight that using the original uncompressed quality values does not produce the best assembly in terms of any of the metrics. The assembly based on the original uncompressed quality values scores worse than the top overall assembly (256-profile encoding) for number of assembled bases, missing reference bases, N50, \textsc{snp}s, indels $>$5bp, and relocations. The assembly using the original uncompressed quality values has an error rate of $\sim$8.75 errors/100 kbp of assembled sequence, and the 256-profile encoding has an error rate of $\sim$8.02 errors/100 kbp (Supplemental Table 3).

In general, the greater the polynomial degree, the better the overall assembly; however, the 5th-degree polynomial regression performs slightly worse than the 3rd-degree polynomial. The respective ranks in terms of N50 and relocations are fairly distant, which lowers the overall ranking of the 5th-degree polynomial slightly below that for the 3rd-degree polynomial model. The 1st- and 0-degree polynomial regression methods perform poorly in all metrics except assembled bases. One explanation for this observation is that the high-error portions of reads are being marked as high quality, so \textsc{allpaths-lg} is unable to trim or error-correct the reads. Assembled sequences that overlap may be unable to align across the errors at the end of the reads, artificially inflating the assembly size.

Among the different \textsc{q}ual\textsc{c}omp rate parameters, the 10 bits/read rate ranked highest overall, outperforming the other rate parameters in terms of corrected N50, fewest missing reference bases, \textsc{snp}s, and indels $>$5bp. With the exception of the 6 bits/read rate, the assemblies decrease in rank with the increase in the rate parameter for corrected N50 and fewest missing reference bases. This trend runs counter to the decrease in \textsc{mse} of the different rates.

\begin{figure}[!tbp]
\centerline{\includegraphics[width=3.65in]{rhodo_assembly_results.eps}}
\caption{Rankings of compression methods based on \textit{Rhodobacter sphaeroides} assembly attributes sorted by overall rank. Assemblies were constructed using \textsc{allpaths-lg}. Rankings above the median value are in cyan, those below the median value in magenta.}
\label{fig:assembly_ranks}
\end{figure}

\begin{table*}[!tbhp]
\centering
\caption[]{Comparison of mapping for original reads and reads with compressed/decompressed quality values.
Reads and reference genome are for \textit{Rhodobacter sphaeroides}, and mapping was performed using Bowtie2.} \begin{small} \begin{tabular}{lr|cc|cc|cc|cc|cc} & & \multicolumn{2}{c|}{max-qual} & \multicolumn{2}{c|}{min-qual} & \multicolumn{2}{c|}{2-bin} & \multicolumn{2}{c|}{regression (0)} & \multicolumn{2}{c}{regression (1)} \\ & & mapped & unmapped & mapped & unmapped & mapped & unmapped & mapped & unmapped & mapped & unmapped \\ \cline{2-12} & mapped & 746716 & 145897 & 892613 & 0 & 891864 & 749 & 851682 & 40931 & 883390 & 9223 \\ {\em original} & unmapped & 0 & 132821 & 10821 & 122000 & 186 & 132635 & 67 & 132754 & 55 & 132766 \\ \cline{2-12} & proportion & 0.728 & 0.272 & 0.881 & 0.119 & 0.870 & 0.130 & 0.831 & 0.169 & 0.862 & 0.138 \\ \end{tabular} \bigskip \begin{tabular}{lr|cc|cc|cc|cc|cc} & & \multicolumn{2}{c|}{regression (3)} & \multicolumn{2}{c|}{regression (5)} & \multicolumn{2}{c|}{regression (7)} & \multicolumn{2}{c|}{profile (64)} & \multicolumn{2}{c}{profile (128)} \\ & & mapped & unmapped & mapped & unmapped & mapped & unmapped & mapped & unmapped & mapped & unmapped \\ \cline{2-12} & mapped & 889537 & 3076 & 891019 & 1594 & 891479 & 1134 & 891753 & 860 & 891952 & 661 \\ {\em original} & unmapped & 117 & 132704 & 155 & 132666 & 154 & 132667 & 144 & 132677 & 143 & 132678 \\ \cline{2-12} & proportion & 0.868 & 0.132 & 0.869 & 0.131 & 0.870 & 0.130 & 0.870 & 0.130 & 0.870 & 0.130 \\ \end{tabular} \bigskip \begin{tabular}{lr|cc|cc|cc|cc|cc} & & \multicolumn{2}{c|}{profile (256)} & \multicolumn{2}{c|}{\textsc{q}ual\textsc{c}omp (6)} & \multicolumn{2}{c|}{\textsc{q}ual\textsc{c}omp (10)} & \multicolumn{2}{c|}{\textsc{q}ual\textsc{c}omp (30)} & \multicolumn{2}{c}{\textsc{q}ual\textsc{c}omp (100)} \\ & & mapped & unmapped & mapped & unmapped & mapped & unmapped & mapped & unmapped & mapped & unmapped \\ \cline{2-12} & mapped & 892051 & 562 & 891375 & 1238 & 891777 & 836 & 892233 & 380 & 892454 & 159 \\ {\em original} & unmapped & 119 & 132702 & 304 & 132517 & 265 & 132556 & 220 & 132601 & 172 & 132649 \\ \cline{2-12} & proportion & 0.870 & 0.130 & 0.870 & 0.130 & 0.870 & 0.130 & 0.870 & 0.130 & 0.870 & 0.130 \\ \end{tabular} \end{small} \label{tab:aligner} \end{table*} \subsection{Effects on read mapping} Some short read alignment tools, such as Bowtie2 (version 2.2.3), which was used here, utilize quality value information when evaluating potential alignments. The reads with original uncompressed and compressed/decompressed quality values were mapped with Bowtie2 to the \textit{Rhodobacter sphaeroides} reference genome. The total, shared, and unique proportions of mapped reads are calculated with respect to the results for the original uncompressed quality values, as shown in Table \ref{tab:aligner}. Additionally, to assess the effect of quality values on mapping in general, Bowtie2 was adjusted so that the maximum and minimum mismatch penalty were equivalent to maximum and minimum quality scores (with parameters --mp 6,6 and --mp 2,2, respectively). We evaluate the compression methods using two approaches.
In the first approach, we order the compression methods based on how similar their alignment results are to those obtained with the original uncompressed quality values --- i.e., the number of reads aligned with both the original uncompressed quality values and the quality values resulting from compression, plus the number of reads unaligned with both, minus the number of reads aligned with only one of the two. In the second approach, we order the compression methods by total proportion of aligned reads. The best compression method in terms of similarity with the uncompressed reads is \textsc{q}ual\textsc{c}omp with rate 100 bits/read, followed by \textsc{q}ual\textsc{c}omp with rate 30 bits/read, 256-profile encoding, 128-profile encoding, 2-bin encoding, 64-profile encoding, \textsc{q}ual\textsc{c}omp with rate 10 bits/read, 7th-degree polynomial regression, \textsc{q}ual\textsc{c}omp with rate 6 bits/read, and finally, 5th-degree through 0-degree polynomial regression. Ranking the compression methods by overall proportion of reads aligned produces an ordering identical to that above. Aside from 0-degree polynomial regression (0.831), all other compression methods have a read alignment rate between 0.861 and 0.870. The proportion of reads aligned for the uncompressed reads is 0.870. Most of the compression methods did not vary greatly in terms of the number of reads that were mapped \emph{only} using quality values resulting from compression; however, there is a sizable difference in the number of reads that are originally mapped, but unmapped by the compression methods. \textsc{q}ual\textsc{c}omp with rate 100 bits/read results in the fewest missing original read alignments (159). Increasing the regression model polynomial degree results in a decreasing number of reads that are originally mapped, but unmapped by the regression model (40,931 and 1,134 reads for 0-degree and 7th-degree, respectively). There is no such trend for reads that are mapped only by the regression model. Note that the 2-bin method aligns a greater proportion of reads than the various regression models. If a poor-quality base-call is flanked by high-quality base-calls, then the low-degree polynomial regression models tend to smooth out the quality values, erroneously marking the low-quality base-calls as higher quality. During alignment, Bowtie2 penalizes mismatches at high-quality base-calls more than mismatches at lower-quality base-calls. Thus, the polynomial regression models incur a high penalty for these base-calls, resulting in fewer alignments, despite having a better \textsc{mse} than 2-bin in most cases. Setting all base-calls as minimum quality results in the highest proportion of mapped reads (0.881). Conversely, setting all base-calls as maximum quality results in the lowest proportion of mapped reads (0.728). \section{Discussion} We have examined several simple and general approaches for lossy compression of read quality values, and their effect on bioinformatic analyses. Our results demonstrate that some compression methods can result in quality values that are quite useful, and in some cases advantageous, compared to original uncompressed values. Downstream applications dictate the relative performance of lossy compression methods. Some bioinformatics tools proved to be robust to moderate information loss in base-call quality.
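To make the 2-bin behaviour described in the pre-processing and read-mapping sections concrete, the following Python sketch applies a two-bin quality transform to a short list of Phred quality values. It is purely illustrative: the bin boundary of 34 follows the 0--33/34--40 description given earlier, but the representative values assigned to each bin are hypothetical and not necessarily those used in our implementation.
\begin{verbatim}
# Illustrative 2-bin quality transform; representative values are hypothetical.
LOW_REP, HIGH_REP = 10, 37

def two_bin(qualities, boundary=34):
    """Collapse Phred quality values into a 'bad' and a 'good' bin."""
    return [LOW_REP if q < boundary else HIGH_REP for q in qualities]

# A mid-range value such as 25 passes a trimming threshold of 20 before
# compression, but falls below it afterwards, increasing the trimmed bases.
print(two_bin([38, 25, 12]))  # -> [37, 10, 10]
\end{verbatim}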
The use of quality values modified via several compression/decompression processes results in approximately the same total number of base-calls passing through the combined filtering and trimming processes as when using the original quality values, though the absolute and relative numbers of reads filtered and base-calls trimmed may differ. Genome assembly appears robust to standard sequencing errors, and compression-transformed quality values, particularly those associated with profile-based compression, result in assemblies better than those based on the original uncompressed quality values by most measures. Among the potential benefits of compressing quality values is the ability to perform quality control and possibly other operations directly on the compressed representations of the data. For example, with profile-based compression, each of the $k$ profiles can be evaluated for (pre-)processing operations such as filtering and trimming, and the operations transitively applied to the entire set of reads, thus saving substantial computation. Quality values for base-calls in \textsc{dna} sequencing are not sacrosanct; they are representations of estimates for measurement (detection) error determined by various mechanisms including signal processing characteristics and algorithms. The true worth of quality values lies in the utility they provide in bioinformatic analyses. In some cases, as demonstrated here, this value is enhanced through transformation associated with compression. Thus, appropriate compression of quality values can both reduce the associated memory cost and improve analysis results. \section{Availability} Implementations of the compression methods, written in Python and R~\citep{R-Core-Team:2014yq}, and a pipeline to reproduce the results are available at \url{https://github.com/cmhill/q-compression}. \section*{Acknowledgement} This project was initiated at the 2014 Bioinformatics Exchange for Students and Teachers (\textsc{best}) Summer School, which was funded by the offices of the Dean of The Graduate School, University of Maryland, and the Rektor of the University of T\"{u}bingen. Additional funding included an International Graduate Research Fellowship from The Graduate School, University of Maryland, to CMH, a Global Partnerships-Faculty Travel Grant from the Office of International Affairs to MPC, and funding from the University of T\"{u}bingen.
\bibliographystyle{natbib}
%\bibliographystyle{achemnat}
%\bibliographystyle{plainnat}
%\bibliographystyle{abbrv}
%\bibliographystyle{bioinformatics}
%
%\bibliographystyle{plain}
%
\bibliography{compression}
\end{document}
{ "alphanum_fraction": 0.7958025376, "avg_line_length": 52.2994722955, "ext": "tex", "hexsha": "79c34636a2fee1d1773b212c874a3874fc040e0e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a6c83d12907d51efff124fa96a0a2f0f117b66b5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cmhill/q-compression", "max_forks_repo_path": "bioinformatics_manuscript/q-compression.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a6c83d12907d51efff124fa96a0a2f0f117b66b5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cmhill/q-compression", "max_issues_repo_path": "bioinformatics_manuscript/q-compression.tex", "max_line_length": 261, "max_stars_count": null, "max_stars_repo_head_hexsha": "a6c83d12907d51efff124fa96a0a2f0f117b66b5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cmhill/q-compression", "max_stars_repo_path": "bioinformatics_manuscript/q-compression.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9868, "size": 39643 }
\section{compare} \index{compare} \begin{shaded} \begin{alltt} /** compare +-TRINARY-+ (1) +-PAD SPACE-+ >>--COMPARE--+---------+------------+---+-----------+---------------------->< +-BINARY--+ (2) | +-PAD-Xorc--+ | | | <-----------------+ | +--ANY DString------+--+ (4) (5) +--EQUAL DString----+ (4) +--LESS DString-----+ (3) (4) +--MORE DString-----+ (3) (4) +--NOTEQUAL DString-+ (4) (1) -1 = Primary is shorter/less, 0 = equal, 1 = Secondary is shorter/less (2) 0 = equal, 1 = not equal (3) Primary is LESS/shorter (or MORE/longer) than secondary (4) DStrings can use any of the following escapes (or the lowercase) for the unequal situation: \begin{verbatim} \C (count) for the record number, \B (byte) for column number \P (primary) for the primary stream record \S (secondary) for the secondary stream record \L (Least) for the stream number that is shortest, -1 if equal \M (Most) for the stream number that is longest, -1 if equal (5) Equal or not, this DString precedes any of the others. \end{verbatim} \end{alltt} \end{shaded}
{ "alphanum_fraction": 0.4635568513, "avg_line_length": 40.3529411765, "ext": "tex", "hexsha": "94869070c00112a7a2583c1c4ea435f89e7a9a8d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_forks_repo_licenses": [ "ICU" ], "max_forks_repo_name": "RexxLA/NetRexx", "max_forks_repo_path": "documentation/njpipes/compare.tex", "max_issues_count": 25, "max_issues_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_issues_repo_issues_event_max_datetime": "2022-02-01T16:14:50.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-24T12:13:53.000Z", "max_issues_repo_licenses": [ "ICU" ], "max_issues_repo_name": "RexxLA/NetRexx", "max_issues_repo_path": "documentation/njpipes/compare.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_stars_repo_licenses": [ "ICU" ], "max_stars_repo_name": "RexxLA/NetRexx", "max_stars_repo_path": "documentation/njpipes/compare.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 368, "size": 1372 }
\documentclass{book} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage[small]{titlesec} \titleformat{\section}{\normalsize\bfseries}{\thesection.}{0.5em}{} \usepackage[ thmmarks, thref]{ntheorem} \usepackage{chngcntr} \usepackage{cleveref} \renewcommand\thesection{\arabic{section}} \theoremstyle{plain} \theoremheaderfont{\scshape} \theoremseparator{.~---} \newtheorem{thm}{Theorem} \counterwithin*{thm}{chapter} \pagestyle{plain} \begin{document} \section{A first section} \begin{thm}\label{testthm} This is a test theorem. \end{thm} We see in \cref{testthm}\dots \end{document}
{ "alphanum_fraction": 0.6939970717, "avg_line_length": 22.7666666667, "ext": "tex", "hexsha": "4629e64070b90f559e27d57015a14e85cfa24828", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ad1bb29434e899d9933f25596f12404f8998aeaa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "brakdag/latexDocuments", "max_forks_repo_path": "Destilacion/test.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ad1bb29434e899d9933f25596f12404f8998aeaa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "brakdag/latexDocuments", "max_issues_repo_path": "Destilacion/test.tex", "max_line_length": 69, "max_stars_count": null, "max_stars_repo_head_hexsha": "ad1bb29434e899d9933f25596f12404f8998aeaa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "brakdag/latexDocuments", "max_stars_repo_path": "Destilacion/test.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 241, "size": 683 }
% --- [ Test Cases ] -----------------------------------------------------------
\subsection{Test Cases}
As stated by Edsger W. Dijkstra in 1969, \textit{``testing shows the presence, not the absence of bugs.''}~\cite{absence_of_bugs_quote} For this reason, several independent methods were utilised to verify the correctness of the decompilation components and their utility libraries, including the automatic generation of C programs (with a large number of nested \texttt{if}-statements) which were used to stress test each component of the decompilation pipeline, as further described in section~\ref{sec:ver_performance}. A lot of thought went into designing test cases which attempt to break the code, exploit assumptions, and exercise tricky corner cases (e.g. no whitespace characters between tokens in LLVM IR). These tests were often written prior to the implementation of the software artefacts, to reduce the risk of testing what was built rather than what was intended to be built (as specified by the requirements). The test cases have successfully identified a large number of bugs in the software artefacts, and even uncovered inconsistent behaviour in the reference implementation of the LLVM IR lexer, as further described in section~\ref{sec:impl_llvm_ir_library}. To facilitate extensibility, the test cases were often implemented using a table-driven design which separates the test case data from the test case implementation. An extract of the test cases used to verify the candidate discovery logic, the equation solver and the candidate validation logic of the subgraph isomorphism search library is presented in figure~\ref{fig:iso_test_cases}. These test cases are automatically executed by the CI service any time a new change is committed to the source code repository, as further described in section~\ref{sec:ver_continuous_integration}.
\begin{figure}[htbp] \begin{center}
\begin{BVerbatim}
$ go test -v github.com/decomp/graphs/iso
=== RUN TestCandidates
--- PASS: TestCandidates (0.02s)
=== RUN TestEquationSolveUnique
--- PASS: TestEquationSolveUnique (0.00s)
=== RUN TestEquationIsValid
--- PASS: TestEquationIsValid (0.22s)
=== RUN TestIsomorphism
--- PASS: TestIsomorphism (0.18s)
=== RUN TestSearch
--- PASS: TestSearch (0.20s)
PASS
ok      github.com/decomp/graphs/iso    0.62s
\end{BVerbatim}
\caption{An extract of the test cases used to verify the subgraph isomorphism search library, as visualised by \texttt{go test}.} \label{fig:iso_test_cases} \end{center} \end{figure}
% --- [ Subsubsections ] -------------------------------------------------------
\input{sections/8_verification/1_test_cases/1_code_coverage}
{ "alphanum_fraction": 0.7557052001, "avg_line_length": 74.25, "ext": "tex", "hexsha": "4b492867aa5bcc05e431ce73768d2878b0ff26d9", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-09-09T07:36:14.000Z", "max_forks_repo_forks_event_min_datetime": "2019-05-25T21:15:26.000Z", "max_forks_repo_head_hexsha": "fb82b6a5074aa8721afb24a5537bf1964ed20467", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "decomp/doc", "max_forks_repo_path": "report/compositional_decompilation/sections/8_verification/1_test_cases.tex", "max_issues_count": 48, "max_issues_repo_head_hexsha": "fb82b6a5074aa8721afb24a5537bf1964ed20467", "max_issues_repo_issues_event_max_datetime": "2020-01-29T19:17:53.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-30T19:08:59.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "decomp/doc", "max_issues_repo_path": "report/compositional_decompilation/sections/8_verification/1_test_cases.tex", "max_line_length": 820, "max_stars_count": 23, "max_stars_repo_head_hexsha": "fb82b6a5074aa8721afb24a5537bf1964ed20467", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "decomp/doc", "max_stars_repo_path": "report/compositional_decompilation/sections/8_verification/1_test_cases.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-16T08:14:04.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-27T10:16:40.000Z", "num_tokens": 615, "size": 2673 }
%!TEX root = ../thesis.tex
%*******************************************************************************
%*********************************** Experiment *****************************
%*******************************************************************************
\chapter{Experiment}
\graphicspath{{chapter-experiment/Figs/Vector/}{chapter-experiment/Figs/}}
\glsreset{lhc}
The European Laboratory for Particle Physics, CERN, is one of Europe's first joint ventures in science and one of the largest physics research facilities in the world~\cite{CERN:2723123}. It brings together more than \num[group-separator={,}]{12400} scientists of over 110 nationalities~\cite{CERN:2723123} with a common goal of pushing the frontiers of science and technology. Located at the Franco--Swiss border near Geneva, CERN was founded in 1954 and nowadays counts 23 member states~\cite{CERN:2723123}. CERN's main research area is particle physics, which is why the organisation operates a large complex of particle accelerators and detectors. This chapter introduces the \gls{lhc}, CERN's main particle accelerator, as well as the ATLAS experiment, in which the search for \gls{susy} presented in this work is embedded. \section{The Large Hadron Collider}\label{sec:lhc} The \gls{lhc}~\cite{Bruning:782076} is the largest particle accelerator situated at CERN. It is installed in a tunnel with $\SI{26.7}{\km}$ circumference, which was originally constructed from 1984 to 1989 for the \gls{lep} accelerator. The tunnel is situated on the Franco--Swiss border and wedged between the Jura mountains and Lake Léman. It lies between $\SI{45}{\meter}$ (in the limestone of the Jura) and $\SI{170}{\meter}$ (in the molasse rock) below the surface, resulting in a tilt of $1.4\%$ towards the lake. While proton--proton ($pp$) collisions are the main operating mode of the \gls{lhc}, its design also allows it to accelerate and collide heavy ions like lead and xenon. Since data from $pp$ collisions is used in this work, the following sections will mainly focus on this operating mode. As opposed to particle--antiparticle colliders that only need a single ring, the \gls{lhc}, being a particle--particle collider, consists of two rings with counter-rotating beams. With an inner diameter of only $\SI{3.7}{\meter}$, the tunnel is, however, too narrow to fit two separate proton rings. Instead, the \gls{lhc} is built in a twin-bore design\footnote{Originally proposed by John Blewett at BNL as a cost-saving measure for the Colliding Beam Accelerator~\cite{blewett1971proceedings,Evans:1129806}.}, housing two sets of coils and beam channels in a single magnetic and mechanical structure and cryostat~\cite{Bruning:782076}. While saving costs, this design has the disadvantage of both beams being magnetically coupled, consequently reducing the flexibility of the machine. Before being injected into the LHC, protons are pre-accelerated in an injection chain consisting of multiple existing machines in CERN's accelerator complex, pictured in \cref{fig:accelerator_complex}. The injection chain uses predecessor accelerators that have been upgraded in order to be able to handle the high luminosity and high energy requirements of the \gls{lhc}. The protons for the \gls{lhc} originally stem from a duoplasmatron source~\cite{Scrivens:1382102}, which strips electrons from hydrogen atoms through electric discharges between a heated cathode and an anode.
The $\SI{90}{\keV}$ protons are then accelerated by a radio frequency quadrupole to $\SI{750}{\keV}$ before being injected into Linac~2\footnote{Originally built to replace Linac 1 in order to produce more energetic proton beams, Linac~2 was itself replaced by Linac~4 in 2020~\cite{linac4:2736208}. Linac~3 was built in 1994 and is still used for acceleration of heavy ions.}, a linear accelerator producing a beam of $\SI{50}{\MeV}$ protons through the use of radio frequency cavities. The protons then enter a set of circular accelerators, the Proton Synchrotron Booster, the Proton Synchrotron and the Super Proton Synchrotron, creating a stepwise acceleration up to an energy of $\SI{450}{\GeV}$, which is the injection energy of the \gls{lhc}. The \gls{lhc} finally accelerates the protons up to nominal beam energy before colliding them. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{CCC-v2019-final-white} \caption[CERN accelerator complex]{CERN accelerator complex as of 2021~\cite{Mobs:2684277}.} \label{fig:accelerator_complex} \end{figure} The \gls{lhc} is composed of eight straight sections and eight arcs. The eight straight sections each serve as interaction regions (referred to as \textit{Points} in the following), used either for particle detectors or for machine hardware of the collider itself. The Points are labelled clockwise, with Point 1 being closest to the CERN Meyrin site. Four of the eight Points house the main particle physics experiments at the LHC, called ATLAS, CMS, ALICE and LHCb, covering a wide range of fundamental research. The two general-purpose particle detectors ATLAS~\cite{Aad:2008zzm} and CMS~\cite{Chatrchyan:2008aa} are installed at Point~1 and Point~5, respectively. Both ATLAS and CMS are designed to perform high-precision SM measurements, including Higgs measurements, as well as searches for BSM physics. Being very similar in terms of targeted phase space, ATLAS and CMS can be used to cross-check each other's results. ALICE~\cite{Aamodt:2008zz} is situated at Point~2 and specialises in heavy-ion physics, studying the physics of quark-gluon plasma at high energy densities. Assembled at Point~8, LHCb~\cite{Alves:2008zz} targets $B$-physics and performs measurements of CP-violation. Apart from the four main experiments, three smaller experiments exist at the \gls{lhc}: TOTEM, MoEDAL and LHCf. While TOTEM~\cite{Anelli:2008zza} and LHCf~\cite{Adriani:2006jd} study forward physics close to CMS and ATLAS, respectively, MoEDAL~\cite{Pinfold:2009oia} searches for magnetic monopoles. The remaining four Points house accelerator equipment needed for operation of the LHC. Most of the collimation system is placed at Point~3 and Point~7, performing beam cleaning and machine protection through a series of beam intercepting devices, ensuring that no stray particles from experimental debris or beam halo can reach and damage other machine components~\cite{Bruning:782076}. The acceleration of the beam itself is performed at Point~4 with two radio frequency systems, one for each \gls{lhc} beam. The radio frequency cavities operate at $\SI{400}{\MHz}$ and provide $\SI{8}{MV}$ during injection and $\SI{16}{MV}$ during coast~\cite{Bruning:782076}. Due to the radio frequency acceleration, the accelerated protons are necessarily grouped into packets, so-called \textit{bunches}, which each contain roughly $10^{11}$ protons and have a bunch spacing of $\SI{25}{ns}$~\cite{Bruning:782076}.
Although roughly \num[group-separator={,}]{35500} radio frequency buckets are available, a design value of only 2808 bunches is filled in each beam for data-handling reasons~\cite{Bruning:782076}. The remaining Point~6 houses the beam dumping system, which allows both beams to be horizontally deflected and fanned out into dump absorbers using fast-pulsed \textit{kicker} magnets. The two nitrogen-cooled dump absorbers each consist of a graphite core contained in a steel cylinder, surrounded by $\SI{750}{\tonne}$~\cite{Bruning:782076} of concrete and iron shielding. Insertion of the beams from the Super Proton Synchrotron into the \gls{lhc} happens at Points~2 and 8, close to the ALICE and LHCb experiments. The eight arcs of the \gls{lhc} are filled with dipole magnets built from superconducting NbTi Rutherford cables. The electromagnets are responsible for keeping the accelerated particles on their circular trajectory and are the limiting factor of the maximal centre-of-mass energy (denoted as $\sqrt{s}$) of the \gls{lhc}. In order to achieve the design energy of $\sqrt{s} = \SI{14}{\TeV}$~\cite{Bruning:782076}, the magnets have to create a field strength of $\SI{8.3}{T}$~\cite{Bruning:782076}. The electric currents needed for such high field strengths can only be sustained by operating the magnets in a superconducting state, which requires them to be cooled down to $\SI{1.9}{K}$~\cite{Bruning:782076} using superfluid helium. In addition to the dipole magnets, the arcs contain quadrupole magnets used to shape and focus the beams, as well as multipole magnets correcting and optimising the beam trajectory. Quadrupole magnets are also used to focus the beam to the smallest possible beam spot size before and after the interaction points. \subsection{Pile-up}\label{sec:pileup} \begin{figure} \centering \begin{subfigure}[b]{0.495\linewidth} \centering\includegraphics[width=\textwidth]{mu_2015_2018} \caption{Luminosity-weighted mean number of interactions per bunch crossing during Run~2 data-taking.\label{fig:mu_2015_2018}} \end{subfigure}\hfill \begin{subfigure}[b]{0.495\linewidth} \centering\includegraphics[width=\textwidth]{peakMuByFill} \caption{Peak mean number of interactions per bunch crossing for each fill during 2018.\label{fig:peakMuByFill}} \end{subfigure}%
\caption{Number of interactions per bunch crossing recorded by the ATLAS experiment. Figures taken from \reference\cite{ATLAS:Run2}.}\label{fig:mu_run2} \end{figure} Due to the high number of protons in each bunch, several $pp$ collisions occur at each bunch crossing. This leads to a phenomenon called \textit{pile-up}, where the recorded events not only contain information from the hard-scattering process of interest, but also remnants from additional, often low-energetic, $pp$ collisions. During the Run~2 data-taking period, \ie the period spanning from 2015 through 2018, the mean number of inelastic $pp$ collisions per bunch crossing ($\mu$) varied roughly from 10 to 70, with the majority of bunch crossings having a value of $\mu$ around 30. \Cref{fig:mu_2015_2018} shows the mean number of interactions per bunch crossing during the Run~2 data-taking period, weighted by luminosity (a quantity introduced in~\cref{sec:lumi_datataking}) and split up into the different data-taking years. In 2018, for example, the peak number of interactions per bunch crossing $\mu_\mathrm{peak}$ for each fill was consistently around 50, as shown in~\cref{fig:peakMuByFill}.
Experimentally, pile-up can be divided into five major components~\cite{Marshall:2014mza}:
\begin{itemize}
\item \textit{In-time} pile-up: multiple interactions during a single bunch crossing, not all of which are interesting, as most have relatively low energy. If they can be resolved, the main hard-scattering event can still be isolated and studied.
\item \textit{Out-of-time} pile-up: additional collisions occurring in bunch crossings before or after the main event of interest. This happens either because read-out electronics integrate over time frames longer than the $\SI{25}{ns}$ bunch spacing, or because detector components are sensitive to several bunch crossings.
\item \textit{Cavern background}: a gas of thermal neutrons and photons that fills the experimental caverns during a run of the LHC and tends to cause random hits in detector components.
\item \textit{Beam halo events}: protons scraping an upstream collimator, typically resulting in muons travelling parallel to the beam pipe.
\item \textit{Beam gas events}: interactions between proton bunches and residual gas in the beam pipe, typically occurring well outside the main interaction region.
\end{itemize}
While the effects of cavern background can be mitigated through special pieces of shielding, beam halo and beam gas events leave signatures that can be recognised and removed with high efficiency. Signals from in-time and out-of-time pile-up create irreducible overlap with the events of interest, significantly impacting analyses, and thus need to be taken into account with a dedicated \gls{mc} simulation~\cite{Marshall:2014mza}. \subsection{Luminosity and data-taking} \label{sec:lumi_datataking} \begin{figure} \centering \begin{subfigure}[b]{0.49\linewidth} \centering\includegraphics[width=\textwidth]{peakLumiByFill} \caption{\label{fig:peakLumiByFill}} \end{subfigure}\hfill \begin{subfigure}[b]{0.49\linewidth} \centering\includegraphics[width=\textwidth]{intlumivstimeRun2DQall_v0} \caption{\label{fig:intlumivstimeRun2DQall}} \end{subfigure} \caption{Instantaneous and cumulative luminosities in Run~2. Figure~\subref{fig:peakLumiByFill} shows the peak instantaneous luminosity delivered to ATLAS during $pp$ collision data taking in 2018 as a function of time. Figure~\subref{fig:intlumivstimeRun2DQall} shows the cumulative luminosity delivered to ATLAS (green), recorded by ATLAS (yellow) and deemed good for physics analysis (blue) during the entirety of Run~2~\cite{ATLAS:Run2}.}\label{fig:lumi_run2} \end{figure} Apart from the beam energy, the most important quantity for a collider is the instantaneous luminosity $L_\mathrm{inst}$. For a synchrotron with Gaussian beam distribution, the instantaneous luminosity is given by \begin{equation} L_\mathrm{inst} = \frac{N_b^2 n_b f_\mathrm{rev}}{4\uppi\sigma_x\sigma_y} F, \label{eq:lumi} \end{equation} where $n_b$ is the number of bunches, $N_b$ the number of protons per bunch, $f_\mathrm{rev}$ the revolution frequency and $\sigma_x$ and $\sigma_y$ the transverse beam sizes. The parameter $F$ is a geometrical correction factor accounting for the reduction in instantaneous luminosity due to the beams colliding at a non-zero crossing angle. While the design instantaneous luminosity of the \gls{lhc} at the high-luminosity experiments ATLAS and CMS is $L_\mathrm{inst} = \SI{e34}{\per\cm\squared\per\second}$~\cite{Bruning:782076}, the 2017 and 2018 data-taking periods saw a peak luminosity twice as high~\cite{peak_lumi}.
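As a rough numerical cross-check of \cref{eq:lumi}, the short Python sketch below evaluates the formula with approximate design parameters (the bunch intensity, bunch number, revolution frequency, transverse beam size, crossing-angle factor and inelastic cross section used here are illustrative, approximate values and are not taken from the references above) and, anticipating the relation between bunch luminosity and pile-up given below, estimates the corresponding mean number of interactions per bunch crossing.
\begin{verbatim}
import math

# Approximate, illustrative LHC design parameters:
N_b   = 1.15e11    # protons per bunch
n_b   = 2808       # number of bunches
f_rev = 11245.0    # revolution frequency [Hz]
sigma = 16.7e-4    # transverse beam size at the IP [cm] (~16.7 um)
F     = 0.84       # geometric crossing-angle factor

L_inst = N_b**2 * n_b * f_rev / (4 * math.pi * sigma**2) * F
print(f"L_inst ~ {L_inst:.1e} cm^-2 s^-1")   # ~1e34, the design value

# Pile-up for an assumed inelastic cross section of ~80 mb = 8e-26 cm^2:
mu = L_inst * 8e-26 / (n_b * f_rev)
print(f"mu ~ {mu:.0f}")                      # ~25 at design luminosity
\end{verbatim}
Doubling the instantaneous luminosity, as in the 2017--2018 data-taking, roughly doubles $\mu$, which is consistent with the peak values of about 50 shown in \cref{fig:peakMuByFill}.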
The instantaneous luminosity is related to the total number of events $N$ through the cross section $\sigma$ of the events in question \begin{equation} N = \sigma L = \sigma \int L_\mathrm{inst}\diff t, \end{equation} with $L$ the total integrated luminosity, a measure of the total amount of collision data produced. A precise knowledge of the integrated luminosity corresponding to a given dataset is crucial both for SM measurements and for searches for \gls{bsm} physics. Searches for \gls{susy}, like the one presented in this work, rely on precise measurements of the integrated luminosity in order to estimate the contribution from SM background processes. The luminosity measurement for the Run~2 dataset used within this work is described in detail in \references\cite{ATLAS-CONF-2019-021,Aaboud:2016hhf} and relies on a measurement of the bunch luminosity $L_b$, \ie the luminosity produced by a single pair of colliding bunches, \begin{equation} L_b = \frac{\mu f_\mathrm{rev}}{ \sigma_\mathrm{inel}} = \frac{\mu_\mathrm{vis}f_\mathrm{rev}}{\sigma_\mathrm{vis}}, \end{equation} where $\mu$ is the pile-up parameter, $\sigma_\mathrm{inel}$ is the cross section of inelastic $pp$ collisions, $\mu_\mathrm{vis} = \epsilon \mu$ is the fraction $\epsilon$ of the pile-up parameter $\mu$ visible to the detector, and $\sigma_\mathrm{vis} = \epsilon\sigma_\mathrm{inel}$ is the visible inelastic cross section. If $\sigma_\mathrm{vis}$ is known, the currently recorded luminosity can be determined by measuring $\mu_\mathrm{vis}$. At the ATLAS experiment, the observed number of inelastic interactions per bunch crossing $\mu_\mathrm{vis}$ is measured using dedicated detectors, such as LUCID-2~\cite{Avoni_2018}, a forward Cherenkov detector using the quartz windows of photomultipliers as Cherenkov medium. In order to use $\mu_\mathrm{vis}$ as a luminosity monitor, the respective detectors need to be calibrated through a measurement of the visible inelastic cross section $\sigma_\mathrm{vis}$. This can be done using so-called \gls{vdm} scans~\cite{vanderMeer:296752,GRAFSTROM201597}, in which the transverse distribution of protons in the bunches is inferred by measuring the relative interaction rates as a function of the transverse beam separation\footnote{This procedure is often referred to as \textit{beam sweeping}.}. The algorithms used to determine the $\sigma_\mathrm{vis}$ calibration are described in \references\cite{ATLAS-CONF-2019-021,Aaboud:2016hhf}. The luminosity during the \gls{vdm} runs can then be determined using~\cref{eq:lumi}. At the \gls{lhc}, \gls{vdm} scans are typically performed in special low-$\mu$ runs with well-known machine parameters in order to minimise uncertainties~\cite{ATLAS-CONF-2019-021}. During high-$\mu$ physics runs, a luminosity measurement is obtained through an extrapolation from the \gls{vdm} runs. The \gls{lhc} entered operation in 2008, with first beams in September and first collisions\footnote{A delay was caused by an incident in September 2008. During powering tests of the main dipole circuit in one of the sectors of the \gls{lhc}, an electrical bus connection between two magnets failed, causing mechanical damage to the machine and the release of helium into the tunnel~\cite{Bajko:1168025}.} in December 2009~\cite{startup}. Its operation is in general structured into so-called \textit{Runs}, each spanning multiple years of data-taking.
Run~1 spanned from 2009 to 2013 and delivered roughly $\SI{28.5}{\per\femto\barn}$ of $pp$ collision data to ATLAS, taken at centre-of-mass energies of $\SI{7}{\TeV}$ and $\SI{8}{\TeV}$~\cite{Aaboud:2016hhf,Aad:2011dr,Aad:1517411}. Run~2 lasted from 2015 to 2018 and saw a centre-of-mass energy increase to $\SI{13}{\TeV}$, delivering approximately $\SI{156}{\per\femto\barn}$ of $pp$ collision data to ATLAS~\cite{ATLAS-CONF-2019-021}. Run~3 of $pp$ collision data taking with twice the design peak luminosity is currently planned to start its physics program in 2022 and last until the end of 2024~\cite{run3}. Current plans foresee\footnote{The \gls{lhc} schedule recently had to be changed due to the COVID-19 pandemic~\cite{run3}.} Run~3 delivering about $\SI{200}{\per\femto\barn}$ of $pp$ collision data with centre-of-mass energies of $\SI{13}{\TeV}$ and $\SI{14}{\TeV}$. After Run~3, the \gls{lhc} will be upgraded to the \gls{hl-lhc}, significantly increasing the peak instantaneous luminosity and delivering up to $\SI{3000}{\per\femto\barn}$ of $pp$ collision data from 2027 until 2040~\cite{schedule,Apollinari:2284929}. This work uses $pp$ collision data taken by ATLAS during Run~2 of the \gls{lhc}. Of the $\SI{156}{\per\femto\barn}$ delivered to ATLAS, $\SI{147}{\per\femto\barn}$ were recorded, and $\SI{139}{\per\femto\barn}$ were deemed to be of sufficient quality for physics analysis. \Cref{fig:lumi_run2} shows the cumulative luminosity delivered to ATLAS during Run~2. Uncertainties on the measured total recorded luminosity stem from the measurements of $\mu_\mathrm{vis}$ and $\sigma_\mathrm{vis}$, but are dominated by the uncertainties on $\sigma_\mathrm{vis}$, as \gls{vdm} scans can only be performed during special runs with largely fixed machine parameters, while the conditions during high-$\mu$ running change continuously. For the full Run~2 dataset, the uncertainties amount to $\pm 1.7 \%$~\cite{ATLAS-CONF-2019-021}. \section{ATLAS Experiment}\label{sec:atlas_experiment} The ATLAS experiment is one of two general-purpose detectors at the LHC. Located at Point~1 in a cavern $\SI{100}{\meter}$ below the surface, it is approximately $\SI{44}{\meter}$ long and $\SI{25}{\meter}$ high~\cite{Aad:2008zzm}. The design of the ATLAS experiment is driven by the aim to allow for a diverse research program, including SM precision measurements, Higgs physics and searches for \gls{bsm} physics, whilst taking into account the unique and challenging conditions set by the \gls{lhc}. The various detector technologies used are designed to withstand the high-radiation environment of the \gls{lhc}, while allowing particle measurements with high spatial and temporal granularity. The general structure of ATLAS is depicted in \cref{fig:atlas_detector}, and consists of a central part, called \textit{barrel}, that has a cylindrical shape around the beam pipe, and two discs, called \textit{end-caps}, that close off the barrel on each side. This makes the ATLAS detector forward-backward symmetric with a coverage of nearly the full solid angle of $4\uppi$, which is needed in order to measure momentum imbalances caused by particles that only interact weakly with the detector material. The interface between the ATLAS experiment and the \gls{lhc} is the beam pipe.
In order to be maximally transparent to the particles created in the collisions, while also being able to withstand the stresses induced by the vacuum, the beam pipe is made out of beryllium close to the \gls{ip}, and stainless steel further away from the \gls{ip}~\cite{Brock:1354959}. The following sections introduce the working principles of the different detector components employed in ATLAS, starting with the innermost component closest to the \gls{ip}, the inner detector, followed by the calorimeters in the middle and finally the muon spectrometers on the outside. If not otherwise indicated, details on the detector components including the design parameter values are extracted from \reference\cite{Aad:2008zzm}. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{atlas} \caption[The ATLAS detector]{Computer-generated picture of the ATLAS detector, giving an overview of the various subsystems~\cite{Pequenao:1095924}.} \label{fig:atlas_detector} \end{figure} \subsection{Coordinate system} In order to properly describe collision events in the ATLAS detector, a suitable coordinate system is needed. The right-handed coordinate system~\cite{ATLAS:1999uwa} used in ATLAS has its origin at the nominal \gls{ip} in the centre of the detector. The positive $x$-axis points towards the centre of the \gls{lhc} ring, the positive $y$-axis points upwards to the surface, and the beam pipe is used to define the $z$-axis. In the $x$--$y$ plane, called the \textit{transverse plane}, the azimuthal angle $\phi$ is the angle around the beam axis. The polar angle $\theta$ is measured from the beam axis. Furthermore, the rapidity $\upsilon$~\cite{pdg2020} is defined as \begin{equation} \upsilon = \frac{1}{2}\ln\left(\frac{E+p_z}{E-p_z}\right) = \tanh^{-1}{\frac{p_z}{E}}, \end{equation} with $E$ the energy of an object and $p_z$ its momentum in $z$-direction. The rapidity is often preferred over the polar angle, as differences in the rapidity are invariant under Lorentz boosts in $z$-direction. The pseudorapidity $\eta$~\cite{pdg2020} is the high-energy limit ($p\gg m$) of the rapidity, and defined as \begin{align} \eta = - \ln\tan\frac{\theta}{2}, \end{align} with $\cos\theta = p_z/p$. Pseudorapidity and rapidity are approximately equal in the limit where $p\gg m$ and $\theta \gg \frac{1}{\gamma}$. Compared to the rapidity, the pseudorapidity has the advantage of not depending on the energy and momentum calibration of the detected objects. Additionally, it gives a direct correspondence to the polar angle $\theta$ through the relation $\tanh\eta = \cos\theta$. Objects travelling along the beam axis have a pseudorapidity of $\eta = \pm\infty$ and objects travelling in the $x$--$y$ plane have $\eta = 0$. The distance $\upDelta R$ between two objects in the detector is given by \begin{align} \upDelta R=\sqrt{\left(\upDelta \eta\right)^2+\left(\upDelta \phi\right)^2}. \end{align} The longitudinal momenta of the partons composing the colliding hadrons are only known by means of the \glspl{PDF}, which give the probability for a parton to carry a certain fraction of the hadron's momentum in the direction of the beam. Thus, the total longitudinal momentum of the initial state in each collision is not exactly known, impeding the use of physics quantities in the $z$-direction. In the $x$--$y$ plane, however, the initial momentum is known to be negligibly small, so momentum conservation can be applied, which is why mainly transverse physics quantities are used, indicated by a subscript `T', \eg, $E_\mathrm{T}$ or $\pt$.
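The relations above are straightforward to evaluate; the following purely illustrative Python sketch converts a polar angle into a pseudorapidity and computes the angular distance $\upDelta R$ between two objects (the wrapping of $\upDelta \phi$ into $[-\uppi,\uppi]$ is a standard practical detail not spelled out in the formula above).
\begin{verbatim}
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)) for a polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, wrapping the azimuthal difference into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

print(pseudorapidity(math.pi / 2.0))               # 0.0: perpendicular to the beam
print(round(pseudorapidity(math.radians(10)), 2))  # ~2.44: close to the beam axis
print(round(delta_r(0.5, 0.1, -0.3, 2.9), 2))      # ~2.91
\end{verbatim}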
\subsection{Magnet system} In order to perform precise momentum measurements of particles, ATLAS uses a system of magnets~\cite{Aad:2008zzm}, whose magnetic fields force charged particles onto curved tracks due to the Lorentz force. Using precise measurements of the tracks taken in the inner detector and in the muon spectrometers, the curvature of the tracks can be determined, allowing an inference of the charge-to-momentum ratio $q/p$ of charged particles. ATLAS employs a set of four superconducting magnets, one central solenoid and three toroids, all operating at a nominal temperature of $\SI{4.5}{K}$, achieved through a cryogenic system using liquid helium. The solenoid is aligned on the beam axis and provides a $\SI{2}{T}$ magnetic field for the inner detector. As it is located in front of the calorimeters (as seen from the \gls{ip}), it is specially designed to have minimal material thickness in order to avoid influencing the subsequent energy measurements. The solenoid consists of single-layer coils made of a Nb/Ti conductor and additional aluminium for stability. It operates at a nominal current of $\SI{7.73}{\kilo\ampere}$ and uses the hadronic calorimeter as return yoke. The toroid magnets consist of a barrel toroid and two end-cap toroids, producing a magnetic field of $\SI{0.5}{T}$ and $\SI{1}{T}$ for the muon spectrometers in the barrel and end-caps, respectively\footnote{The magnetic field of the toroid magnets is designed to be higher in the end-caps in order to ensure sufficient bending power for precise momentum measurements.}. Both barrel and end-cap toroids consist of Nb/Ti/Cu conductors with aluminium stabilisation, wound into double pancake-shaped coils. The barrel toroid coils are enclosed in eight stainless-steel vacuum vessels in a racetrack-shaped configuration and arranged around the barrel calorimeters with an azimuthal symmetry. Aluminium-alloy struts provide the support structure necessary for the vessels to withstand the inward-directed Lorentz force of $\SI{1400}{t}$ in addition to their own weight. For the same reasons, the end-cap toroid coils are assembled in eight square units, and bolted and glued together with eight wedges, forming rigid structures. Both end-cap and barrel toroids operate at a nominal current of $\SI{20.5}{\kilo\ampere}$. \subsection{Inner detector} The \gls{id}~\cite{Aad:2008zzm} is embedded in the magnetic field of the solenoid and measures tracks of charged particles, allowing a determination of their momentum, while also providing crucial information for vertex reconstruction. As the \gls{id} is the detector closest to the beam pipe, its components need to be able to withstand the extreme high-radiation environment close to the \gls{ip}. The \gls{id} consists of three subdetectors and uses two different working principles: semiconductor and gaseous detectors. In semiconductor-based tracking detectors, charged particles passing through the detector create a trail of electron-hole pairs that subsequently drift through the semiconductor material and cause electric signals. In gaseous detectors, traversing particles create electron-ion pairs that drift towards metal electrodes and induce electric signals to be read out by specialised electronics. Closest to the beam pipe lies the pixel detector, followed by the \gls{sct}, both of which are made of semiconductors. The \gls{sct} is surrounded by the \gls{trt}, a gaseous detector.
In total, the \gls{id} provides tracking and momentum information up to $\vert\eta\vert < 2.5$ and down to transverse momenta of nominally $\SI{0.5}{\GeV}$. A schematic illustration of the \gls{id} and its subdetectors is shown in \cref{fig:ID_schematic}. \begin{figure} \centering \begin{subfigure}[b]{0.49\linewidth} \centering\includegraphics[width=\textwidth]{id} \end{subfigure}\hfill \begin{subfigure}[b]{0.45\linewidth} \centering\includegraphics[width=\textwidth]{ibl} \end{subfigure}%
\caption{Schematic drawing of the ID and its subdetectors. Images adapted from~\references\cite{Pequenao:1095926, Potamianos:2209070}.}\label{fig:ID_schematic} \end{figure} \subsubsection{Pixel detector} In the high-rate environment directly adjacent to the beam pipe, the only detector technology able to operate and deliver high-precision tracking information over extended periods of time is that of semiconductor detectors segmented into pixels. Compared to strip detectors, the smaller cell size of silicon pixel detectors, and thus the significantly reduced hit rate per readout channel, allows pixel detectors to remain operational in the harsh environment close to the IP. In ATLAS, pixels~\cite{Aad:2008zzm} are hybrids of silicon sensors and readout electronics bonded together, and were originally arranged in three layers in the barrel and the end-caps with a typical pixel size of $\SI{50}{\micro\meter}\times \SI{400}{\micro\meter}$, covering pseudorapidities up to $\vert\eta\vert < 2.5$. In order to increase robustness and performance in the high-luminosity environment, a new innermost layer, called the \gls{ibl}, was installed together with a new, smaller-radius beam pipe between Run~1 and Run~2~\cite{Abbott:2018ikt,Capeans:1291633}. The IBL uses smaller pixels with a size of $\SI{50}{\micro\meter}\times \SI{250}{\micro\meter}$ and improves the tracking precision as well as vertex identification performance~\cite{Capeans:1291633}. It also improves the performance of identifying jets originating from $b$-quarks (through a procedure called \textit{b}-tagging, see~\cref{sec:flavour_tagging})~\cite{Aad:2019aic}. The tracking precision obtained by the pixel detector is $\SI{10}{\micro\meter}$ in $R$--$\phi$ and $\SI{115}{\micro\meter}$ in $z$ in the barrel and in $R$ in the end-caps. \subsubsection{Silicon microstrip detector} The pixel detector is surrounded by the \gls{sct}~\cite{Aad:2008zzm}, consisting of four layers in the barrel and nine disks in each of the end-caps. In order to provide two-dimensional tracking information, strips are arranged in double-layers with a small crossing angle of $\SI{40}{mrad}$ and a mean pitch of $\SI{80}{\micro\meter}$. A charged particle traversing the \gls{sct} through the barrel thus creates four space-point measurements. In the barrel, one set of strips in each of the four double-layers is oriented in beam direction, thereby measuring the $R$--$\phi$ plane. In the end-caps, one set of strips in each layer is oriented in radial direction. The \gls{sct} has roughly $6.3$ million readout channels and provides tracking information up to $\vert\eta\vert <2.5$. It achieves a precision of $\SI{17}{\micro\meter}$ in $R$--$\phi$ and $\SI{580}{\micro\meter}$ in $z$ in the barrel and in $R$ in the end-caps. \subsubsection{Transition radiation tracker} The last and also largest of the three subdetectors of the \gls{id} is the \gls{trt}~\cite{Aad:2008zzm}, a gaseous detector made of multiple layers of $\SI{4}{\milli\meter}$ diameter drift tubes, surrounding the pixel detector and the \gls{sct}.
The drift tubes consist of an aluminium cathode coated on a polyimide layer reinforced by carbon fibres, and use a gold-plated tungsten wire as anode. The tubes are filled with a Xe-based gas mixture and are embedded in polystyrene foils providing varying electric permittivities, thereby causing transition radiation when traversed by ultra-relativistic particles. While the 73 layers of $\SI{144}{\centi\meter}$ long tubes in the barrel region are aligned parallel to the beam pipe, the 160 layers of $\SI{37}{\centi\meter}$ long tubes in the end-caps are aligned in radial direction, providing coverage up to $\vert\eta\vert <2.0$ and an intrinsic accuracy of $\SI{130}{\micro\meter}$ in $R$--$\phi$. The low accuracy compared to the pixel detector and the \gls{sct} is compensated by the large number of hits (typically $36$ per track) and the longer measured track length. As the amount of transition radiation given off by a particle is proportional to its Lorentz factor~\cite{pdg2020}, the \gls{trt} is also used to improve electron identification~\cite{ATLAS-CONF-2011-128}. For the same momentum, electrons will have a higher Lorentz factor than the heavier charged pions, and consequently give off more transition radiation. \subsection{Calorimeters} \begin{figure} \centering \begin{subfigure}[b]{0.49\linewidth} \centering\includegraphics[width=\textwidth]{cal} \caption{Calorimeter systems\label{fig:calorimeters}} \end{subfigure}\hfill \begin{subfigure}[b]{0.49\linewidth} \centering\includegraphics[width=\textwidth]{ms} \caption{Muon spectrometer\label{fig:muon_system}} \end{subfigure}%
\caption{Schematic drawing of the \subref{fig:calorimeters} calorimeter systems and \subref{fig:muon_system} the muon spectrometer in ATLAS. Images adapted from \references\cite{Pequenao:1095927,Pequenao:1095929}.}\label{fig:cal_ms_schematic} \end{figure} The primary goal of calorimeters is to measure the energies of incoming particles by completely absorbing them. As the energies of neutral particles cannot be measured by other means, calorimeters are especially important for the energy measurement of jets, which contain neutral hadrons~\cite{Brock:1354959}. Since particles like photons and electrons interact mostly electromagnetically, while hadrons predominantly interact through the strong interaction, two different calorimeter types are adopted in ATLAS. For values in $\eta$ matching the coverage of the \gls{id}, the electromagnetic calorimeter uses a finer granularity designed for precision measurements of electrons and photons. The subsequent hadronic calorimeter has a coarser granularity sufficient for the requirements of jet reconstruction and missing transverse momentum measurements. With a coverage up to $\vert\eta\vert <4.9$, the calorimeter system in ATLAS provides the near-hermetic energy measurements needed for the inference of missing transverse momentum created by neutrinos and other weakly interacting neutral particles. Both calorimeters are sampling calorimeters, consisting of alternating layers of active and absorbing material. The absorbing material interacts with the incoming particles, causing them to deposit their energy by creating cascades (often called \textit{showers}) of secondary particles. The active layers are then used to record the shape and intensity of the showers produced. This alternating structure results in reduced material costs but also reduced energy resolution as only part of the particle's energy is sampled in each layer.
Due to the typically longer cascades in hadronic interactions compared to electromagnetic ones, and in order to minimise punch-through into the muon system, the hadronic calorimeter requires a greater material depth than the electromagnetic one. The calorimeter systems in ATLAS are schematically illustrated in \cref{fig:calorimeters}. \subsubsection{Electromagnetic calorimeter} The \gls{em} calorimeter~\cite{Aad:2008zzm} uses \gls{lar} as active material and lead as absorber. Due to its accordion-shaped geometry, it provides full $\phi$ symmetry without azimuthal cracks. It is divided into a barrel part and two end-caps, covering $\vert\eta\vert <1.475$ and $1.375 < \vert\eta\vert <3.2$, respectively, and arranged in a way to provide uniform performance and resolution as a function of $\phi$. The barrel \gls{em} calorimeter consists of two identical half-barrels with a small gap of $\SI{4}{\centi\meter}$ at $z=0$. In the end-caps, the \gls{emec} consists of two coaxial wheels, covering the region $1.375 < \vert\eta\vert <2.5$ and $2.5 < \vert\eta\vert <3.2$, respectively. Calorimeter cells in the \gls{em} calorimeter are segmented into multiple layers with fine granularity in the first layers in the $\eta$ region matching the ID, and coarser granularity in the outer layers and for $2.5 < \vert\eta\vert <3.2$. In order to offer good containment of electromagnetic showers, the \gls{em} calorimeter has a depth of at least $22$ ($24$) radiation lengths in the barrel (end-caps). A single instrumented \gls{lar} layer serves as presampler in the region with $\vert\eta\vert <1.8$, allowing measurements of the energy losses upstream of the \gls{em} calorimeter, as for example in the cryostats. The design energy resolution of the \gls{em} calorimeter is $\sigma_E / E = 10\% / \sqrt{E} \oplus 0.7\%$. \subsubsection{Hadronic calorimeter} Placed directly outside the envelope of the \gls{em} calorimeter is the hadronic tile calorimeter~\cite{Aad:2008zzm}. It uses steel plates as absorber and polystyrene-based scintillating tiles as active material, and is subdivided into one central and two extended barrels. Each barrel is segmented in three layers in depth, with a total thickness of $7.4$ interaction lengths. The tiles are oriented radially and perpendicular to the beam pipe and grouped in 64 tile modules per barrel, resulting in a near hermetic azimuthal coverage. Wavelength shifting fibres are used to shift the ultraviolet light produced in the scintillator to visible light and guide it into photomultipliers located at the radially far end of each module. The tile calorimeter covers a region with $\vert\eta\vert <1.7$ and has a granularity of $\upDelta \eta \times \upDelta \phi = 0.1 \times 0.1$ except for the outermost layer which has a slightly coarser granularity in $\eta$. The design energy resolution of the tile calorimeter is $\sigma_E / E = 56.4\% / \sqrt{E} \oplus 5.5\%$. Hadronic calorimetry in the end-caps is provided by two independent calorimeter wheels per end-cap, situated directly behind the \gls{emec}. Similar to the \gls{emec}, the \gls{hec} also uses \gls{lar} as active material, allowing both calorimeter systems to share a single cryostat per end-cap. Instead of lead, the \gls{hec} uses copper as absorber, which not only drastically reduces the mass of a calorimeter at a given interaction length, but also improves the linearity of low-energy hadronic signals~\cite{Lee:2637852}. 
Each of the four wheels of the \gls{hec} is composed of 32 wedge-shaped modules, divided into two layers in depth. The \gls{hec} provides coverage in the region with $1.5 < \vert\eta\vert <3.2$, slightly overlapping with the tile calorimeter and thus reducing the drop in material density in the transition region. While the granularity in the precision region with $1.5 < \vert\eta\vert <2.5$ is the same as for the tile calorimeter, more forward regions with large $\vert\eta\vert$ have a granularity of $\upDelta \eta \times \upDelta \phi = 0.2 \times 0.2$. The design resolution of the \gls{hec} is $\sigma_E / E = 70.6\% / \sqrt{E} \oplus 5.8\%$. \subsubsection{Forward Calorimeter} The forward region with $3.1 < \vert\eta\vert <4.9$ is covered by the \gls{lar}~\gls{fcal}~\cite{Aad:2008zzm}, which is integrated into the end-cap cryostats. This hermetic design not only minimises energy losses in cracks between the calorimeter systems, but also reduces the amount of background reaching the muon system in the outer shell of the ATLAS experiment. In order to limit the number of neutrons reflected into the \gls{id}, the \gls{fcal} is recessed by about $\SI{1.2}{\meter}$ with respect to the \gls{em} calorimeter, motivating a high-density design due to space constraints. The \gls{fcal} in each end-cap consists of three layers with a total depth of 10 interaction lengths. While the first layer uses copper as absorber and is optimised for electromagnetic measurements, the remaining two layers are made of tungsten and cover hadronic interactions. The metals comprising each layer are arranged in a matrix structure, with electrodes consisting of rods and tubes parallel to the beam pipe filling regular channels. The small gaps ($\SI{0.25}{\milli\meter}$ in the first layer) between the rods and tubes of the electrodes are filled with \gls{lar} as active material. \subsection{Muon spectrometer} Muons, being minimum ionising particles, are the only charged particles that consistently pass through the entire detector including the calorimeter system. Providing one of the cleanest signatures for \gls{bsm} physics~\cite{Brock:1354959}, muonic final states are measured with a dedicated detector system in the outermost layer of the ATLAS experiment. Embedded in the magnetic field of the toroid magnets, the \gls{ms}~\cite{Aad:2008zzm} consists of three concentric cylindrical layers in the barrel region, and three wheels in each end-cap, providing momentum measurements up to $\vert\eta\vert <2.7$. It is designed to deliver a transverse momentum resolution of $10\%$ for $\SI{1}{\TeV}$ tracks and be able to measure muon momenta down to roughly $\SI{3}{\GeV}$. The \gls{ms} uses two high-precision gaseous detector chamber types, \gls{mdt} chambers and \glspl{csc}. As both the \glspl{mdt} and \glspl{csc} rely on charges drifting towards an anode or cathode, their maximum response times of $\SI{700}{\nano\second}$ and $\SI{50}{\nano\second}$, respectively, are slow compared to the bunch spacing of $\SI{25}{\nano\second}$. ATLAS therefore uses \glspl{rpc} in the barrel and \glspl{tgc} in the end-caps as triggers in order to associate measurements to the correct bunch crossing. \subsubsection{Monitored drift tubes} The \gls{mdt} chambers~\cite{Aad:2008zzm} are the main subcomponent providing precision measurements of the muon tracks up to $\vert\eta\vert <2.7$, except in the innermost end-cap layer where their coverage only extends to $\vert\eta\vert <2.0$.
The \gls{mdt} chambers are made of 3--4 layers of $\SI{30}{\milli\meter}$ diameter drift tubes operated with Ar/CO$_2$ gas\footnote{With a small admixture of $\SI{300}{ppm}$ of water to improve high voltage stability.} pressurised to $\SI{3}{\bar}$. Charged particles traversing the drift tubes ionise the gas, creating electrons that drift towards a central tungsten-rhenium anode wire with a diameter of $\SI{50}{\micro\meter}$. Following the symmetry of the barrel toroid magnet, the \gls{mdt} chambers are arranged in octants around the calorimeters with the drift tubes oriented in the $\phi$ direction, \ie tangential to circles around the beam pipe. In order to be able to correct for potential chamber deformations due to varying thermal gradients, each \gls{mdt} chamber is equipped with an internal optical alignment system. Apart from the regular chambers in the barrel and the end-cap wheels, special modules are installed in order to minimise the acceptance losses due to the ATLAS support structure (the \textit{feet} of the experiment). With a single-tube accuracy of $\SI{80}{\micro\meter}$, two combined 3 (4)-tube multi-layers yield a resolution of 35~(30)~$\SI{}{\micro\meter}$. As \gls{mdt} chambers only provide precision measurements in $\eta$, the particle information in $\phi$ is taken from the \glspl{rpc} and \glspl{tgc}. \subsubsection{Cathode strip chambers} In the region with $\vert\eta\vert > 2.0$ in the first layer of the end-caps, the particle flux is too high to allow for safe operation of \gls{mdt} chambers. Instead, \glspl{csc}~\cite{Aad:2008zzm}, multiwire proportional chambers, are used for precision measurements in this region. The gold-plated tungsten-rhenium anode wires in the \glspl{csc} have a diameter of $\SI{30}{\micro\meter}$ and are oriented in the radial direction. The wires are enclosed on both sides by cathode planes, one segmented perpendicular to the wires (thus providing the precision coordinate), the other parallel to the wires. Each chamber is filled with an Ar/CO$_2$ gas mixture and consists of four wire planes, resulting in four measurements of $\eta$ and $\phi$ for each track. In addition to the chamber-internal alignment sensors, ATLAS also employs an optical alignment system in order to align the precision chambers to each other. The \glspl{csc} provide a resolution of about $\SI{45}{\micro\meter}$ in $R$ and $\SI{5}{\milli\meter}$ in $\phi$. \subsubsection{Resistive plate chambers} \glspl{rpc}~\cite{Aad:2008zzm} are gaseous parallel electrode-plate chambers filled with a non-flammable, low-cost, tetrafluoroethane-based gas mixture. They use two resistive plastic laminate plates kept $\SI{2}{\milli\meter}$ apart by insulating spacers. Due to an electric field of roughly $\SI{4.9}{\kilo\volt\per\milli\meter}$ between the plates, charged particles traversing the chamber cause avalanches of charges that can be read out through capacitive coupling to metallic strips, mounted on the outside of the resistive plates. In order to provide tracking information in both coordinates, each \gls{rpc} consists of two rectangular units, each containing two gas volumes with a total of four pairwise orthogonal sets of readout strips. The three concentric cylindrical layers of \glspl{rpc} in the barrel region cover $\vert\eta\vert <1.05$ and provide six measurements of $\eta$ and $\phi$. % with a resolution of $\SI{10}{\milli\meter}$ per chamber.
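As a brief numerical cross-check (an illustrative sketch only), the chamber resolutions quoted above for the \gls{mdt} chambers are roughly consistent with a naive $1/\sqrt{N}$ combination of $N$ independent single-tube measurements; this ignores correlations and the details of the track fit, so it should only be read as an order-of-magnitude estimate:

\begin{verbatim}
# Naive illustration: combining N independent tube measurements of accuracy
# sigma_tube scales roughly like sigma_tube / sqrt(N). This ignores
# correlations and the track fit, but approximately reproduces the quoted
# chamber resolutions.
import math

sigma_tube_um = 80.0
for tubes_per_layer, quoted_um in ((3, 35), (4, 30)):
    n_tubes = 2 * tubes_per_layer      # two multi-layers per chamber
    sigma_chamber = sigma_tube_um / math.sqrt(n_tubes)
    print(f"{tubes_per_layer}-tube multi-layers: ~{sigma_chamber:.0f} um "
          f"(quoted: {quoted_um} um)")
\end{verbatim}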
\subsubsection{Thin gap chambers} The \glspl{tgc}~\cite{Aad:2008zzm} are not only necessary for triggering in the end-caps of the \gls{ms} but also provide measurements of a second coordinate orthogonal to the measurements of the \glspl{mdt}. \glspl{tgc} are multi-wire proportional chambers enclosed by two cathode planes, with a wire-to-wire spacing of $\SI{1.8}{\milli\meter}$. The gas mixture of CO$_2$ and n-pentane allows for a quasi-saturated operation mode, resulting in a relatively low gas gain. Each \gls{tgc} unit is built from a doublet or triplet of such chambers, separated by a supporting honeycomb structure. In each unit, the azimuthal coordinate is measured by radial copper readout strips, while the bending coordinate is provided by the wire groups. The \glspl{tgc} are mounted in two concentric disks in each end-cap, one covering the pseudorapidity range $1.05 < \vert\eta\vert < 1.92$ and one covering the more forward region $1.92 < \vert\eta\vert <2.4$. \subsection{Forward detectors} Apart from the relative luminosity monitor LUCID-2~\cite{Avoni_2018} (introduced in \cref{sec:lumi_datataking}) located at $\pm \SI{17}{\meter}$ from the \gls{ip}, ATLAS uses three additional small detectors in the forward region. At $\pm \SI{140}{\meter}$ from the \gls{ip}, immediately behind the location where the straight beam pipe splits back into two separate beam pipes, lies the Zero-Degree Calorimeter~\cite{Leite:1628749}. It is embedded in a neutral particle absorber and mainly measures forward neutrons with $\vert\eta\vert > 8.3$ in heavy-ion collisions. Even further out from the \gls{ip}, at $\pm \SI{240}{\meter}$, lies the \gls{alfa} detector~\cite{AbdelKhalek:2016tiv}, consisting of scintillating fibre trackers placed in Roman pots~\cite{AMALDI1977390}, and measuring the absolute luminosity through small scattering angles of $\SI{3}{\micro\radian}$ (necessitating the special beam conditions also used for the LUCID-2 calibrations). The last of the forward detectors is the \gls{afp} detector~\cite{Adamczyk:2017378}, installed at the end of 2016 and operational since early 2017. It is situated at $\pm\SI{205}{\meter}$ and $\pm\SI{217}{\meter}$ from the \gls{ip}, and consists of Roman pots containing silicon trackers and time-of-flight detectors, allowing the study of very forward protons from elastic and diffractive scattering processes. \subsection{Trigger and data acquisition system}\label{sec:trigger} With a nominal bunch spacing of $\SI{25}{\nano\second}$, the bunch crossing rate within ATLAS is $\SI{40}{\MHz}$. Even with only a single $pp$ collision event per bunch crossing, a mean event size of about $\SI{1.6}{\mega\byte}$ would result in a data volume of more than $\SI{60}{\tera\byte}$ per second. Building and maintaining computing and storage facilities able to handle this bandwidth would significantly exceed the available resources. Luckily, interesting\footnote{The definition of what is deemed to be interesting is to some extent subjective and is at the origin of the diverse \textit{trigger menu}~\cite{Martinez:2016udm} used in ATLAS.} physics events will often only occur at relatively low rates and will generally be hidden in vast amounts of QCD processes that have much higher cross-sections. In order to reduce the event rate written to disk and focus on interesting signatures worth studying, ATLAS used a two-level \textit{trigger} system during the Run~2 data-taking period~\cite{Martinez:2016udm}.
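The quoted raw data volume follows directly from the bunch-crossing rate and the mean event size; the following back-of-the-envelope sketch (assuming $\SI{1}{\mega\byte} = 10^{6}$ bytes) makes the arithmetic explicit:

\begin{verbatim}
# Back-of-the-envelope check of the raw data volume quoted above, assuming a
# 40 MHz bunch-crossing rate and a mean event size of 1.6 MB (1 MB = 1e6 B).
bunch_crossing_rate_hz = 40e6
mean_event_size_bytes = 1.6e6

raw_rate_bytes_per_s = bunch_crossing_rate_hz * mean_event_size_bytes
print(f"raw data rate: {raw_rate_bytes_per_s / 1e12:.0f} TB/s")  # about 64 TB/s
\end{verbatim}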
The general approach consists of buffering events into temporary memory until the trigger system has decided to keep or discard them. The size of the temporary memory directly dictates the latency available to the trigger system for making a decision. The \gls{l1} trigger~\cite{CERN-LHCC-98-014} is the first stage of the trigger system. It is hardware-based and uses only coarse granularity calorimeter and muon detector information. With the inclusion of the \gls{l1} topological processor~\cite{Aad:2020wji} in Run~2, the \gls{l1} trigger is able to exploit topological features based on angular and kinematic selections and defines \glspl{roi}, \ie regions in $\eta$ and $\phi$ with interesting properties, which are then further analysed by the subsequent trigger step. Memory constraints allow for a decision time of $\SI{2.5}{\micro\second}$ per event, and the \gls{l1} trigger reduces the event rate from the bunch-crossing rate of $\SI{40}{\MHz}$ to $\SI{100}{\kHz}$. The \glspl{roi} generated by the \gls{l1} trigger are subsequently processed by the \gls{hlt}~\cite{Jenni:616089}, a software-based trigger running on a dedicated computing farm. The \gls{hlt} has access to the full detector granularity in the \glspl{roi} as well as the entire event and runs reconstruction algorithms similar to those used in offline analysis, allowing the decisions from the \gls{l1} trigger to be refined significantly. The \gls{hlt} reduces the event rate from $\SI{100}{\kHz}$ to $\SI{1}{\kHz}$. Events that pass one of the \gls{hlt} chains are written to permanent storage at CERN. The data-flow from the detectors to the storage elements and between the \gls{l1} and \gls{hlt} trigger elements is handled by the \gls{daq}~\cite{Jenni:616089}. %\subsection{Object reconstruction} \subsection{Monte Carlo simulation}\label{sec:mc_simulation} \glsreset{mc} \gls{mc} methods play a crucial role in simulating physics events in ATLAS. \gls{mc} simulations are computational algorithms using repeated random sampling to solve complex problems, often the estimation of multi-dimensional integrals for which analytical solutions are not known. According to the law of large numbers, the numerical approximations obtained by such a stochastic method become more accurate as the sample size increases. In addition, the central limit theorem allows an uncertainty to be assigned to the estimate of an expected value. As this method can in principle be used for any problem with a probabilistic interpretation, it is well suited for particle physics, where many aspects are inherently connected to \glspl{pdf}. In the ATLAS experiment, \gls{mc} methods are not only used in physics analysis to estimate contributions from various physics processes in different phase space regions, but also to simulate particle interactions with the detector material. Furthermore, \gls{mc} methods find ample applications in detector design and optimisation, as well as in physics object reconstruction techniques. All of these applications rely on the \gls{mc} simulations being as precise as possible, \ie correctly describing the physics processes and detector responses underlying the data recorded by the ATLAS experiment.
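As a small, self-contained illustration of these two statements (a toy Python sketch, unrelated to any ATLAS software), the following estimate of $\pi$ by uniform sampling of the unit square shows how the statistical uncertainty suggested by the central limit theorem shrinks roughly like $1/\sqrt{N}$:

\begin{verbatim}
# Minimal illustration of the 1/sqrt(N) behaviour of Monte Carlo estimates:
# estimate pi by uniform sampling of the unit square and compute the
# standard error of the mean as suggested by the central limit theorem.
import random
import statistics

random.seed(1)

def estimate_pi(n_samples):
    hits = [1.0 if random.random()**2 + random.random()**2 < 1.0 else 0.0
            for _ in range(n_samples)]
    mean = statistics.fmean(hits)
    err = statistics.stdev(hits) / n_samples**0.5
    return 4.0 * mean, 4.0 * err

for n in (10**2, 10**4, 10**6):
    value, error = estimate_pi(n)
    print(f"N = {n:>7d}: pi ~ {value:.4f} +/- {error:.4f}")
\end{verbatim}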
For reasons of efficient computing resource utilisation and easier software validation, the ATLAS simulation infrastructure~\cite{Aad:2010ah} can be divided into three main steps: \begin{enumerate}[label=(\roman*)] \item Event generation, \item Detector simulation, \item Digitisation, \end{enumerate} producing an output format identical to that of the \gls{daq} for recorded $pp$ collision events, such that the same trigger and reconstruction algorithms can be run on simulated data. \subsubsection{Event generation} \begin{figure} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,center},capbesidewidth=0.38\textwidth}}]{figure}[\FBwidth] {\caption{Pictorial representation of a top quark pair-production event in association with a Higgs boson ($t\bar{t}+h$), simulated by a \gls{mc} event generator. The hard interaction (big red blob) is followed by the decay of the two top quarks and the Higgs boson (small red blobs). \gls{isr} and \gls{fsr} are shown as curly blue and red lines, respectively. A second interaction is simulated (purple blob) and contributions from the underlying event are modelled (purple lines). The hadronisation of final-state partons (light green blobs) is followed by the decays of unstable hadrons (dark green blobs). \gls{qed} radiation (yellow lines) is added at each stage of the event simulation. Figure adapted from~\reference\cite{Gleisberg:2008ta}.}\label{fig:sherpa_event}} {\includegraphics[width=0.6\textwidth]{sherpa_event}} \end{figure} Only a fraction of all $pp$ events actually involve a \textit{hard-scattering} event with high-momentum transfer, rendering them interesting for particle physicists to study. Generating and understanding the final states of these $pp$ collision events is an enormously challenging problem as it typically involves hundreds of particles with energies spanning many orders of magnitude~\cite{Buckley:2011ms}. This makes the matrix elements connected to these processes too complicated to be computed beyond the first few orders of perturbation theory. The treatment of divergences and the integration over large phase spaces further complicates the calculation of experimental observables. Due to the high-momentum transfer scale, the cross section of the hard-scatter interaction can be calculated perturbatively using collinear factorisation~\cite{Buckley:2011ms}, \begin{equation} \sigma = \sum_{a,b}{\int_0^1{\diff x_a \diff x_b\int{\diff\Phi_n f_a^{h_1}(x_a,\mu_\mathrm{F}) f_b^{h_2}(x_b,\mu_\mathrm{F})} \times \frac{1}{2 x_a x_b s}\vert \mathcal{M}_{ab\rightarrow n} \vert^2 (\Phi_n; \mu_\mathrm{F},\mu_\mathrm{R})}}, \end{equation} where $x_a$ and $x_b$ are the momentum fractions of the partons $a$ and $b$ with respect to their parent hadrons $h_1$ and $h_2$, $\mu_\mathrm{F}$ and $\mu_\mathrm{R}$ are the unphysical factorisation and the renormalisation scales, and $\diff\Phi_n$ is the differential final state phase space element. The phase space integration is typically done using \gls{mc} sampling methods. The choices for $\mu_\mathrm{R}$ and $\mu_\mathrm{F}$ are to some extent arbitrary, but are typically chosen to be in accordance with the logarithmic structure of \gls{qcd}, such that the matrix elements can be combined with the subsequent parton showers~\cite{Buckley:2011ms}. The \gls{me} $\vert\mathcal{M}_{ab\rightarrow n}\vert^2$ can be calculated using different methods~\cite{Buckley:2011ms}, with most \gls{mc} generators employing \gls{lo} computations. 
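The following toy sketch is meant only to illustrate how an integral of this factorised form can be estimated by \gls{mc} sampling of the momentum fractions; the falling parton-density-like shape and the partonic cross section used here are invented for illustration, and real event generators use fitted parton distribution functions and importance sampling rather than the uniform sampling shown here:

\begin{verbatim}
# Toy sketch of estimating an integral of the factorised form
#   sigma ~ int dx_a dx_b f(x_a) f(x_b) sigma_hat(x_a * x_b * s)
# by Monte Carlo sampling. The "PDF" shape and the partonic cross section
# below are invented for illustration only.
import random

random.seed(42)

def toy_pdf(x):
    # invented, steeply falling shape; NOT a fitted parton distribution
    return x**-0.5 * (1.0 - x)**3

def toy_sigma_hat(s_hat):
    # invented partonic cross section with a threshold (arbitrary units)
    return 1.0 / s_hat if s_hat > 100.0 else 0.0

S = 1.0e4           # toy hadronic centre-of-mass energy squared
N = 200_000
total = 0.0
for _ in range(N):
    xa = random.uniform(1e-6, 1.0)
    xb = random.uniform(1e-6, 1.0)
    total += toy_pdf(xa) * toy_pdf(xb) * toy_sigma_hat(xa * xb * S)

print(f"toy cross-section estimate: {total / N:.4g} (arbitrary units)")
\end{verbatim}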
As \gls{lo} matrix elements are only reliable for the shapes of the distributions, an additional \textit{K-factor}, correcting the normalisation of the cross section to \gls{nlo}, is typically used~\cite{Buckley:2011ms}. The probability of finding a parton with momentum fraction $x$ in a hadron $h$ is given by the \gls{PDF} $f_a^{h}(x,\mu_\mathrm{F})$ and depends on the probed factorisation scale $\mu_\mathrm{F}$. The \glspl{PDF} depend on non-perturbative aspects of the proton wave function and can thus not be calculated from first principles. Instead, they are extracted from measurements in deep inelastic scattering experiments (see \eg~\references\cite{Gribov:1972ri, Blumlein:1996wj}). The variety of \glspl{PDF} provided by different groups is accessible in a common format through a unified interface implemented by the \textsc{LHAPDF} library~\cite{Buckley:2014ana}. In \gls{mc} generators, the choice of \glspl{PDF} plays a crucial role not only in the simulation of the hard process, but also in the subsequent parton showers and multiple parton interactions, thus influencing both cross sections and event shapes. Fixed-order matrix elements work well for describing separated, hard partons but are not sufficient to describe soft and collinear partons. Higher order effects from gluon radiation can be simulated using a \gls{ps} algorithm. The emitted gluons will radiate additional gluons or split into quark--antiquark pairs which can, in turn, undergo additional gluon radiation. The \gls{ps} thus describes an evolutionary process in momentum transfer scales from the scale of the hard scatter interaction down to the infrared scale of $\mathcal{O}(\SI{1}{\GeV})$, where \gls{qcd} becomes non-perturbative, and partons are confined into hadrons. Both \gls{isr} and \gls{fsr} processes are simulated through the parton showering. As opposed to \gls{me} calculations, parton showers offer poor modelling of configurations with only a few hard partons, but excel in the simulation of collinear and soft multi-parton states. In order to avoid double counting, the hard partons described by the calculation of the \gls{me} and the soft collinear emissions of the \gls{ps} have to be connected to each other. This is done either through \textit{matching} or \textit{merging}. \gls{me} matching approaches~\cite{Bengtsson:1986hr} integrate higher-order corrections to an inclusive process with the \gls{ps}~\cite{Buckley:2011ms}. Merging techniques like the CKKW~\cite{Catani:2001cc} or CKKW-L~\cite{Lonnblad:2001iq} methods define an unphysical merging scale which can be understood as a jet resolution scale, such that higher order \gls{me} corrections are only calculated for jets above that scale (while jets below that scale are modelled with the \gls{ps}). Additional activity in the event that is not directly associated with the hard process is also simulated. The underlying event is typically defined to be the remaining, additional activity after \gls{isr} and \gls{fsr} off the hard process have been taken into account~\cite{Buckley:2011ms}. Furthermore, \textit{multiple interactions} can occur in a single $pp$ collision. The modelling of multiple interactions involves multiple hard scatter processes per $pp$ collision as well as multiple soft interactions in addition to the hard scatter process. Once the \gls{ps} reaches energies of $\mathcal{O}(\SI{1}{\GeV})$, entering the non-perturbative regime of \gls{qcd}, the coloured objects need to be transformed into colourless states.
This so-called \textit{hadronisation} step cannot be calculated from first principles but has to be modelled, typically with either a \textit{string} or a \textit{cluster} model. The most advanced of the string models is the \textit{Lund} model~\cite{Andersson:1983ia,andersson_1998}. It starts from linear confinement and considers a linear potential between a $q\bar{q}$ pair that can be thought of as a uniform colour flux tube stretching between the $q$ and $\bar{q}$, with a transverse dimension of the order of typical hadronic size (\ie around $\SI{1}{\femto\meter}$). As the $q\bar{q}$ pair moves apart, the flux tube stretches in length, leading to an increase in potential energy, finally breaking apart once enough energy is available to create a new $q'\bar{q}'$ pair, resulting in two colourless quark pairs $q\bar{q}'$ and $q'\bar{q}$. The new quark pairs can again move apart and break up further, leading to quark anti-quark pairs with low relative momentum, forming the final hadrons. The cluster model is based on the preconfinement property of \glspl{ps}~\cite{Amati:1979fg}, stating that the colourless clusters of partons can be formed at any evolution scale $Q_0$ of the \gls{ps}, and result in universal invariant mass distributions that depend only on $Q_0$ and the \gls{qcd} scale $\Lambda$, but not on the energy scale $Q$ or nature of the hard process at the origin of the \gls{ps}~\cite{Buckley:2011ms}. The universal invariant mass distribution holds in the asymptotic limit where $Q_0 \ll Q$. If further $Q_0 \gg \Lambda$, then the mass, momentum and multiplicity distributions of the colourless clusters can even be calculated perturbatively~\cite{Buckley:2011ms}. Cluster models start with non-perturbative splitting of gluons and $q\bar{q}$ pairs, followed by the formation of clusters from colour-connected pairs. Clusters further split up until the $Q_0$ scale is reached, at which point they form the final mesons. As not all hadrons formed in the hadronisation process are stable, the affected hadrons need to be decayed until they form resonances stable enough to reach the detector material. In addition, \gls{qed} radiation, that can happen at any time during the event, needs to be simulated. This is typically either done with algorithms similar to the ones used for the \gls{ps}, or using the Yennie--Frautschi--Suura formalism proposed in \reference\cite{YENNIE1961379}. The simulation steps that cannot be performed from first principles but rely on phenomenological models (underlying event, \gls{ps}, hadronisation) introduce free parameters that need to be derived or \textit{tuned} from parameter optimisations against experimental data. In ATLAS, the output of \gls{mc} event generators is stored in so-called EVNT data format containing HepMC-like~\cite{Dobbs:2001ck} event records. Although only the stable final-state particles are propagated to the detector simulation, the original event record contains the connected tree (either the entirety or only part of it, depending on the \gls{mc} generator used) as so-called \textit{Monte Carlo truth}. A representation of a full simulated \gls{susy} signal event considering the simplified model for electroweakino pair production from \cref{fig:Wh_model} is shown in \cref{fig:mcviz_signal}. %\begin{figure} % \centering % \includegraphics[width=\textwidth]{test26} % \caption{Pictorial representation of a (relatively simple) fully showered electroweakino pair production event with a final state including an electron and two \textit{b}-jets. 
% Most of the additional activity in the event stems from \gls{qcd} interactions and results in a large number of hadrons in the final state. The two incoming protons are marked as blue blobs. Gluons are represented as curly green lines, and gluon self-interaction is shown as green blobs (indicating only initial and final particles). Gauge and Higgs bosons are shown as pink lines. Photon radiation is shown as curly yellow lines.} % \label{fig:mcviz_signal} %\end{figure} \subsubsection{Detector simulation}\label{sec:detector_simulation} Only the final-state particles generated by the \gls{mc} event generator are read into the detector simulation. In ATLAS, the full detector simulation is handled by \textsc{Geant4}~\cite{geant:2002hh}, a toolkit providing detailed models for physics processes as well as an infrastructure for particle transportation through a given geometry. \textsc{Geant4} has knowledge about the full detector geometry as well as the materials used in the subdetectors and is able to compute the energy deposits (so-called \textit{hits}) from single particles in the different sensitive portions of the detector components. The \textsc{Geant4} simulation adds information to the Monte Carlo truth content created during the event generation, including, however, only the most relevant tracks (mostly from the \gls{id}) due to size constraints~\cite{Aad:2010ah}. The complicated detector geometry and the detailed description of physics processes require large computing resources for the full detector simulation using \textsc{Geant4}, rendering it impractical for many physics studies requiring large statistics. Several varieties of fast simulations are available as an alternative. One of the most widely used is \textsc{ATLFAST-II}~\cite{Aad:2010ah}, a fast simulation that uses the \textsc{Geant4} full simulation only for the \gls{id} and \gls{ms}. The slow simulation in the calorimeters---taking about 80\% of the full simulation time---is replaced with \textsc{FastCaloSim}~\cite{ATL-SOFT-PUB-2018-002}, using parameterised electromagnetic and hadronic showers. Compared to the $\mathcal{O}(\SI{e3}{\second})$ simulation time per event in the full simulation, the \textsc{ATLFAST-II} detector simulation only takes $\mathcal{O}(\SI{e2}{\second})$~\cite{Aad:2010ah}. For the large-scale reinterpretation discussed in \cref{part:reinterpretation} of this thesis, even the \textsc{ATLFAST-II} detector simulation is still too computationally expensive, and instead the truth-level \gls{mc} events have to be used. In order to approximate the detector response, dedicated \textit{four-vector smearing techniques}, introduced in \cref{ch:preservation}, are employed. \subsubsection{Digitisation} During the digitisation step, the hits from the detector simulation are converted into detector responses, so-called \textit{digits}, which are typically produced when currents or voltages in the respective readout channels rise above a certain threshold in a given time window. The digitisation includes a modelling of the peculiarities of each detector component, including electronic noise and cross-talk~\cite{Aad:2010ah}. The effects from out-of-time and in-time pile-up are considered by reading in multiple events and overlaying their hits. In order to match the true pile-up distribution in data, the number of events to overlay per bunch crossing can be set at run time.
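As a simplified illustration of this overlay step (a toy sketch only; the actual digitisation machinery, including out-of-time pile-up and the measured $\mu$ profile, is considerably more involved), the number of additional inelastic interactions to overlay in a given bunch crossing can be drawn from a Poisson distribution whose mean is set to the desired pile-up level, here taken to be about 34, roughly the Run~2 average:

\begin{verbatim}
# Simplified illustration of pile-up overlay: draw the number of additional
# minimum-bias interactions per bunch crossing from a Poisson distribution
# with mean mu. The value of mu used here is only indicative.
import numpy as np

rng = np.random.default_rng(7)
mu = 34.0
print("interactions to overlay in the next 10 bunch crossings:",
      rng.poisson(mu, size=10).tolist())
\end{verbatim}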
As described in~\cref{sec:pileup}, effects from cavern background, beam halo and beam gas can either be mitigated or removed at analysis level and are therefore typically not simulated. \begin{sidewaysfigure}[ht] \includegraphics[width=0.9\textwidth]{test26} \caption{Pictorial representation of a (relatively simple) fully showered electroweakino pair production event with a final state including an electron and two \textit{b}-jets. Most of the additional activity in the event stems from \gls{qcd} interactions and results in a large number of hadrons in the final state. The two incoming protons are marked as blue blobs. Gluons are represented as curly green lines, and gluon self-interaction is shown as green blobs (indicating only initial and final particles). Gauge and Higgs bosons are shown as pink lines. Photon radiation is shown as curly yellow lines.} \label{fig:mcviz_signal} \end{sidewaysfigure}
{ "alphanum_fraction": 0.7844243962, "avg_line_length": 141.7382978723, "ext": "tex", "hexsha": "6b79fa026b085faac7d90f576f94948ca34ab272", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "607efdd3d48ec4def49ba41188c4453b04dd99d2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eschanet/phd-thesis", "max_forks_repo_path": "chapter-experiment/experiment.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "607efdd3d48ec4def49ba41188c4453b04dd99d2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eschanet/phd-thesis", "max_issues_repo_path": "chapter-experiment/experiment.tex", "max_line_length": 1580, "max_stars_count": null, "max_stars_repo_head_hexsha": "607efdd3d48ec4def49ba41188c4453b04dd99d2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eschanet/phd-thesis", "max_stars_repo_path": "chapter-experiment/experiment.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 16845, "size": 66617 }
\chapter{Multiple Modules and GLOBAL} PEARL supports the separation of an application into separate modules. In OpenPEARL, the module name given by the \texttt{MODULE(name)} statement defines the namespace in C++. In order to avoid conflicts with namespaces in the standard libraries, the supplied module name gets the prefix \texttt{pearl\_}. All user-supplied identifiers reside in the namespace of the module. The complete runtime library uses the namespace \texttt{pearlrt}. OpenPEARL needs some additional variables for its internal organisation. They may reside in the same namespace, provided that name conflicts are avoided. This is achieved by prefixing all user-supplied identifiers with \texttt{\_}. \begin{PEARLCode} MODULE(moduleA); PROBLEM; DCL a FIXED GLOBAL; SPC b FIXED GLOBAL (moduleB); ... ! in any PROC a := b; ... \end{PEARLCode} will produce C++ code like: \begin{CppCode} #include "PearlIncludes.h" namespace pearl_moduleA { pearlrt::Fixed<31> _a; } namespace pearl_moduleB { extern pearlrt::Fixed<31> _b; } namespace pearl_moduleA { ... // in any PROC _a = pearl_moduleB::_b; ... } \end{CppCode}
{ "alphanum_fraction": 0.7450980392, "avg_line_length": 23.375, "ext": "tex", "hexsha": "a30d5257775e047e2cf5b3eeb5309d57a5fba08b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d7db83b0ea15b7ba0f6244d918432c830ddcd697", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "BenniN/OpenPEARLThesis", "max_forks_repo_path": "OpenPEARL/openpearl-code/runtime/doc/global.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d7db83b0ea15b7ba0f6244d918432c830ddcd697", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "BenniN/OpenPEARLThesis", "max_issues_repo_path": "OpenPEARL/openpearl-code/runtime/doc/global.tex", "max_line_length": 78, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d7db83b0ea15b7ba0f6244d918432c830ddcd697", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "BenniN/OpenPEARLThesis", "max_stars_repo_path": "OpenPEARL/openpearl-code/runtime/doc/global.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-15T07:26:00.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-15T07:26:00.000Z", "num_tokens": 283, "size": 1122 }
% ---------------------------------------------------------------------------- \typeout{--------------- INTROduction ---------------------------------------} \chapter{Introduction} \label{INTRO} Soar has been developed to be an architecture for constructing general intelligent systems. It has been in use since 1983, and has evolved through many different versions. This manual documents the most current of these: Soar, version \SoarVersionMajor.\SoarVersionMinor.\SoarVersionRevision. Our goals for Soar include that it is to be an architecture that can: \vspace{-12pt} \begin{itemize} \item be used to build systems that work on the full range of tasks expected of an \linebreak intelligent agent, from highly routine to extremely difficult, open-ended problems;\vspace{-6pt} \item represent and use appropriate forms of knowledge, such as procedural, declarative, episodic, and possibly iconic;\vspace{-6pt} \item employ the full range of problem solving methods;\vspace{-6pt} \item interact with the outside world; and\vspace{-6pt} \item learn about all aspects of the tasks and its performance on those tasks. \end{itemize} In other words, our intention is for Soar to support all the capabilities required of a general intelligent agent. Below are the major principles that are the cornerstones of Soar's design: \vspace{-12pt} \begin{enumerate} \item The number of distinct architectural mechanisms should be minimized. Classically Soar had a single representation of permanent knowledge (productions), a single representation of temporary knowledge (objects with attributes and values), a single mechanism for generating goals (automatic subgoaling), and a single learning mechanism (chunking). It was only as Soar was applied to diverse tasks in complex environments that we found these mechanisms to be insufficient and have recently added new long-term memories (semantic and episodic) and learning mechanisms (semantic, episodic, and reinforcement learning) to extend Soar agents with crucial new functionalities. \vspace{-6pt} \item All decisions are made through the combination of relevant knowledge at run-time. In Soar, every decision is based on the current interpretation of sensory data and any relevant knowledge retrieved from permanent memory. Decisions are never precompiled into uninterruptible sequences. \end{enumerate} % ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- \section{Using this Manual} \nocomment{check that this describes the final form of the manual} We expect that novice Soar users will read the manual in the order it is presented: \begin{description} \item[Chapter \ref{ARCH} and Chapter \ref{SYNTAX}] describe Soar from different perspectives: \textbf{Chapter \ref{ARCH}} describes the Soar architecture, but avoids issues of syntax, while \textbf{Chapter \ref{SYNTAX}} describes the syntax of Soar, including the specific conditions and actions allowed in Soar productions. \item[Chapter \ref{CHUNKING}] describes chunking, Soar's mechanism to learn new procedural knowledge. Not all users will make use of chunking, but it is important to know that this capability exists. \item[Chapter \ref{RL}] describes reinforcement learning (RL), a mechanism by which Soar's procedural knowledge is tuned given task experience. Not all users will make use of RL, but it is important to know that this capability exists. 
\item[Chapter \ref{SMEM} and Chapter \ref{EPMEM}] describe Soar's long-term declarative memory systems, semantic and episodic. Not all users will make use of these mechanisms, but it is important to know that they exist. \item[Chapter \ref{INTERFACE}] describes the Soar user interface --- how the user interacts with Soar. The chapter is a catalog of user-interface commands, grouped by functionality. The most accurate and up-to-date information on the syntax of the Soar User Interface is found online, on the Soar Wiki, at \hspace{2em}\soar{\htmladdnormallink{http://code.google.com/p/soar/}{http://code.google.com/p/soar/}}. \end{description} Advanced users will refer most often to Chapter \ref{INTERFACE}, flipping back to Chapters \ref{ARCH} and \ref{SYNTAX} to answer specific questions. There are several appendices included with this manual: \begin{description} %\item[Appendix \ref{GLOSSARY}] is a glossary of terminology used in this manual. \item[Appendix \ref{BLOCKSCODE}] contains an example Soar program for a simple version of the blocks world. This blocks-world program is used as an example throughout the manual. %\item[Appendix \ref{USING}] is an overview of example programs currently available %(provided with the Soar distribution) with explanations of how to run them, %and pointers to other help sources available for novices. %\item[Appendix \ref{DEFAULT}] describes Soar's default knowledge, which can be used %(or not) with any Soar task. \item[Appendix \ref{GRAMMARS}] provides a grammar for Soar productions. \item[Appendix \ref{SUPPORT}] describes the determination of o-support. \item[Appendix \ref{PREFERENCES}] provides a detailed explanation of the preference resolution process. %\item[Appendix \ref{Tcl-I/O}] gives an example of Soar I/O functions, written in Tcl. \item[Appendix \ref{GDS}] provides an explanation of the Goal Dependency Set. \end{description} \subsubsection*{Additional Back Matter} The appendices are followed by an index; the last pages of this manual contain a summary and index of the user-interface functions for quick reference. \subsubsection*{Not Described in This Manual} Some of the more advanced features of Soar are not described in this manual, such as how to interface with a simulator, or how to create Soar applications using multiple interacting agents. A discussion of these topics is provided in a separate document, the \textit{SML Quick Start Guide}, which is available at the Soar project website (see link below). For novice Soar users, try \textit{The Soar} \textit{\SoarVersionMajor} \textit{Tutorial}, which guides the reader through several example tasks and exercises. See Section \ref{CONTACT} for information about obtaining Soar documentation. % ---------------------------------------------------------------------------- %\section{Other Soar Documentation} %\label{DOCUMENTATION} % %In addition to this manual, there are three other documents that you may want %to obtain for more information about different aspects of Soar: % %\begin{description} %\item[The Soar 8 Tutorial] is written for novice Soar users, and guides the % reader through several example tasks and exercises. %\item[The Soar Advanced Applications Manual] is written for advanced Soar % users. This guide describes how to add input and output routines to % Soar programs, how to run multiple Soar ``agents'' from a single Soar % image, and how to extend Soar by adding your own user-interface % functions, simulators, or graphical user interfaces. 
%\item[Soar Design Dogma] gives advice and examples about good Soar programming style. % It may be helpful to both the novice and mid-level Soar user. %\end{description} % ---------------------------------------------------------------------------- \section{Contacting the Soar Group} \label{CONTACT} \subsection*{Resources on the Internet} The primary website for Soar is: \hspace{2em}\soar{\htmladdnormallink{http://sitemaker.umich.edu/soar/}{http://sitemaker.umich.edu/soar/}} Look here for the latest downloads, documentation, and Soar-related announcements, as well as links to information about specific Soar research projects and researchers and a FAQ (list of frequently asked questions) about Soar. Soar kernel development is hosted on Google Code at \hspace{2em}\soar{\htmladdnormallink{http://code.google.com/p/soar/}{http://code.google.com/p/soar/}} This site contains the public subversion repository, active documentation wiki, and is also where bugs should be reported. For questions about Soar, you may write to the Soar e-mail list at: \hspace{2em}\soar{\htmladdnormallink{[email protected]}{mailto:[email protected]}} If you would like to be on this list yourself, visit: \hspace{2em}\soar{\htmladdnormallink{http://lists.sourceforge.net/lists/listinfo/soar-group/}{http://lists.sourceforge.net/lists/listinfo/soar-group/}} %The online FAQ will usually contain the most current information on Soar. It %is available at: %\soar{http://acs.ist.psu.edu/soar-faq/soar-faq.html} \newpage \subsection*{For Those Without Internet Access} If you cannot reach us on the internet, please write to us at the following address: \begin{flushleft} \hspace{2em}The Soar Group \\ \hspace{2em}Artificial Intelligence Laboratory \\ \hspace{2em}University of Michigan\\ \hspace{2em}2260 Hayward Street\\ \hspace{2em}Ann Arbor, MI 48109-2121 \\ \hspace{2em}USA \\ \end{flushleft} % ---------------------------------------------------------------------------- % ---------------------------------------------------------------------------- \section{A Note on Different Platforms and Operating Systems} \label{INTRO-platforms} \index{Unix} \index{Linux} \index{Macintosh} \index{Personal Computer} \index{Windows} \index{Operating System} Soar runs on a wide variety of platforms, including Linux, Unix (although not heavily tested), Mac OS X, and Windows 7, Vista, and XP (and probably 2000 and NT). This manual documents Soar generally, although all references to files and directories use Unix format conventions rather than Windows-style folders.
{ "alphanum_fraction": 0.730500821, "avg_line_length": 43.8918918919, "ext": "tex", "hexsha": "6eb64ca8a64c50879cd521d9cca39a04057c0384", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "sleyzerzon/soar", "max_forks_repo_path": "Documentation/ManualSource/intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "sleyzerzon/soar", "max_issues_repo_path": "Documentation/ManualSource/intro.tex", "max_line_length": 151, "max_stars_count": 1, "max_stars_repo_head_hexsha": "74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "sleyzerzon/soar", "max_stars_repo_path": "Documentation/ManualSource/intro.tex", "max_stars_repo_stars_event_max_datetime": "2016-04-01T04:02:28.000Z", "max_stars_repo_stars_event_min_datetime": "2016-04-01T04:02:28.000Z", "num_tokens": 2215, "size": 9744 }
This chapter gives a brief description of how to use the included % periodic samplers (Section~\ref{sec:componentsMonitoring:monitoringController:periodicSamplers}) % for monitoring CPU utilization and memory/swap usage. % The directory \dir{\SigarExampleReleaseDirDistro/} contains the % sources, gradle scripts etc.\ used in this example. % These samplers employ the Sigar API~\cite{HypericSigarWebsite}. \\% \section{Preparation} \begin{compactenum} \item Copy the files \file{\mainJarEMF} and \file{\sigarJar} from the % binary distribution to the example's \dir{lib/} directory. \item Additionally, depending on the underlying system platform, % corresponding Sigar native libraries need to be placed in the example's \dir{lib/} directory. % Kieker's \dir{lib/sigar-native-libs/} folder already includes the right libraries for 32 and 64~bit Linux/Windows platforms. % Native libraries for other platforms can be downloaded from~\cite{HypericSigarWebsite}. % \end{compactenum} \section{Using the Sigar-Based Samplers} \WARNBOX{ Using a very short sampling period with Sigar ($< 500$ ms) can result in monitoring log entries with NaN values. } The Sigar API~\cite{HypericSigarWebsite} provides access to a variety of system-level inventory and monitoring data, % e.g., regarding memory, swap, cpu, file system, and network devices. % Kieker includes Sigar-based samplers % for monitoring CPU utilization % (\class{CPUsDetailedPercSampler}, \class{CPUsCombinedPercSampler}) % and memory/swap usage (\class{MemSwapUsageSampler}). % When registered as a periodic sampler (Section~\ref{sec:componentsMonitoring:monitoringController:periodicSamplers}), % these samplers collect the data of interest employing the Sigar API, % and write monitoring records of types \class{CPUUtilizationRecord}, % \class{ResourceUtilizationRecord}, and \class{MemSwapUsageRecord} respectively % to the configured monitoring log/stream. % Listing~\ref{listing:sigarSamplerMonitoringStarterExample} shows an excerpt from % this example's \class{MonitoringStarter} % which creates and registers two Sigar-based periodic samplers. % For reasons of performance and thread-safety, the \class{SigarSamplerFactory} % should be used to create instances of the Sigar-based samplers. %\pagebreak \setJavaCodeListing \lstinputlisting[firstline=38, lastline=51, firstnumber=38, caption=Excerpt from MonitoringStarter.java, label=listing:sigarSamplerMonitoringStarterExample]{\SigarExampleDir/src/kieker/examples/userguide/appendixSigar/MonitoringStarter.java} \noindent Based on the existing samplers, users can easily create custom Sigar-based % samplers by extending the class \class{AbstractSigarSampler}. For example, Listing~% \ref{listing:sigarSamplerMethod} in Section~\ref{sec:componentsMonitoring:monitoringController:periodicSamplers} % shows the \class{MemSwapUsageSampler}'s \method{sample} method. % Typically, it is also required to define a corresponding monitoring record type, % as explained in Section~\ref{sec:componentsMonitoring:monitoringRecords}. % When implementing custom Sigar-based samplers, the \class{SigarSamplerFactory}'s \method{getSigar} method should % be used to retrieve a \class{Sigar} instance. % This example uses a stand-alone Java application to set up % a Sigar-based monitoring process. When using servlet containers, % users may consider implementing this routine as a \class{ServletContextListener}, % which is executed when the container is started and shut down. % As an example, Kieker includes a \class{CPUMemUsageServletContextListener}.
% \section{Executing the Example} The execution of the example is performed by the following two steps:\\ \begin{compactenum} \item Monitoring CPU utilization and memory usage for 30~seconds (class \class{MonitoringStarter}): \setBashListing \begin{lstlisting}[caption=Start of the monitoring under UNIX-like systems] #\lstshellprompt{}# #\textbf{./gradlew}# runMonitoring \end{lstlisting} \begin{lstlisting}[caption=Start of the monitoring under Windows] #\lstshellprompt{}# #\textbf{gradlew.bat}# runMonitoring \end{lstlisting} Kieker's console output lists the location of the directory containing the file system % monitoring log. The following listing shows an excerpt: % %\enlargethispage{1.5cm} \pagebreak \setBashListing \begin{lstlisting} Writer: 'kieker.monitoring.writer.filesystem.AsciiFileWriter' Configuration: kieker.monitoring.writer.filesystem.AsciiFileWriter.QueueFullBehavior='0' kieker.monitoring.writer.filesystem.AsciiFileWriter.QueueSize='10000' kieker.monitoring.writer.filesystem.AsciiFileWriter.customStoragePath='' kieker.monitoring.writer.filesystem.AsciiFileWriter.storeInJavaIoTmpdir='true' Writer Threads (1): Finished: 'false'; Writing to Directory: '/tmp/kieker-20110511-10095928-UTC-avanhoorn-thinkpad-KIEKER-SINGLETON' \end{lstlisting} A sample monitoring log can be found in the directory \dir{\SigarExampleReleaseDirDistro/testdata/kieker-20110511-10095928-UTC-avanhoorn-thinkpad-KIEKER-SINGLETON/}. \item Analyzing the monitoring data (class \class{AnalysisStarter}): \setBashListing \begin{lstlisting}[caption=Start of the monitoring data analysis under UNIX-like systems] #\lstshellprompt{}# #\textbf{./gradlew}# runAnalysis #\textbf{-Danalysis.directory}#=</path/to/monitoring/log/> \end{lstlisting} \begin{lstlisting}[caption=Start of the monitoring data analysis under Windows] #\lstshellprompt{}# #\textbf{gradlew.bat}# runAnalysis #\textbf{-Danalysis.directory}#=</path/to/monitoring/log/> \end{lstlisting} You need to replace \dir{</path/to/monitoring/log/>} by the location of the file system monitoring log. % You can also use the above-mentioned monitoring log included in the example. 
% The \class{AnalysisStarter} produces a simple console output for each monitoring record, % as shown in the following excerpt: \setBashListing \begin{lstlisting} Wed, 11 May 2011 10:10:01 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 0 ; utilization: 0.00 % Wed, 11 May 2011 10:10:01 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 1 ; utilization: 0.00 % Wed, 11 May 2011 10:10:01 +0000 (UTC): [Mem/Swap] host: thinkpad ; mem usage: 722.0 MB ; swap usage: 0.0 MB Wed, 11 May 2011 10:10:06 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 0 ; utilization: 5.35 % Wed, 11 May 2011 10:10:06 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 1 ; utilization: 1.31 % Wed, 11 May 2011 10:10:06 +0000 (UTC): [Mem/Swap] host: thinkpad ; mem usage: 721.0 MB ; swap usage: 0.0 MB Wed, 11 May 2011 10:10:11 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 0 ; utilization: 1.80 % Wed, 11 May 2011 10:10:11 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 1 ; utilization: 0.20 % Wed, 11 May 2011 10:10:11 +0000 (UTC): [Mem/Swap] host: thinkpad ; mem usage: 721.0 MB ; swap usage: 0.0 MB Wed, 11 May 2011 10:10:16 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 0 ; utilization: 1.40 % Wed, 11 May 2011 10:10:16 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 1 ; utilization: 0.79 % Wed, 11 May 2011 10:10:16 +0000 (UTC): [Mem/Swap] host: thinkpad ; mem usage: 721.0 MB ; swap usage: 0.0 MB Wed, 11 May 2011 10:10:21 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 0 ; utilization: 1.80 % Wed, 11 May 2011 10:10:21 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 1 ; utilization: 0.79 % Wed, 11 May 2011 10:10:21 +0000 (UTC): [Mem/Swap] host: thinkpad ; mem usage: 721.0 MB ; swap usage: 0.0 MB Wed, 11 May 2011 10:10:26 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 0 ; utilization: 0.40 % Wed, 11 May 2011 10:10:26 +0000 (UTC): [CPU] host: thinkpad ; cpu-id: 1 ; utilization: 0.59 % Wed, 11 May 2011 10:10:26 +0000 (UTC): [Mem/Swap] host: thinkpad ; mem usage: 721.0 MB ; swap usage: 0.0 MB \end{lstlisting} \end{compactenum}
{ "alphanum_fraction": 0.7595572146, "avg_line_length": 57.125, "ext": "tex", "hexsha": "a780b6eadc6f7720e6eb26577b84437f942ea9f3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "64c8a74422643362da92bb107ae94f892fa2cbf9", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Zachpocalypse/kieker", "max_forks_repo_path": "kieker-documentation/userguide/Appendix-ch-Sigar.inc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "64c8a74422643362da92bb107ae94f892fa2cbf9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Zachpocalypse/kieker", "max_issues_repo_path": "kieker-documentation/userguide/Appendix-ch-Sigar.inc.tex", "max_line_length": 241, "max_stars_count": 1, "max_stars_repo_head_hexsha": "64c8a74422643362da92bb107ae94f892fa2cbf9", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Zachpocalypse/kieker", "max_stars_repo_path": "kieker-documentation/userguide/Appendix-ch-Sigar.inc.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-01T14:04:34.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-01T14:04:34.000Z", "num_tokens": 2276, "size": 7769 }
\section{Case Analysis, Access Elimination and Read Introduction} \label{sec:refine} The previous section shows the simplicity and beauty of pomsets with preconditions as a model of relaxed memory. In this section we look at some of the complications and ugliness. We consider the following optimizations on relaxed access: case analysis \eqref{CA}, dead store elimination \eqref{DS}, store forwarding \eqref{SF}, read elimination \eqref{RE}, and irrelevant read introduction \eqref{RI}. We do not attempt to validate rewrites that eliminate $\mRA$/$\mSC$ accesses, beyond those already given. \begin{definition} \label{def:cover} Extend the definition of prefixing (Cand.~\ref{def:prefix}) to require: \begin{itemize} \item[{\labeltextsc[P6]{(P6)}{6}}] if $\bEv$ is a release, $\aEv_1$ is an acquire, $\aEv_1\le\aEv_2$, then $\labelingForm(\aEv_2)$ is location independent. \end{itemize} Let $\aPS\in(\relfilt[]{\aLoc} \aPSS)$ when %be the set $\aPSS'\subseteq\aPSS$ such that $\aPS\in\aPSS$ and for every release $\aEv\in\Event$, there is some $\bEv\in\Event$ that writes $\aLoc$ such that $\bEv \le\aEv$. Let $\aPS'\in(\Rdis{\aLoc}{\aVal}\aPSS)$ %be the set $\aPSS'$ where $\aPS'\in\aPSS'$ when there is $\aPS\in\aPSS$ such that $\Event' = \Event$, ${\le'} = {\le}$, $\labelingAct' = \labelingAct$, and either $\labelingForm'(\aEv)$ implies $\labelingForm(\aEv)$ or $\aEv$ is $\le$-minimal\footnote{$\aEv$ is \emph{$\le$-minimal} if there is no $\bEv$ such that $\bEv\le\aEv$.} and $\labelingAct(\aEv)=\DR[\mRLXquiet]{\aLoc}{\aVal}$. \begin{align*} \sem{\aReg\GETS\aLoc^\mRLX\SEMI \aCmd} &\eqdef\;\mathhl{ \sem{\aCmd}[\aLoc/\aReg]}\; \cup \textstyle\bigcup_\aVal\; (\DR[\mRLX]\aLoc\aVal) \prefix \;\mathhl{\Rdis{\aLoc}{\aVal}}\;\, \sem{\aCmd} [\aLoc/\aReg] \\ \sem{\aLoc^\mRLX\GETS\aExp\SEMI \aCmd} & \eqdef \;\mathhl{%\Wdis{\aLoc}{\aExp}\bigl( \relfilt[]{\aLoc} \sem{\aCmd}[\aExp/\aLoc] }\; \cup \textstyle\bigcup_\aVal\; (\aExp=\aVal \mid \DW[\mRLX]\aLoc\aVal) \prefix \sem{\aCmd}[\aExp/\aLoc] \\ \begin{aligned} \sem{\aReg\GETS\aLoc^\amode\SEMI \aCmd} \\ \sem{\aLoc^\amode\GETS\aExp\SEMI \aCmd} \end{aligned} & \begin{aligned} &\eqdef \textstyle\bigcup_\aVal\; (\DRmode\aLoc\aVal) \prefix \sem{\aCmd} [\aLoc/\aReg] &&\textif \amode\neq\mRLX \\ &\eqdef \textstyle\bigcup_\aVal\; (\aExp=\aVal \mid \DWmode\aLoc\aVal) \prefix \sem{\aCmd}[\aExp/\aLoc] &&\textif \amode\neq\mRLX \end{aligned} \end{align*} \end{definition} There are four changes in the definition: To validate read elimination, we include $\sem{\aCmd}[\aLoc/\aReg]$. To ensure that read elimination does not allow stale reads, we require \ref{6}. To validate write elimination, we include $\relfilt[]{\aLoc} \sem{\aCmd}[\aExp/\aLoc]$. To validate case analysis, we apply $\Rdis{\aLoc}{\aVal}$ before prefixing a read. We close this section with a read-enriched semantics that validates irrelevant read introduction. \myparagraph{Read Elimination and Store Forwarding} In our work on microarchitecture \citep{2019-sp}, read actions could be observed using cache effects. Candidate \ref{cand:ord} maintains this perspective---for example, it distinguishes $\sem{r\GETS x}$ and $\sem{\SKIP}$ % since even though there is no context in the language of this paper that can distinguish these programs. If one accepts that these programs should be equated at an architectural level, then one would expect the semantics to validate read elimination \eqref{RE} and store forwarding \eqref{SF}. 
\begin{align*} \taglabel{RE} \sem{\aReg \GETS \aLoc\SEMI\aCmd} & \supseteq \sem{\aCmd} &&\textif \aReg\not\in\free(\aCmd)&&\hbox{} \\ \taglabel{SF} \sem{\aLoc^\amode \GETS \aExp \SEMI \aReg \GETS \aLoc\SEMI\aCmd} &\supseteq \sem{\aLoc^\amode \GETS \aExp \SEMI \aReg \GETS \aExp\SEMI\aCmd} \end{align*} These optimizations are validated by Definition \ref{def:cover}, since $\sem{\aReg\GETS\aLoc^\mRLXquiet\SEMI \aCmd}\supseteq \sem{\aCmd}[\aLoc/\aReg]$. The proof of \ref{SF} also appeals to the definition of write and the definition of register assignment. Let us revisit the internal read examples from \textsection\ref{sec:litmus}. With read elimination, the read action $(\DR{y}{1})$ can be elided in \ref{Internal2}; regardless, the substitution into the write of $z$ is the same. On a more troubling note, the read action $(\DR{x}{1})$ can be also elided in \ref{Internal1}, potentially converting this non-execution into a valid execution, violating \drfsc{}. The addition of \ref{6} to the definition of prefixing prevents this outcome. When computing $\sem{x\GETS1 \SEMI a^\mRA\GETS1 \SEMI \IF{b^\mRA}\THEN y\GETS x \FI}$, \ref{6} prevents prefixing $(\DWRel{a}{1})$ in front of: \begin{gather*} \hbox{\begin{tikzinline}[node distance=1.5em] \event{a6}{\DRAcq{b}{1}}{} \event{a7}{\DR{x}{1}}{right=of a6} \sync{a6}{a7} \event{a8}{x=1\mid\DW{y}{1}}{right=of a7} \graypo{a7}{a8} \sync[out=10,in=170]{a6}{a8} \end{tikzinline}} \end{gather*} In order to satisfy \ref{6}, the precondition of $(\DW{y}{1})$ must be location independent. \myparagraph{Dead Store Elimination} Dead store elimination \eqref{DS} is symmetric to redundant load elimination. \begin{align*} \taglabel{DS} \sem{\aLoc \GETS \aExp \SEMI \aLoc \GETS \bExp\SEMI\aCmd} &\supseteq \sem{\aLoc \GETS \bExp\SEMI\aCmd} \end{align*} The rewrite is less general than \ref{RE} because general store elimination is unsound. For example, ``${x\GETS 0}$'' and ``${x\GETS 0\SEMI x\GETS 1}$'' can be distinguished by the context ``$\hole{}\PAR z\GETS x$''. Using $\relfilt[]{\aLoc}$, \ref{DS} is validated by Definition \ref{def:cover}. A write may only be removed if it is \emph{covered} by a following write. This restriction is sufficient to prevent bad executions such as: \begin{gather*} x\GETS 1\SEMI x\GETS 2\SEMI y^\mRA\GETS 1 \PAR \aReg\GETS y^\mRA \SEMI \bReg\GETS x \\[-1ex] \hbox{\begin{tikzinline}[node distance=1.5em] \event{a1}{\DW{x}{1}}{} \internal{a2}{\DW{x}{2}}{right=of a1} \graywk{a1}{a2} \event{a3}{\DWRel{y}{1}}{right=of a2} \graypo{a2}{a3} \event{b1}{\DRAcq{y}{1}}{right=2em of a3} \rf{a3}{b1} \event{b2}{\DR{x}{1}}{right=of b1} \sync{b1}{b2} \rf[out=10,in=170]{a1}{b2} \sync[out=-10,in=-170]{a1}{a3} \end{tikzinline}} \end{gather*} In this diagram, we have included a ``non-event''---dashed border---to mark the eliminated write. In general, there may need to be many following writes, one for each subsequent release. \myparagraph{Case Analysis} Definition \ref{def:cover} satisfies \emph{disjunction closure}. \begin{definition} \label{def:dis} We say that $\aPS$ is a \emph{disjunct of $\aPS'$ and its downset $\aPS''$} when $\Event=\Event' \supseteq\Event''$, % \supseteq \{ \bEv \in \Event' \mid \exists\aEv\in\Event''.\; \bEv\le\aEv\}$, ${\le}={\le'}\supseteq{\le''}$, $\labelingAct=\labelingAct' \supseteq\labelingAct''$, $\labelingForm(\aEv)$ implies $\labelingForm'(\aEv)\lor \labelingForm''(\aEv)$ if $\aEv\in\Event''$, and $\labelingForm(\aEv)$ implies $\labelingForm'(\aEv)$ otherwise. 
We say that $\aPSS$ is \emph{disjunction closed} if $\aPS\in\aPSS$ whenever there are $\{\aPS',\,\aPS''\}\subseteq \aPSS$ such that $\aPS$ is a disjunct of $\aPS'$ and downset $\aPS''$. \end{definition} Disjunction closure is sufficient to establish case analysis \eqref{CA}: \begin{align*} \taglabel{CA} \sem{\aCmd} &\supseteq \sem{\IF{\aExp}\THEN\aCmd\ELSE\aCmd\FI} \end{align*} The definition of disjunction closure requires that $\aPS''$ is a downset of $\aPS'$, whereas the definition of disjunction makes no such requirement. This requirement is implied by causal strengthening: once you take an event that has been chosen from one side of the conditional---of the form $\aExp\land\ldots$---then all subsequent events must satisfy $\aExp$. Candidate \ref{cand:ord} is not disjunction closed. For example, consider the two sides of the composition defined by the conditional, where $\cmdR = \aReg\GETS x \SEMI \IF{\aExp}\THEN \bReg\GETS x \FI$. \begin{align*} \begin{gathered} \IF{\bExp}\THEN \cmdR \FI \\ \hbox{\begin{tikzinline}[node distance=2em] \eventl{d}{a}{\bExp \mid\DR{x}{0}}{} \eventl{e}{b}{\bExp \land \aExp\mid\DR{x}{0}}{right=of a} \end{tikzinline}} \end{gathered} && \begin{gathered} \IF{\lnot \bExp}\THEN \cmdR \FI \\ \hbox{\begin{tikzinline}[node distance=2em] \eventl{e}{a}{\lnot \bExp \mid\DR{x}{0}}{} \eventl{d}{b}{\lnot \bExp \land \aExp\mid\DR{x}{0}}{right=of a} \end{tikzinline}} \end{gathered} \end{align*} Because the reads are unordered, they can be confused when coalescing, resulting in: \begin{align*} \begin{gathered} \IF{\bExp}\THEN \cmdR \ELSE \cmdR \FI \\ \hbox{\begin{tikzinline}[node distance=2em] \eventl{d}{a}{\bExp\lor (\lnot \bExp \land \aExp) \mid\DR{x}{0}}{} \eventl{e}{b}{(\bExp \land \aExp)\lor \lnot \bExp \mid\DR{x}{0}}{right=of a} \end{tikzinline}} \end{gathered} \end{align*} which is: \begin{align*} \begin{gathered} \hbox{\begin{tikzinline}[node distance=2em] \eventl{d}{a}{\bExp\lor \aExp\mid\DR{x}{0}}{} \eventl{e}{b}{\lnot \bExp\lor \aExp\mid\DR{x}{0}}{right=of a} \end{tikzinline}} \end{gathered} \end{align*} But this pomset does not occur in $\sem{\cmdR}$. Our solution is to weaken the preconditions on reads using $\Rdis{\aLoc}{\aVal}$ so that both $\sem{\cmdR}$ and $\sem{\IF{\bExp}\THEN \cmdR \ELSE \cmdR \FI}$ include: \begin{align*} \begin{gathered} \hbox{\begin{tikzinline}[node distance=2em] \eventl{d}{a}{\DR{x}{0}}{} \eventl{e}{b}{\DR{x}{0}}{right=of a} \end{tikzinline}} \end{gathered} \end{align*} Note that the preconditions on the reads are weaker than one would expect. This is not a problem for reads, since they must also be fulfilled---allowing more reads \emph{increases} the obligations of fulfillment. The same solution would not work for writes---as we discussed at the end of \textsection\ref{sec:props}, allowing more writes is simply unsound. Fortunately, this problem does not occur when prefixing a write in front of another write, due to the order required by \ref{5b}. If \ref{5b} is strengthened to include read-read coherence, then disjunction closure holds without $\Rdis{\aLoc}{\aVal}$. In this case, however, \ref{CSE} fails. This compromise may be reasonable for C11 atomics, which are meant to be used sparingly. It is less attractive for relaxed access in safe languages, like Java.
Consider the following example \cite[\textsection1.4.5]{SevcikThesis}:
\begin{align*} \sem{\IF{\aReg} \THEN \bReg \GETS \aLoc \SEMI \bLoc \GETS \bReg \FI} &\not\supseteq \sem{\bReg \GETS \aLoc \SEMI \IF{\aReg} \THEN \bLoc \GETS \bReg \FI} \end{align*}
The right-hand program is derived from the left by introducing an irrelevant read in the else-branch, then moving the common code out of the conditional. Definition \ref{def:cover} does \emph{not} validate this rewrite. Read introduction is only valid ``modulo irrelevant reads.'' We capture this idea using \emph{read saturation}. Read saturation allows us to add actions of the form $(\DR{x}{v})$ to the left-hand side, validating the inclusion. Let $\aPS'\in\readc(\aPSS)$ when $\exists\aPS\in\aPSS$ and $\exists D$ such that $\Event'= \Event\uplus D$, ${\le'}\supseteq{\le}$, ${\labeling'} \supseteq{\labeling}$, and $\forall \bEv\in D.\;\exists\aLoc.\;\exists\aVal.\; \labelingAct'(\bEv)=(\DR[\mRLXquiet]{\aLoc}{\aVal})$. Note that if $\aPSS\supseteq\aPSS'$, then $\readc(\aPSS)\supseteq \readc(\aPSS')$. Read introduction \eqref{RI} is valid under the saturated semantics.
\begin{align*} \taglabel{RI} \readc\sem{\aCmd} & \supseteq \readc\sem{\aReg \GETS \aLoc\SEMI\aCmd} &&\textif \aReg\not\in\free(\aCmd)&&\hbox{} \end{align*}
With \ref{RI}, the model satisfies all of the transformations of \citet[\textsection 5.3-4]{SevcikThesis} except redundant write after read elimination (see \textsection\ref{sec:limits}) and reordering with external actions, which we do not model.
{ "alphanum_fraction": 0.675227309, "avg_line_length": 41.5165562914, "ext": "tex", "hexsha": "976e4f6be17cdde62bebe6ff98b01a9cf92f0405", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fd606fdb6a04685d9bb0bee61a5641e4623b10be", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "chicago-relaxed-memory/memory-model", "max_forks_repo_path": "corrigendum/refinements.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fd606fdb6a04685d9bb0bee61a5641e4623b10be", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "chicago-relaxed-memory/memory-model", "max_issues_repo_path": "corrigendum/refinements.tex", "max_line_length": 116, "max_stars_count": 3, "max_stars_repo_head_hexsha": "fd606fdb6a04685d9bb0bee61a5641e4623b10be", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "chicago-relaxed-memory/memory-model", "max_stars_repo_path": "corrigendum/refinements.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-25T12:46:13.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-13T02:36:22.000Z", "num_tokens": 4420, "size": 12538 }
\appendix
\chapter{Glossary}
The following terms are used interchangeably throughout the dissertation, so as not to tire the reader by repeating the same term.
\begin{itemize}
    \item Timetable = Schedule = Itinerary
    \item Duty = Shift
    \item Break = Meal-Relief
    \item HGV = Heavy Goods Vehicles = 7.5 tonne lorries
    \item HGV Driver = Employee
    \item Model = Formulation
    \item Maximum Difference \cite{maxdif} = Measure of the difference in the distribution of load between the heaviest and least loaded duties.
\end{itemize}
\vspace{\baselineskip}
\noindent Unless stated otherwise, all \textbf{time} values are presented in (HH:mm) format.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Dataset Findings}
\label{chapter: second appendix}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\vspace{\baselineskip}
\noindent In this section of the appendix, we mention some interesting facts that were observed during the study of the historical schedules but were not deemed important enough to display in the main part of the report. However, we believe that they are useful for reference purposes, which is why we outline them below.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Attributes Featured in the Dataset}
\label{subsection: Eliminated Attributes}
In the following section, we outline a detailed list of the attributes observed in the dataset, as well as a description of the information they provide. As mentioned in Section \ref{section: Data Cleaning} of Chapter \ref{chapter: Problem Definition}, we preserve information from only a handful of them; the rest are not taken into account for the purposes of this project.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Table containing the types of attributes
\begin{table}[ht]
\small
\centering
\begin{tabular}{|l|p{8.3cm}|}
\hline
\multicolumn{1}{|c|}{ \textbf{Attribute}} & \multicolumn{1}{|c|}{ \textbf{Description}} \\ \hline
\texttt{Operator} & Indicates the ID of the operator that structured each duty. \\ \hline
\texttt{Sort\_Order} & Attaches a unique ID to every activity. \\ \hline
\multirow{2}*{\texttt{Duty\_ID}} & Provides each duty with a unique code for identification purposes. \\ \hline
\multirow{2}*{\texttt{Date\_Amended}} & Mentions the date that the particular duty was last modified. \\ \hline
\texttt{Commencement Time} & Start time of each activity. \\ \hline
\texttt{Ending Time} & End time of each activity. \\ \hline
\texttt{Element Type} & Mentions the type of each activity. \\ \hline
\texttt{Element Time} & Contains the duration of each activity. \\ \hline
\texttt{Due to Convey} & Mentions the purpose of each \texttt{travel} activity. \\ \hline
\multirow{2}*{\texttt{Vehicle Type}} & Mentions the type of HGV vehicle utilised for each travel leg. \\ \hline
\texttt{From\_Site} & Contains the start location at which each activity occurs. \\ \hline
\multirow{2}*{\texttt{To\_Site}} & Contains the end location at which each activity is completed. \\ \hline
\multirow{2}*{\texttt{Driver\_Grade}} & Mentions the qualification of the driver undertaking each activity. \\ \hline
\multirow{2}*{\texttt{Leg\_Mileage}} & Contains information about the distance (in miles) of each travel leg.
\\ \hline
\end{tabular}%
\medbreak
\caption{List of the types of attributes featured in the dataset.}
\label{table:Attribute List}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Activities Featured in the Dataset}
\label{section: Appendix Activities Feaure in the Dataset}
The activities listed in Table \ref{table:Activity List} are those observed in the original form of the dataset, as provided to us by Royal Mail. After the implementation of the Data Cleaning procedures of Section \ref{section: Data Cleaning} in Chapter \ref{chapter: Problem Definition}, the list of activities was transformed into its \textbf{Finalised Dataset} form, as seen in Table \ref{table:Final Activity List} of Section \ref{section: Data Cleaning}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Table containing the types of activities
\begin{table}[ht]
\small
\centering
\begin{tabular}{|l|p{8.3cm}|}
\hline
\multicolumn{1}{|c|}{ \textbf{Activity}} & \multicolumn{1}{|c|}{ \textbf{Description}} \\ \hline
\texttt{Start} & Indicates the \textit{beginning} of a duty. \\ \hline
\texttt{End} & Indicates the \textit{end} of a duty. \\ \hline
\texttt{Travel} & The \textit{travel leg} from one location to the next. \\ \hline
\texttt{Load} & The \textit{loading} of mail units before leaving a location. \\ \hline
\multirow{2}*{\texttt{Unload}} & The \textit{offloading} of mail units after arriving at a designated location. \\ \hline
\multirow{2}*{\texttt{Meal-Relief}} & The \textit{meal allowance} break to meet EU \textit{driving time} regulations. \\ \hline
\texttt{Distribution} & Non-essential administrative tasks. \\ \hline
\texttt{Processing} & Non-essential administrative tasks. \\ \hline
\texttt{Park Vehicle} & \textit{Parking} of HGV at end of duty. \\ \hline
\texttt{Check} & Scheduled \textit{servicing} of HGV. \\ \hline
\texttt{Clean} & Scheduled \textit{cleaning} of HGV. \\ \hline
\end{tabular}%
\medbreak
\caption{List of the types of activities, as featured in the dataset.}
\label{table:Activity List}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Starting Times of Duties}
\label{subsection: Appendix Starting times}
As mentioned in the Data Exploration Section \ref{section: Data Exploration}, the duties of each driver tend to start in \textit{clusters}, internally referred to as \textbf{waves}. This is clearly observed in Figure \ref{fig:starting time}, where we have plotted the \textit{starting times} of the duties, sorted in increasing order.
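\vspace{\baselineskip}
\noindent For reference, a plot such as Figure \ref{fig:starting time} can be reproduced with a few lines of Python. The sketch below is illustrative only: it assumes the activity-level data have been exported to a file named \texttt{historical\_schedule.csv} (an assumed name) and that a duty's start time can be taken as the earliest \texttt{Commencement Time} among its activities.
\begin{verbatim}
import pandas as pd
import matplotlib.pyplot as plt

# Load the activity-level dataset (the file name is an assumption).
activities = pd.read_csv("historical_schedule.csv",
                         parse_dates=["Commencement Time"])

# A duty starts at the commencement time of its earliest activity.
duty_starts = (activities.groupby("Duty_ID")["Commencement Time"]
                         .min()
                         .sort_values()
                         .reset_index(drop=True))

# Plotting the sorted start times makes the waves appear as plateaus
# separated by jumps.
plt.plot(duty_starts.index,
         duty_starts.dt.hour + duty_starts.dt.minute / 60)
plt.xlabel("Duty (sorted by start time)")
plt.ylabel("Start time (hour of day)")
plt.show()
\end{verbatim}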
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.46\linewidth]{appendix/Appendix-Start-wave.png}
\end{center}
\caption{Plot of the starting times of duties, illustrating the wave-like fashion in which duties start.}
\label{fig:starting time}
\end{figure}

\vspace{\baselineskip}
\noindent Using Figure \ref{fig:starting time} as our guide, we manually characterised each cluster of starting times as a \textbf{wave instance}, splitting our overall dataset into the \textbf{three wave sub-instances} of Table \ref{table:Starting Waves}, as outlined in Section \ref{section: Wave Instances - Data}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[ht]
\small
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Wave} & \multicolumn{4}{|c|}{ \textbf{Characteristics}} \\ \hline
 & \textit{Start time} & \textit{End time} & \textit{Timespan} & \textit{Number of Duties} \\ \hline
\texttt{morning} & 4:00 AM & 6:00 AM & 2 hours & 60 \\ \hline
\texttt{afternoon} & 9:00 AM & 4:00 PM & 7 hours & 60 \\ \hline
\texttt{night} & 8:00 PM & 12:40 AM & 4 hours, 40 minutes & 63 \\ \hline
\end{tabular}%
\medbreak
\caption{Table outlining the start time, end time, timespan and number of duties of each wave. Time values are stated in (HH:mm) AM/PM units.}
\label{table:Starting Waves}
\end{table}

\vspace{\baselineskip}
\noindent As we can see in Table \ref{table:Starting Waves}, the waves have practically the same number of duties; however, they differ considerably in their timespan. The \texttt{morning} wave is around half the length of the \texttt{night} wave, and the \texttt{afternoon} wave is almost as long as the \texttt{morning} and \texttt{night} waves combined.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Number of Daily Duties}
We thought it would be interesting to see how many duties occur on each day of the week. This information was deduced from our dataset, which contains one week's worth of duties. We plotted the number of duties occurring each day in Figure \ref{fig: Number of shifts per day}. The findings from this plot are consequently indicative of the number of HGV drivers required to carry out those duties each week. Hence, we can infer what Royal Mail's overall fleet of drivers looks like in a given week, since each driver performs one duty per day.

\vspace{\baselineskip}
\noindent We can observe from the figure that Royal Mail requires, on average, around 60 drivers to be active on each day of the week. Understandably, considerably fewer drivers are required to carry out the weekend shifts, since there is not much activity over the weekend.
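\vspace{\baselineskip}
\noindent The daily counts behind Figure \ref{fig: Number of shifts per day} can be obtained with a simple aggregation. The sketch below is again only illustrative; it reuses the assumed activity-level file from the previous sketch and assumes that the duty start timestamps carry the calendar date of the duty, which is how the day of the week is recovered here.
\begin{verbatim}
import pandas as pd
import matplotlib.pyplot as plt

activities = pd.read_csv("historical_schedule.csv",
                         parse_dates=["Commencement Time"])
duty_starts = activities.groupby("Duty_ID")["Commencement Time"].min()

# Count how many duties start on each day of the week.
order = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
duties_per_day = duty_starts.dt.day_name().value_counts().reindex(order)

duties_per_day.plot(kind="bar")
plt.ylabel("Number of duties")
plt.show()
\end{verbatim}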
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.46\linewidth]{appendix/shift per day.png}
\end{center}
\caption{Plot of the number of shifts occurring each day.}
\label{fig: Number of shifts per day}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Operations on Historical Schedules}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Redefining Historical Schedules}
\label{subsection: redefine appexnix}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[ht]
\centering
\subfloat[\textit{Duty lengths} sorted in increasing order, showing the effect of deleting the \textbf{non-useful} activities.]{%
\includegraphics[width=0.46\linewidth]{[1] - chapter/Image Files/1-Effect-of-Redefined.png}%
}
\qquad
\centering
\subfloat[Histogram showing the effect of redefining the dataset.]{%
\includegraphics[width=0.46\linewidth]{[1] - chapter/Image Files/1-Effect-of-Redefined-histogram.png}%
}
\caption{Figures illustrating the effect of redefining the dataset by deleting the \textbf{non-useful} activities from the Historical Schedule.}%
\label{fig: Redefined Historical.}%
\end{figure}

In this section we discuss the effects of the operation carried out in Section \ref{section: Redefined Dataset} regarding the deletion of \textbf{non-useful} activities from the original dataset. As one can see in Figure \ref{fig: Redefined Historical.}(a), there is a step-change reduction in the overall time to be scheduled in the Historical dataset once the non-useful activities are deleted. Subsequently, Figure \ref{fig: Redefined Historical.}(b) shows the same effect in a histogram. The deletion of the non-useful time is observed as a horizontal shift of the histogram to the left, signifying that fewer overall hours are contained in the schedule.

\vspace{\baselineskip}
\noindent The operation of deleting the activities has a direct impact on the structure of the blocks. Namely, blocks that contain such \textbf{non-useful} activities will see their duration decreased.
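\vspace{\baselineskip}
\noindent A minimal sketch of this deletion operation is shown below. It reuses the assumed activity-level file from the earlier sketches, treats \texttt{Distribution}, \texttt{Processing}, \texttt{Check} and \texttt{Clean} as the \textbf{non-useful} activity types (an illustrative choice based on Table \ref{table:Activity List}, not the exact rule of Section \ref{section: Redefined Dataset}), and assumes that \texttt{Element Time} holds each activity's duration as a number of minutes.
\begin{verbatim}
import pandas as pd

activities = pd.read_csv("historical_schedule.csv")

# Illustrative set of non-useful activity types.
NON_USEFUL = {"Distribution", "Processing", "Check", "Clean"}

redefined = activities[~activities["Element Type"].isin(NON_USEFUL)].copy()

# Compare per-duty scheduled time before and after the deletion;
# "Element Time" is assumed to be a numeric duration in minutes.
before = activities.groupby("Duty_ID")["Element Time"].sum()
after = redefined.groupby("Duty_ID")["Element Time"].sum()
print((before - after.reindex(before.index, fill_value=0)).describe())
\end{verbatim}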
This is seen more practically in the following table:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[ht]
\small
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Instance} & \multicolumn{3}{|c|}{ \textbf{Blocks (HH:mm)}} \\ \hline
 & \texttt{Average} & \texttt{Minimum} & \texttt{Maximum} \\ \hline
Historical & 03:05 & 00:40 & 08:25 \\ \hline
Morning & 03:28 & 00:50 & 08:25 \\ \hline
Afternoon & 03:05 & 01:10 & 07:20 \\ \hline
Night & 02:52 & 00:40 & 07:30 \\ \hline
\end{tabular}%
\medbreak
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Comparison of Disturbed Nominal and Optimised Schedules}
\label{subsection: Appendix Comparison of Disturbed Nominal and Optimised Schedules}
In Figure \ref{fig: Nominal Uncertainty Sets Effects.}, our goal is to determine the robustness of the nominal schedule to uncertainty. To identify its level of robustness, we compare, for each of the three uncertainty sets, the disturbed instance optimised under uncertainty with the nominal schedule disturbed by the same instance. There are two such cases for each uncertainty set: instances whose overall scheduled time has been \texttt{reduced} after the application of uncertainty, and those whose overall scheduled time has been \texttt{augmented}. The former are presented in Figure \ref{fig: Nominal Uncertainty Sets Effects.}(a), while the latter are presented in Figure \ref{fig: Nominal Uncertainty Sets Effects.}(b).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Double Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[ht]
\centering
\subfloat[Disturbed instances with a \textbf{decrease} in overall labor time (\texttt{reduced}).]{%
\includegraphics[width=0.46\linewidth]{appendix/Comparison_Of_Uncertainty_Sets_min_and_nominal.png}%
}
\qquad
\centering
\subfloat[Disturbed instances with an \textbf{increase} in overall labor time (\texttt{augmented}).]{%
\includegraphics[width=0.46\linewidth]{appendix/Comparison_Of_Uncertainty_Sets_max_and_nominal.png}%
}
\caption{The histograms provide an overview of the effects of the various uncertainty sets on $Duty$ lengths.}
\label{fig: Nominal Uncertainty Sets Effects.}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Supporting Notes}
Tables showing the characteristics of Schedules throughout the report provide measurements of length of time in (HH:mm) units, unless otherwise stated.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{EU Directives for HGV Drivers}
\label{section: EU rules}
This section makes reference to the European Union (EU) rules on drivers' hours and working time, as dictated by the Department for Transport (DfT). This is an important real-life aspect of our problem that is mentioned and referred to at various points in the report.
%link: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/856360/simplified-guidance-eu-drivers-hours-working-time-rules.pdf
\begin{enumerate}[label=\textbf{(\arabic*)}]
\item \textbf{\underline{Driving-time Directive}: }
\vspace{\baselineskip} \noindent
\begin{enumerate}[label=\roman*]
    \item \underline{Time Limit:}
    \begin{itemize}
        \item 9 hours daily driving limit.
        \item Maximum of 56 hours weekly driving limit.
        \item Maximum of 90 hours fortnightly driving limit.
    \end{itemize}
\vspace{\baselineskip} \noindent
    \item \underline{Break:}
    \begin{itemize}
        \item A 45-minute break after 4.5 hours of driving.
    \end{itemize}
\end{enumerate}
\item \textbf{\underline{Working-time Directive}: }
\vspace{\baselineskip} \noindent
\begin{enumerate}[label=\roman*]
    \item \underline{Time Limit:}
    \begin{itemize}
        \item Working time must not exceed an average of 48 hours a week.
        \item Maximum working time of 60 hours in one week.
        \item Maximum working time of 10 hours if night work is performed.
    \end{itemize}
\vspace{\baselineskip} \noindent
    \item \underline{Break:}
    \begin{itemize}
        \item Cannot work for more than 6 hours without a break; a break should be at least 15 minutes long.
        \item A 30-minute break if working between 6 and 9 hours in total.
        \item A 45-minute break if working more than 9 hours in total.
    \end{itemize}
\end{enumerate}
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Relaxation of a Mathematical Program}
\label{section: Appednix Relaxation}
In order to exploit the efficiency of the simplex algorithm and apply it to solve MILPs, we need to relax the integer linear program into an LP. The relaxation of a mathematical program is defined as follows. Consider the original problem
\begin{equation}
\begin{aligned}
& \underset{x}{\text{minimise}} & & f(x) \\
& \text{subject to} & & h_i(x) = 0 \\
& & & g_j(x) \leq 0 \\
\end{aligned}
\end{equation}
\[\text{where} \; x \in S_{original}\]
\noindent with the corresponding \textbf{relaxed} version of the problem,\par
\begin{equation}
\begin{aligned}
& \underset{x}{\text{minimise}} & & f(x) \\
& \text{subject to} & & h_i(x) = 0 \\
& & & g_j(x) \leq 0 \\
\end{aligned}
\end{equation}
\[\text{where} \; x \in S_{relaxed} \; \text{and} \; S_{original} \subseteq S_{relaxed}\]
\vspace{\baselineskip}
\noindent The relaxation of a problem is usually obtained by removing one or more constraints of the original formulation. For example, when obtaining the relaxation of an integer program, we usually refer to the process of dropping the integrality constraints on the integer program's decision variable(s), transforming the problem into a standard LP.
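\vspace{\baselineskip}
\noindent To make the distinction concrete, the short sketch below solves a small integer program and its LP relaxation using the open-source PuLP modelling library. The objective and constraints are made up purely for illustration and are unrelated to the scheduling formulations used elsewhere in this report; the relaxation is obtained simply by declaring the decision variables as continuous instead of integer.
\begin{verbatim}
import pulp

def build_problem(integral):
    # Toy maximisation problem; integral=False drops the integrality
    # constraints, i.e. it builds the LP relaxation.
    cat = pulp.LpInteger if integral else pulp.LpContinuous
    prob = pulp.LpProblem("toy_example", pulp.LpMaximize)
    x = pulp.LpVariable("x", lowBound=0, cat=cat)
    y = pulp.LpVariable("y", lowBound=0, cat=cat)
    prob += 5 * x + 4 * y          # objective f(x)
    prob += 6 * x + 4 * y <= 24    # constraint g_1(x) <= 0
    prob += x + 2 * y <= 6         # constraint g_2(x) <= 0
    return prob, x, y

for integral in (True, False):
    prob, x, y = build_problem(integral)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("integer" if integral else "relaxed",
          pulp.value(prob.objective), x.value(), y.value())
\end{verbatim}
\noindent Since $S_{original} \subseteq S_{relaxed}$, the relaxed optimum can never be worse than the integer optimum; this bound is exactly what branch-and-bound style MILP solvers exploit.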
{ "alphanum_fraction": 0.6077050539, "avg_line_length": 51.6363636364, "ext": "tex", "hexsha": "0b800fe89e094c59741620cb189d02f561508944", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6943f5fc406891ae1635e42dff6e7fba28a2bffc", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "liaskast/Final-Year-Project", "max_forks_repo_path": "Report/.tex files/appendix/appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6943f5fc406891ae1635e42dff6e7fba28a2bffc", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "liaskast/Final-Year-Project", "max_issues_repo_path": "Report/.tex files/appendix/appendix.tex", "max_line_length": 740, "max_stars_count": 3, "max_stars_repo_head_hexsha": "6943f5fc406891ae1635e42dff6e7fba28a2bffc", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "liaskast/Final-Year-Project", "max_stars_repo_path": "Report/.tex files/appendix/appendix.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-26T14:12:36.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-21T21:22:10.000Z", "num_tokens": 4590, "size": 19312 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % % COMIS - Reference Manual -- LaTeX Source % % % % Front Material: Title page, % % Copyright Notice % % Preliminary Remarks % % Table of Contents % % EPS file : cern15.eps, cnastit.eps % % % % Editor: Michel Goossens / CN-AS % % Last Mod.: 17 Aug 1993 18:20 mg % % % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Tile page % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %begin{latexonly} \def\Ptitle#1{\special{ps: /Printstring (#1) def} \epsfbox{cnastit.eps}} \begin{titlepage} \vspace*{-23mm} \includegraphics[height=30mm]{cern15.eps}% \hfill \raisebox{8mm}{\Large\bf CERN Program Library Long Writeups L210} \hfill\mbox{} \begin{center} \mbox{}\\[10mm] \mbox{\Ptitle{COMIS}}\\[2cm] {\LARGE Compilation and Interpretation System}\\[1cm] {\LARGE Reference Manual}\\[2cm] {\LARGE Version 2.}\\[3cm] {\Large Application Software and Databases}\\[1cm] {\Large Computing and Networks Division}\\[2cm] \end{center} \end{titlepage} %end{latexonly} \begin{htmlonly} \begin{center}{\Large\bf CERN Program Library Long Writeup L210}\\[5mm] {\Huge COMIS}\\[5mm] {\Large Compilation and Interpretation System}\\[5mm] {\LARGE Reference Manual}\\[5mm] {\LARGE Version 2.}\\[5mm] {\Large Application Software and Databases}\\[1cm] {\Large Computing and Networks Division}\\[5mm] {\Large CERN Geneva, Switzerland}\\[5mm] \end{center} \begin{rawhtml} <HR> <H3><A href="http://wwwinfo.cern.ch/asd/cernlib/comis/comis.ps"> PostScript version of this manual</A></H3> <HR> \end{rawhtml} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Copyright page % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{htmlonly} \chapter{Copyright Notice} \end{htmlonly} %begin{latexonly} \thispagestyle{empty} \framebox[\textwidth][t]{\hfill\begin{minipage}{0.96\textwidth}% \vspace*{3mm}\begin{center}Copyright Notice\end{center} \parskip\baselineskip %end{latexonly} {\bf COMIS -- Compilation and Interpretation System} \par CERN Program Library entry {\bf L210} \par \copyright{} Copyright CERN, Geneva 1994--1998 \par Copyright and any other appropriate legal protection of these computer programs and associated documentation reserved in all countries of the world. \par These programs or documentation may not be reproduced by any method without prior written consent of the Director-General of CERN or his delegate. \par Permission for the usage of any programs described herein is granted apriori to those scientific institutes associated with the CERN experimental program or with whom CERN has concluded a scientific collaboration agreement. \par Requests for information should be addressed to: %begin{latexonly} \vspace*{-.5\baselineskip} \begin{center} \tt\begin{tabular}{l} CERN Program Library Office \\ CERN-IT Division \\ CH-1211 Geneva 23 \\ Switzerland \\ Tel. +41 22 767 4951 \\ Fax. 
+41 22 767 8630 \\ Internet: [email protected]
\end{tabular}
\end{center}
\vspace*{2mm}
\end{minipage}\hfill}%end of minipage in framebox
\vspace{6mm}
%end{latexonly}
\begin{htmlonly}
\par
\begin{flushleft}
CERN Program Library Office \\ CERN-IT Division \\ CH-1211 Geneva 23 \\ Switzerland \\ Tel.: +41 22 767 4951 \\ Fax.: +41 22 767 8630 \\ Internet: \texttt{[email protected]}
\end{flushleft}
\par
\end{htmlonly}
%begin{latexonly}
{\bf Trademark notice: All trademarks appearing in this guide are acknowledged as such.}
\vfill
\begin{tabular}{l@{\quad}l@{\quad}>{\tt}l}
\emph{Contact Person}: & Vladimir Berezhnoi /EP & \texttt{[email protected]}\\[1mm]
\emph{Documentation consultant}: & Michel Goossens /CN &(goossens\atsign cern.ch)\\[1cm]
{\em Edition -- August 1998}
\end{tabular}
%end{latexonly}
\begin{htmlonly}
{\bf Trademark notice: All trademarks appearing in this guide are acknowledged as such.}
\begin{tabular}{lll}
\emph{Contact Person}: & Vladimir Berezhnoi /EP & \texttt{[email protected]}\\
\emph{Documentation consultant}: & Michel Goossens /IT & \texttt{[email protected]}\\
\emph{Edition -- August 1998}
\end{tabular}
\end{htmlonly}
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Introductory material %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%begin{latexonly}
\pagenumbering{roman}
\setcounter{page}{1}
\section*{Preliminary remarks}
%end{latexonly}
\begin{htmlonly}
\chapter{Foreword}
\end{htmlonly}
This manual serves both as a {\bf Reference Manual} and as a {\bf User Guide} for the \COMIS{} system.
Historically, the following people from IHEP (Institute for High Energy Physics, Moscow Region, Russia) have worked on the \COMIS{} system: V.~Bereshnoi, S.~Nikitin, Y.~Petrovych and V.~Sikolenko. At CERN, Ren\'e Brun has contributed to the development of the system.
In this manual, examples are in {\tt monotype face} and strings to be input by the user are \Ucom{underlined}. In the index, the page where a routine is defined is in {\bf bold}; page numbers where a routine is referenced are in normal type. In the description of the routines, a \Lit{*} following the name of a parameter indicates that this is an {\bf output} parameter. If another \Lit{*} precedes a parameter in the calling sequence, the parameter in question is both an {\bf input} and an {\bf output} parameter.
%begin{latexonly}
This document has been produced using \LaTeX\footnote{Leslie Lamport, ``\LaTeX, A Document Preparation System'', Addison-Wesley, 1986} with the \Lit{cernman} style option, developed at CERN. A compressed PostScript file \Lit{comis.ps.Z}, containing a complete printable version of this manual, can be obtained from any CERN machine by anonymous ftp as follows (commands to be typed by the user are underlined):
\vspace*{3mm}
\begin{alltt}
\underline{ftp asis01.cern.ch}
Trying 128.141.201.136...
Connected to asis01.cern.ch.
220 asis01 FTP server (Version 6.10 ...) ready.
Name (asis01:username): \underline{anonymous}
Password: \underline{your\_{}mailaddress}
230 Guest login ok, access restrictions apply.
ftp> \underline{cd cernlib/doc/ps.dir}
ftp> \underline{get comis.ps.Z}
ftp> \underline{quit}
\end{alltt}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Tables of contents ... %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\tableofcontents
%end{latexonly}
{ "alphanum_fraction": 0.5679566768, "avg_line_length": 37.4801980198, "ext": "tex", "hexsha": "14e5c383a30573399313566e07a3731bed87972d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_path": "comis/comfront.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_path": "comis/comfront.tex", "max_line_length": 105, "max_stars_count": 1, "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_path": "comis/comfront.tex", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "num_tokens": 1830, "size": 7571 }